The name Kubernetes comes from Greek, meaning "helmsman" or "pilot". The abbreviation k8s comes from the eight letters between the "k" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes builds on more than a decade of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.
As a result, virtualization was introduced. Virtualization lets you run multiple virtual machines (VMs) on a single physical server's CPU. It isolates applications from one another across VMs and provides a level of security, since one application's data cannot be freely accessed by another. Virtualization makes better use of a physical server's resources, and because applications can easily be added or updated, it offers better scalability, lower hardware costs, and more. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines. Each VM is a full machine running all the components, including its own operating system, on top of virtualized hardware.
The container deployment era:
Containers are similar to VMs, but their relaxed isolation properties let them share the operating system (OS). Containers are therefore considered more lightweight than VMs. Like a VM, each container has its own filesystem, share of CPU, memory, process space, and more. Because they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
# Disable swap on Ubuntu: in /etc/fstab, append "noauto" to the options of the
# swap partition line (so "sw" becomes "sw,noauto"), comment out the trailing
# /swap.img line, then save and reboot.
root@web-es-01:~# vim /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/disk/by-uuid/406825f5-6869-44eb-8446-34e069238059 none swap sw,noauto 0 0
# / was on /dev/sda4 during curtin installation
/dev/disk/by-uuid/33c36444-4b88-4757-af6f-185b1464112b / ext4 defaults 0 0
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/e363ba39-1b8d-45d3-9196-3be2d9f97722 /boot ext4 defaults 0 0
#/swap.img none swap sw 0 0
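The two fstab edits above can also be scripted instead of done in vim. A minimal sketch, assuming an fstab shaped like the one shown; it works on a copy so you can review the diff before overwriting the real /etc/fstab:

```shell
# Work on a copy of /etc/fstab so the real file is only replaced after review.
cp /etc/fstab /tmp/fstab.new

# Append ",noauto" to the options of the swap partition entry (sw -> sw,noauto).
sed -i 's/\(swap[[:space:]]\+sw\)\([[:space:]]\)/\1,noauto\2/' /tmp/fstab.new

# Comment out the /swap.img line, if present.
sed -i 's|^/swap\.img|#/swap.img|' /tmp/fstab.new

# Review the changes before copying the file back over /etc/fstab.
diff /etc/fstab /tmp/fstab.new || true
```

Note this only makes the change persistent; `swapoff -a` is still needed (or a reboot) for the running system to drop swap.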
root@web-es-01:~# kubeadm init \
  --apiserver-advertise-address=192.168.137.61 \
  --apiserver-bind-port=6443 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.15 \
  --service-cidr=10.100.0.0/16 \
  --service-dns-domain=dklwj.local \
  --pod-network-cidr=10.200.0.0/16 \
  --ignore-preflight-errors=swap
....
To start using your cluster, you need to run the following as a regular user:
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.137.61:6443 --token e7gnqt.cd28ppcup7sh35ee \
    --discovery-token-ca-cert-hash sha256:e219bc1c8fd0c09c5c0245509463dff6b3d4aa313511f00d9da9a0ff7061049a

# Create the directory and copy the admin kubeconfig into it
root@web-es-01:~# mkdir /root/.kube -p
root@web-es-01:~# cp -i /etc/kubernetes/admin.conf /root/.kube/config
root@web-es-01:~# kubectl get node
NAME        STATUS   ROLES                  AGE    VERSION
web-es-01   Ready    control-plane,master   108s   v1.20.15
root@web-node-06:~# kubeadm join 192.168.137.61:6443 --token e7gnqt.cd28ppcup7sh35ee \
    --discovery-token-ca-cert-hash sha256:e219bc1c8fd0c09c5c0245509463dff6b3d4aa313511f00d9da9a0ff7061049a

# Images required on the worker node
root@web-node-06:~# docker images
REPOSITORY                                           TAG        IMAGE ID       CREATED        SIZE
rancher/mirrored-flannelcni-flannel                  v0.17.0    9247abf08677   3 months ago   59.8MB
rancher/mirrored-flannelcni-flannel-cni-plugin       v1.0.1     ac40ce625740   4 months ago   8.1MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.20.15   46e2cd1b2594   4 months ago   99.7MB
registry.aliyuncs.com/google_containers/pause        3.2        80d28bedfe5d   2 years ago    683kB
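If the bootstrap token above has expired, or the --discovery-token-ca-cert-hash value has been lost, both can be regenerated on the master. A sketch, assuming the default kubeadm PKI path /etc/kubernetes/pki/ca.crt:

```shell
# Recompute the discovery token CA cert hash from the control plane's CA cert.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | sed 's/^.* //'

# Or simply let kubeadm mint a new token and print the complete join command.
kubeadm token create --print-join-command
```

The first pipeline extracts the CA public key, DER-encodes it, and hashes it with SHA-256, which is exactly the value kubeadm expects after the `sha256:` prefix.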
2.2.4. View all nodes
root@web-es-01:~# kubectl get node
NAME          STATUS   ROLES                  AGE     VERSION
web-es-01     Ready    control-plane,master   6h13m   v1.20.15
web-node-06   Ready    <none>                 29m     v1.20.15
# Initialize the cluster
root@web-es-01:~# kubeadm init \
  --apiserver-advertise-address=192.168.137.61 \
  --control-plane-endpoint=192.168.137.188 \
  --apiserver-bind-port=6443 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.15 \
  --service-cidr=10.200.0.0/16 \
  --service-dns-domain=dklwj.local \
  --pod-network-cidr=10.100.0.0/16 \
  --ignore-preflight-errors=swap
...
To start using your cluster, you need to run the following as a regular user:
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:
# Before joining, generate a certificate key on the first master with
# "kubeadm init phase upload-certs --upload-certs"
root@web-es-01:~# kubeadm init phase upload-certs --upload-certs
I0612 18:34:34.001592   79845 version.go:254] remote version is much newer: v1.24.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: b924a5671585aa7c49c99077e5ffd56ca64441507d1bf24662e2bff982ebc86d

# Join master 02 to the highly available control plane
root@web-es-02:~# kubeadm join 192.168.137.188:6443 --token u734bi.xm1559adyu8ku0p0 \
    --discovery-token-ca-cert-hash sha256:4327a68caa095b01ce1f2a5f18c5a1da47170e09724c9336e0411f8db5c5f310 \
    --control-plane --certificate-key b924a5671585aa7c49c99077e5ffd56ca64441507d1bf24662e2bff982ebc86d
......
To start administering your cluster from this node, you need to run the following as a regular user:
Run 'kubectl get nodes' to see this node join the cluster.

# Join master 03 to the highly available control plane
root@web-es-03:~# kubeadm join 192.168.137.188:6443 --token u734bi.xm1559adyu8ku0p0 \
    --discovery-token-ca-cert-hash sha256:4327a68caa095b01ce1f2a5f18c5a1da47170e09724c9336e0411f8db5c5f310 \
    --control-plane --certificate-key b924a5671585aa7c49c99077e5ffd56ca64441507d1bf24662e2bff982ebc86d
# Check the master nodes from 01
root@web-es-01:~# kubectl get node
NAME        STATUS   ROLES                  AGE     VERSION
web-es-01   Ready    control-plane,master   101m    v1.20.15
web-es-02   Ready    control-plane,master   84m     v1.20.15
web-es-03   Ready    control-plane,master   2m27s   v1.20.15
root@web-node-06:~# kubeadm join 192.168.137.188:6443 --token u734bi.xm1559adyu8ku0p0 \
    --discovery-token-ca-cert-hash sha256:4327a68caa095b01ce1f2a5f18c5a1da47170e09724c9336e0411f8db5c5f310
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Check the state of all nodes from a master
root@web-es-01:~# kubectl get node
NAME          STATUS   ROLES                  AGE    VERSION
web-es-01     Ready    control-plane,master   106m   v1.20.15
web-es-02     Ready    control-plane,master   89m    v1.20.15
web-es-03     Ready    control-plane,master   7m8s   v1.20.15
web-node-06   Ready    <none>                 27s    v1.20.15
# Check the validity period of the certificates in the current cluster
root@k8s-master01:~/pod# kubeadm alpha certs check-expiration
Command "check-expiration" is deprecated, please use the same command under "kubeadm certs"
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jun 15, 2023 02:53 UTC   358d                                    no
apiserver                  Jun 15, 2023 02:53 UTC   358d            ca                      no
apiserver-etcd-client      Jun 15, 2023 02:53 UTC   358d            etcd-ca                 no
apiserver-kubelet-client   Jun 15, 2023 02:53 UTC   358d            ca                      no
controller-manager.conf    Jun 15, 2023 02:53 UTC   358d                                    no
etcd-healthcheck-client    Jun 15, 2023 02:53 UTC   358d            etcd-ca                 no
etcd-peer                  Jun 15, 2023 02:53 UTC   358d            etcd-ca                 no
etcd-server                Jun 15, 2023 02:53 UTC   358d            etcd-ca                 no
front-proxy-client         Jun 15, 2023 02:53 UTC   358d            front-proxy-ca          no
scheduler.conf             Jun 15, 2023 02:53 UTC   358d                                    no
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jun 12, 2032 02:53 UTC   9y              no
etcd-ca                 Jun 12, 2032 02:53 UTC   9y              no
front-proxy-ca          Jun 12, 2032 02:53 UTC   9y              no
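The same expiry dates can be cross-checked directly with openssl, independent of kubeadm. A sketch, assuming the default kubeadm PKI layout under /etc/kubernetes/pki:

```shell
# Print the notAfter date of every certificate in the kubeadm PKI directories.
for crt in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
  printf '%-55s ' "$crt"
  openssl x509 -in "$crt" -noout -enddate
done
```

Each line ends with `notAfter=<date>`, which should match the EXPIRES column in the tables above.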
# Certificates are valid for one year by default and must be renewed after a year
root@k8s-master01:~/pod# kubeadm alpha certs renew all
Command "all" is deprecated, please use the same command under "kubeadm certs"
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

# After renewing, restart the affected services; before restarting a server,
# it is best to take it out of the load balancer first.
# Once done, check the certificate expiry dates again:
root@k8s-master01:~/pod# kubeadm alpha certs check-expiration
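On a kubeadm cluster these components run as static pods, so one common way to restart them, sketched here under the assumption of the default manifest directory /etc/kubernetes/manifests, is to move each manifest out of the watched directory and back; the kubelet then tears the pod down and recreates it with the renewed certificates:

```shell
# Moving a static pod manifest out of the kubelet's watched directory stops
# the pod; moving it back recreates it. Do this one master at a time.
mkdir -p /tmp/manifests-backup
for m in kube-apiserver kube-controller-manager kube-scheduler etcd; do
  mv /etc/kubernetes/manifests/$m.yaml /tmp/manifests-backup/
  sleep 20   # give the kubelet time to notice the change and stop the pod
  mv /tmp/manifests-backup/$m.yaml /etc/kubernetes/manifests/
done
```

This avoids a full node reboot; as the text notes, drain the master from the load balancer before cycling its components.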
Available Commands:
  apply       Upgrade your Kubernetes cluster to the specified version
  diff        Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
  node        Upgrade commands for a node in the cluster
  plan        Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter
root@k8s-master01:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.6
[upgrade/versions] kubeadm version: v1.20.15
I0623 10:18:24.386912 1144148 version.go:254] remote version is much newer: v1.24.2; falling back to: stable-1.20
[upgrade/versions] Latest stable version: v1.20.15
[upgrade/versions] Latest stable version: v1.20.15
[upgrade/versions] Latest version in the v1.20 series: v1.20.15
[upgrade/versions] Latest version in the v1.20 series: v1.20.15
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     6 x v1.20.6   v1.20.15
Upgrade to the latest version in the v1.20 series:
The table below shows the current state of component configs as understood by this version of kubeadm. Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

# Before upgrading, pull the images for the target version in advance; the
# upgrade then runs quickly because nothing needs to be downloaded from the
# internet mid-upgrade.
root@k8s-master01:~# cat images-download.sh
#!/bin/bash
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.20.15
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.15
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.15
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.15
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.aliyuncs.com/google_containers/coredns:1.7.0
docker pull registry.aliyuncs.com/google_containers/pause:3.2
root@k8s-master01:~# bash images-download.sh
root@k8s-master02:~# bash images-download.sh
root@k8s-master03:~# bash images-download.sh
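After running the script on each master, you may want to confirm that every image actually landed in the local cache before starting the upgrade. A small sketch that extracts the image references from images-download.sh above and checks each one:

```shell
# Flag any image from images-download.sh that is missing from the local cache.
for img in $(grep -o 'registry\.aliyuncs\.com[^ ]*' images-download.sh); do
  docker image inspect "$img" >/dev/null 2>&1 \
    && echo "present: $img" \
    || echo "missing: $img"
done
```

Any "missing" line means that image would still be fetched from the internet during `kubeadm upgrade apply`.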