Quick Start: Installing k8s

Introduction to Kubernetes

The name Kubernetes originates from Greek, meaning "helmsman" or "pilot". The abbreviation k8s comes from the eight letters between the "k" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes builds on more than a decade of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.

A look back in time

Let's review why Kubernetes came to stand out.

The traditional deployment era:

Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications running on a physical server, which caused resource allocation problems. For example, if multiple applications ran on the same physical server, one application could take up most of the resources and the other applications would underperform. One solution was to run each application on its own physical server, but resources went unused whenever an application was underutilized, and maintaining many physical servers was expensive.

The virtualized deployment era:

As a result, virtualization was introduced. Virtualization lets you run multiple virtual machines (VMs) on a single physical server's CPU. It isolates applications between VMs and provides a level of security, since one application's information cannot be freely accessed by another.
Virtualization makes better use of a physical server's resources, improves scalability because applications can be added or updated easily, reduces hardware costs, and more. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines.
Each VM is a complete machine running all the components, including its own operating system, on top of the virtualized hardware.

The container deployment era:

Containers are similar to VMs, but their relaxed isolation properties let containers share the operating system (OS). Containers are therefore considered more lightweight than VMs. Like a VM, each container has its own filesystem, share of CPU, memory, process space, and so on. Because they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers have become popular because they offer many advantages, for example:

  • Agile application creation and deployment: container images are easier and more efficient to create than VM images.

  • Continuous development, integration, and deployment: reliable and frequent container image builds and deployments, with quick and simple rollbacks (thanks to image immutability).

  • Separation of development and operations concerns: application container images are created at build/release time rather than deployment time, decoupling applications from infrastructure.

  • Observability: surfaces not only OS-level information and metrics, but also application health and other signals.

  • Environmental consistency across development, testing, and production: an application runs the same on a laptop as it does in the cloud.

  • Portability across clouds and OS distributions: runs on Ubuntu, RHEL, CoreOS, on-premises, Google Kubernetes Engine, and anywhere else.

  • Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.

  • Loosely coupled, distributed, elastic, liberated microservices: applications are broken into smaller, independent pieces that can be deployed and managed dynamically, rather than running as a monolith on one large single-purpose machine.

  • Resource isolation: predictable application performance.

  • Resource utilization: high efficiency and density.

Architecture plan

Hostname          OS version      Address
k8s-master-01     Ubuntu 20.04    192.168.137.61
k8s-master-02     Ubuntu 20.04    192.168.137.62
k8s-master-03     Ubuntu 20.04    192.168.137.63
k8s-slb-01        Ubuntu 20.04    192.168.137.64
k8s-slb-02        Ubuntu 20.04    192.168.137.65
harbor            Ubuntu 20.04    192.168.137.66
k8s-node-01       Ubuntu 20.04    192.168.137.67
k8s-node-02       Ubuntu 20.04    192.168.137.68
k8s-node-03       Ubuntu 20.04    192.168.137.69

1. Pre-installation preparation

1.1 Disable the swap partition

# To disable swap on Ubuntu: in /etc/fstab, add the noauto option to the swap entry (so "/dev/disk/by-uuid/406825f5-6869-44eb-8446-34e069238059 none swap sw 0 0" becomes "... sw,noauto 0 0"), comment out the trailing /swap.img line, save, and reboot
root@web-es-01:~# vim /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/disk/by-uuid/406825f5-6869-44eb-8446-34e069238059 none swap sw,noauto 0 0
# / was on /dev/sda4 during curtin installation
/dev/disk/by-uuid/33c36444-4b88-4757-af6f-185b1464112b / ext4 defaults 0 0
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/e363ba39-1b8d-45d3-9196-3be2d9f97722 /boot ext4 defaults 0 0
#/swap.img none swap sw 0 0
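
To apply the change without waiting for a reboot, you can also turn swap off on the running system and confirm it is gone (standard util-linux/procps commands, shown as a quick sketch):

root@web-es-01:~# swapoff -a        # disable all swap devices immediately
root@web-es-01:~# free -m | grep -i swap
Swap:              0           0           0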

1.2 Kernel parameter tuning

root@web-node-01:~# cat /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.tcp_tw_reuse = 0
net.core.somaxconn = 32768
net.netfilter.nf_conntrack_max = 1000000
vm.swappiness = 0
vm.max_map_count = 655360
fs.file-max = 6553600
# Running sysctl -p at this point fails on a few entries, because the required kernel modules are not loaded yet
root@k8s-master-61:~# sysctl -p
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-arptables: No such file or directory
net.ipv4.tcp_tw_reuse = 0
net.core.somaxconn = 32768
sysctl: cannot stat /proc/sys/net/netfilter/nf_conntrack_max: No such file or directory
vm.swappiness = 0
vm.max_map_count = 655360
fs.file-max = 6553600
root@k8s-master-61:~# modprobe br_netfilter
root@k8s-master-61:~# lsmod |grep conntrack
root@k8s-master-61:~# modprobe ip_conntrack
root@k8s-master-61:~# lsmod |grep conntrack
nf_conntrack 139264 0
nf_defrag_ipv6 24576 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,btrfs,raid456
root@k8s-master-61:~# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.tcp_tw_reuse = 0
net.core.somaxconn = 32768
net.netfilter.nf_conntrack_max = 1000000
vm.swappiness = 0
vm.max_map_count = 655360
fs.file-max = 6553600
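
Note that modules loaded with modprobe do not survive a reboot. A minimal way to load them at boot, using systemd's standard modules-load.d mechanism (the file name k8s.conf is an arbitrary choice), is:

root@k8s-master-61:~# cat <<EOF >/etc/modules-load.d/k8s.conf
br_netfilter
nf_conntrack
EOF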

1.3 Resource limit tuning

# The settings are identical on every machine; edit this one copy and push it to the others with scp
root@web-es-01:~# cat /etc/security/limits.conf
# End of file
#
#
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000

root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
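
After pushing the file out and re-logging in, you can spot-check that the limits took effect; ulimit -n reports the nofile limit of the current shell (shown as a sketch):

root@web-es-01:~# scp /etc/security/limits.conf 192.168.137.62:/etc/security/limits.conf
root@web-es-01:~# ulimit -n
1000000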

2. Installing the k8s cluster

2.1 Single-master installation

2.1.0 Installing Docker
# 1. Update the apt package index
root@web-es-01:~# apt update
# 2. Install the required dependencies
root@web-es-01:~# apt -y install apt-transport-https ca-certificates curl software-properties-common
# 3. Install the GPG key
root@web-es-01:~# curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# 4. Add the repository
root@web-es-01:~# add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# 5. Update apt again and install the pinned Docker version
root@web-es-01:~# apt update && apt -y install docker-ce=5:19.03.15~3-0~ubuntu-focal
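Before moving on, it is worth confirming Docker is running and enabled at boot (standard systemctl and docker commands, shown as a sketch):

root@web-es-01:~# systemctl enable --now docker
root@web-es-01:~# docker info | grep 'Server Version'
 Server Version: 19.03.15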
2.1.1 Installing the k8s packages
# 1. Update apt and install the HTTPS transport
root@web-es-01:~# apt update && apt -y install apt-transport-https
# 2. Install the GPG key
root@web-es-01:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# 3. Add the k8s repository
root@web-es-01:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# 4. Update apt again and install the pinned k8s version (1.20.15, matching the kubeadm version shown below)
root@web-es-01:~# apt update && apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
# Add resolution records to /etc/hosts on every machine; the other machines can simply copy this file
root@web-es-01:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 web-nginx

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.137.61 web-es-01
192.168.137.62 web-es-02
192.168.137.63 web-es-03
192.168.137.64 web-es-04
192.168.137.65 web-es-05
192.168.137.66 web-es-06
192.168.137.67 web-es-07
192.168.137.68 web-es-08
192.168.137.188 web-node-04

root@web-es-01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:26:37Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
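To keep routine apt upgrades from silently replacing these pinned packages, you can put them on hold (a standard apt-mark feature; run apt-mark unhold before a deliberate upgrade such as the one in section 4):

root@web-es-01:~# apt-mark hold kubeadm kubelet kubectl
kubeadm set on hold.
kubelet set on hold.
kubectl set on hold.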
2.1.2 Listing the required images
root@web-es-01:~# kubeadm config images list --kubernetes-version v1.20.15
k8s.gcr.io/kube-apiserver:v1.20.15
k8s.gcr.io/kube-controller-manager:v1.20.15
k8s.gcr.io/kube-scheduler:v1.20.15
k8s.gcr.io/kube-proxy:v1.20.15
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
2.1.3 Downloading the images

It is best to download the required images on the master node ahead of time. By default they come from Google's registry (k8s.gcr.io), which cannot be reached directly from mainland China; fortunately, mirrors such as Aliyun's or Tsinghua University's have already synced these images.

root@web-es-01:~# cat images-download.sh
#!/bin/bash
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.20.15
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.15
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.15
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.15
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.aliyuncs.com/google_containers/coredns:1.7.0
docker pull registry.aliyuncs.com/google_containers/pause:3.2
root@web-es-01:~# bash images-download.sh
2.1.4 Initializing a single master node
root@web-es-01:~# kubeadm init \
--apiserver-advertise-address=192.168.137.61 \
--apiserver-bind-port=6443 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.15 \
--service-cidr=10.100.0.0/16 \
--service-dns-domain=dklwj.local \
--pod-network-cidr=10.200.0.0/16 \
--ignore-preflight-errors=swap
....
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.137.61:6443 --token e7gnqt.cd28ppcup7sh35ee \
--discovery-token-ca-cert-hash sha256:e219bc1c8fd0c09c5c0245509463dff6b3d4aa313511f00d9da9a0ff7061049a
# Create the kubeconfig directory and copy in the admin config
root@web-es-01:~# mkdir /root/.kube -p
root@web-es-01:~# cp -i /etc/kubernetes/admin.conf /root/.kube/config
root@web-es-01:~# kubectl get node
NAME STATUS ROLES AGE VERSION
web-es-01 Ready control-plane,master 108s v1.20.15
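At this point you can also sanity-check that the API server is answering (plain kubectl, output trimmed):

root@web-es-01:~# kubectl cluster-info
Kubernetes control plane is running at https://192.168.137.61:6443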
2.1.5 Installing the network plugin
# Download the flannel manifest from https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml, then edit the pod network address so it matches the one given at init time, --pod-network-cidr=10.200.0.0/16
root@web-es-01:~# vim kube-flannel.yml
  net-conf.json: |
    {
      "Network": "10.200.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
# Apply the flannel manifest
root@web-es-01:~# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
# Check the status
root@web-es-01:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-246dc 1/1 Running 0 12m
kube-system coredns-7f89b7bc75-bn6xj 1/1 Running 0 12m
kube-system etcd-web-es-01 1/1 Running 0 12m
kube-system kube-apiserver-web-es-01 1/1 Running 0 12m
kube-system kube-controller-manager-web-es-01 1/1 Running 0 12m
kube-system kube-flannel-ds-tz82b 1/1 Running 0 64s
kube-system kube-proxy-rs2wd 1/1 Running 0 12m
kube-system kube-scheduler-web-es-01 1/1 Running 0 12m
2.1.6 Allowing pods on the master node

By default, running pods on the master is prohibited, with some special scenarios excepted, for example a cluster with only a single node. Even though the master's status is Ready, no pods get scheduled onto it; this is because a taint is added to it by default.

# Syntax for setting a taint on a node (effect is usually NoSchedule)
kubectl taint node <node-name> <key>=<value>:<effect>

# To set it on all nodes:
kubectl taint nodes --all <key>=<value>:NoSchedule

# Note: the value may be omitted, in which case the syntax becomes:
kubectl taint nodes <node-name> <key>=:NoSchedule

# To delete a taint:
kubectl taint nodes <node-name> <key>-
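
For example, to let a lone master actually schedule pods, remove the default master taint (in the v1.20 era the taint key is node-role.kubernetes.io/master):

kubectl taint nodes --all node-role.kubernetes.io/master-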

2.2 Deploying worker nodes

2.2.1 Installing Docker
root@web-node-06:~# apt update
root@web-node-06:~# apt -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
root@web-node-06:~# curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
OK
root@web-node-06:~# add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
root@web-node-06:~# apt update
root@web-node-06:~# apt -y install docker-ce=5:19.03.15~3-0~ubuntu-focal
2.2.2 Installing the k8s packages
root@web-node-06:~# apt-get update && apt-get install -y apt-transport-https
root@web-node-06:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
root@web-node-06:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
root@web-node-06:~# apt update
# List all available versions
root@web-node-06:~# apt-cache madison kubelet
root@web-node-06:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
2.2.3 Joining the cluster
root@web-node-06:~# kubeadm join 192.168.137.61:6443 --token e7gnqt.cd28ppcup7sh35ee \
--discovery-token-ca-cert-hash sha256:e219bc1c8fd0c09c5c0245509463dff6b3d4aa313511f00d9da9a0ff7061049a
# Images required on a worker node
root@web-node-06:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
rancher/mirrored-flannelcni-flannel v0.17.0 9247abf08677 3 months ago 59.8MB
rancher/mirrored-flannelcni-flannel-cni-plugin v1.0.1 ac40ce625740 4 months ago 8.1MB
registry.aliyuncs.com/google_containers/kube-proxy v1.20.15 46e2cd1b2594 4 months ago 99.7MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 2 years ago 683kB
2.2.4 Viewing all nodes
root@web-es-01:~# kubectl get node
NAME STATUS ROLES AGE VERSION
web-es-01 Ready control-plane,master 6h13m v1.20.15
web-node-06 Ready <none> 29m v1.20.15
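The ROLES column shows <none> for worker nodes. This is purely cosmetic, but if you would like it to read "worker" you can add the conventional role label yourself (a standard kubectl label; the value may be left empty):

root@web-es-01:~# kubectl label node web-node-06 node-role.kubernetes.io/worker=
node/web-node-06 labeled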

2.3 High-availability mode

2.3.1 Installing keepalived and haproxy

2.3.1.1 Installing the packages

root@web-node-04:~# apt -y install keepalived haproxy
root@web-node-05:~# apt -y install keepalived haproxy

2.3.1.2 Configuring keepalived

# Locate the sample config file
root@web-node-04:/etc/keepalived# find / -name keepalived.conf*
root@web-node-04:/etc/keepalived# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
root@web-node-04:/etc/keepalived# vim /etc/keepalived/keepalived.conf
root@web-node-04:/etc/keepalived# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 60
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.188 dev eth0 label eth0:0
        192.168.137.189 dev eth0 label eth0:1
        192.168.137.190 dev eth0 label eth0:2
        192.168.137.191 dev eth0 label eth0:3
    }
}
# Start the keepalived service
root@web-node-04:/etc/keepalived# systemctl restart keepalived
# Check that the VIPs are up
root@web-node-04:/etc/keepalived# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:44:dd:8b brd ff:ff:ff:ff:ff:ff
inet 192.168.137.64/24 brd 192.168.137.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.137.188/32 scope global eth0:0
valid_lft forever preferred_lft forever
inet 192.168.137.189/32 scope global eth0:1
valid_lft forever preferred_lft forever
inet 192.168.137.190/32 scope global eth0:2
valid_lft forever preferred_lft forever
inet 192.168.137.191/32 scope global eth0:3
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe44:dd8b/64 scope link
valid_lft forever preferred_lft forever
# The first node starts cleanly; copy the config file to the second node
root@web-node-04:/etc/keepalived# scp /etc/keepalived/keepalived.conf 192.168.137.65:/etc/keepalived/keepalived.conf
# Edit the config on the second node (BACKUP state, lower priority)
root@web-node-05:~# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 60
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.188 dev eth0 label eth0:0
        192.168.137.189 dev eth0 label eth0:1
        192.168.137.190 dev eth0 label eth0:2
        192.168.137.191 dev eth0 label eth0:3
    }
}
# Start the keepalived service
root@web-node-05:~# systemctl restart keepalived

2.3.1.3 Testing keepalived failover

# From another machine, ping the VIP continuously
root@web-es-03:~# ping 192.168.137.188
PING 192.168.137.188 (192.168.137.188) 56(84) bytes of data.
64 bytes from 192.168.137.188: icmp_seq=1 ttl=64 time=0.463 ms
64 bytes from 192.168.137.188: icmp_seq=2 ttl=64 time=0.245 ms
64 bytes from 192.168.137.188: icmp_seq=3 ttl=64 time=0.295 ms
64 bytes from 192.168.137.188: icmp_seq=4 ttl=64 time=0.267 ms
....
# Stop the keepalived service on node 04
root@web-node-04:/etc/keepalived# systemctl stop keepalived
# If the ping never drops, keepalived is correctly maintaining high availability
# Check node 05; it should now hold the VIPs
root@web-node-05:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:6b:c5:11 brd ff:ff:ff:ff:ff:ff
inet 192.168.137.65/24 brd 192.168.137.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.137.188/32 scope global eth0:0
valid_lft forever preferred_lft forever
inet 192.168.137.189/32 scope global eth0:1
valid_lft forever preferred_lft forever
inet 192.168.137.190/32 scope global eth0:2
valid_lft forever preferred_lft forever
inet 192.168.137.191/32 scope global eth0:3
valid_lft forever preferred_lft forever
....
2.3.2 Configuring haproxy
# The package is already installed; only the config file needs to be set up
root@web-node-04:~# vim /etc/haproxy/haproxy.cfg
listen k8s-api
    bind 192.168.137.188:6443
    mode tcp
    balance roundrobin
    server 192.168.137.61 192.168.137.61:6443 check inter 2s fall 3 rise 5
    server 192.168.137.62 192.168.137.62:6443 check inter 2s fall 3 rise 5
    server 192.168.137.63 192.168.137.63:6443 check inter 2s fall 3 rise 5
# Copy the config file over to node 05
root@web-node-04:~# scp /etc/haproxy/haproxy.cfg 192.168.137.65:/etc/haproxy/haproxy.cfg
root@192.168.137.65's password:
haproxy.cfg 100% 1600 2.7MB/s 00:00
# Check the config on node 05
root@web-node-05:~# cat /etc/haproxy/haproxy.cfg
......
listen k8s-api
    bind 192.168.137.188:6443
    mode tcp
    balance roundrobin
    server 192.168.137.61 192.168.137.61:6443 check inter 2s fall 3 rise 5
    server 192.168.137.62 192.168.137.62:6443 check inter 2s fall 3 rise 5
    server 192.168.137.63 192.168.137.63:6443 check inter 2s fall 3 rise 5
# Do not rush to start haproxy; it will not start yet. The kernel parameter net.ipv4.ip_nonlocal_bind must be set to 1, otherwise haproxy cannot bind to a VIP the node does not currently hold
root@web-node-05:~# sysctl -a | grep net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 0
root@web-node-05:~# cat /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.tcp_tw_reuse = 0
net.core.somaxconn = 32768
net.netfilter.nf_conntrack_max = 1000000
vm.swappiness = 0
vm.max_map_count = 655360
fs.file-max = 6553600
net.ipv4.ip_nonlocal_bind = 1
# Apply the settings
root@web-node-05:~# sysctl -p
# Start the haproxy service
root@web-node-05:~# systemctl restart haproxy
# Check the listening sockets; the VIP is now listening on 6443
root@web-node-05:~# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:*
LISTEN 0 490 192.168.137.188:6443 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
2.3.3 Configuring highly available masters

2.3.3.1 Initializing the first master

# Initialize the first master through the VIP
root@web-es-01:~# kubeadm init \
--apiserver-advertise-address=192.168.137.61 \
--control-plane-endpoint=192.168.137.188 \
--apiserver-bind-port=6443 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.15 \
--service-cidr=10.200.0.0/16 \
--service-dns-domain=dklwj.local \
--pod-network-cidr=10.100.0.0/16 \
--ignore-preflight-errors=swap
...
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 192.168.137.188:6443 --token u734bi.xm1559adyu8ku0p0 \
--discovery-token-ca-cert-hash sha256:4327a68caa095b01ce1f2a5f18c5a1da47170e09724c9336e0411f8db5c5f310 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.137.188:6443 --token u734bi.xm1559adyu8ku0p0 \
--discovery-token-ca-cert-hash sha256:4327a68caa095b01ce1f2a5f18c5a1da47170e09724c9336e0411f8db5c5f310
# Create the kubeconfig directory and copy in the admin config
root@web-es-01:~# mkdir -p .kube
root@web-es-01:~# cp -i /etc/kubernetes/admin.conf .kube/config
# Check the node. It reports Ready here, but it is not truly ready: no network plugin is installed yet. Only once the plugin is in place is the Ready status real
root@web-es-01:~# kubectl get node
NAME STATUS ROLES AGE VERSION
web-es-01 Ready control-plane,master 5m35s v1.20.15
root@web-es-01:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-fj5zw 0/1 ContainerCreating 0 7m32s
kube-system coredns-7f89b7bc75-w9zk9 0/1 ContainerCreating 0 7m32s
kube-system etcd-web-es-01 1/1 Running 0 7m51s
kube-system kube-apiserver-web-es-01 1/1 Running 0 7m51s
kube-system kube-controller-manager-web-es-01 1/1 Running 0 7m51s
kube-system kube-proxy-4hh6l 1/1 Running 0 7m32s
kube-system kube-scheduler-web-es-01 1/1 Running 0 7m51s
# Download the flannel manifest from https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml, then edit the pod network address so it matches the one given at init time, --pod-network-cidr=10.100.0.0/16
root@web-es-01:~# vim kube-flannel.yml
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
# Install the flannel network plugin
root@web-es-01:~# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
root@web-es-01:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-fj5zw 1/1 Running 0 10m
kube-system coredns-7f89b7bc75-w9zk9 1/1 Running 0 10m
kube-system etcd-web-es-01 1/1 Running 0 10m
kube-system kube-apiserver-web-es-01 1/1 Running 0 10m
kube-system kube-controller-manager-web-es-01 1/1 Running 0 10m
kube-system kube-flannel-ds-pth8d 1/1 Running 0 42s
kube-system kube-proxy-4hh6l 1/1 Running 0 10m
kube-system kube-scheduler-web-es-01 1/1 Running 0 10m

2.3.3.2 Joining the other two masters

# Before joining, generate a certificate key on the first master with: kubeadm init phase upload-certs --upload-certs
root@web-es-01:~# kubeadm init phase upload-certs --upload-certs
I0612 18:34:34.001592 79845 version.go:254] remote version is much newer: v1.24.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
b924a5671585aa7c49c99077e5ffd56ca64441507d1bf24662e2bff982ebc86d
# Join master 02 to the HA control plane
root@web-es-02:~# kubeadm join 192.168.137.188:6443 --token u734bi.xm1559adyu8ku0p0 \
--discovery-token-ca-cert-hash sha256:4327a68caa095b01ce1f2a5f18c5a1da47170e09724c9336e0411f8db5c5f310 \
--control-plane --certificate-key b924a5671585aa7c49c99077e5ffd56ca64441507d1bf24662e2bff982ebc86d
......
To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
# Join master 03 to the HA control plane
root@web-es-03:~# kubeadm join 192.168.137.188:6443 --token u734bi.xm1559adyu8ku0p0 \
--discovery-token-ca-cert-hash sha256:4327a68caa095b01ce1f2a5f18c5a1da47170e09724c9336e0411f8db5c5f310 \
--control-plane --certificate-key b924a5671585aa7c49c99077e5ffd56ca64441507d1bf24662e2bff982ebc86d

# View the control-plane nodes from master 01
root@web-es-01:~# kubectl get node
NAME STATUS ROLES AGE VERSION
web-es-01 Ready control-plane,master 101m v1.20.15
web-es-02 Ready control-plane,master 84m v1.20.15
web-es-03 Ready control-plane,master 2m27s v1.20.15
2.3.4 Joining worker nodes
root@web-node-06:~# kubeadm join 192.168.137.188:6443 --token u734bi.xm1559adyu8ku0p0 \
--discovery-token-ca-cert-hash sha256:4327a68caa095b01ce1f2a5f18c5a1da47170e09724c9336e0411f8db5c5f310
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# On a master node, check the status of all nodes
root@web-es-01:~# kubectl get node
NAME STATUS ROLES AGE VERSION
web-es-01 Ready control-plane,master 106m v1.20.15
web-es-02 Ready control-plane,master 89m v1.20.15
web-es-03 Ready control-plane,master 7m8s v1.20.15
web-node-06 Ready <none> 27s v1.20.15

2.4 Installing the dashboard

GitHub: https://github.com/kubernetes/dashboard

# Download the manifest locally
root@web-es-01:~/dashboard# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
root@web-es-01:~/dashboard# mv recommended.yaml dashboard-2.4.0.yaml
# The manifest references two images; pull them in advance
root@web-es-01:~/dashboard# docker pull kubernetesui/dashboard:v2.4.0
root@web-es-01:~/dashboard# docker pull kubernetesui/metrics-scraper:v1.0.7
# One section needs editing to expose the service port manually
root@web-es-01:~/dashboard# vim dashboard-2.4.0.yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32002   # and add this line
  selector:
    k8s-app: kubernetes-dashboard
# Create the kubernetes-dashboard resources
root@web-es-01:~/dashboard# kubectl apply -f dashboard-2.4.0.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged
# View the pods
root@web-es-01:~/dashboard# kubectl get pod -A -o wide
.....
kube-system kube-scheduler-web-es-03 1/1 Running 2 25h 192.168.137.63 web-es-03 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-5b8896d7fc-2nnsj 1/1 Running 0 113s 10.100.4.3 web-node-07 <none> <none>
kubernetes-dashboard kubernetes-dashboard-897c7599f-87ph4 1/1 Running 0 113s 10.100.4.2 web-node-07 <none> <none>

Find the machine the dashboard pod landed on (web-node-07 in the output above) and open https://<node-ip>:32002 in a browser.

Creating a token

# Create the auth manifest
root@web-es-01:~/dashboard# vim admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# Create the admin user
root@web-es-01:~/dashboard# kubectl apply -f admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
root@web-es-01:~/dashboard# kubectl get secret -A | grep admin
kubernetes-dashboard admin-user-token-ztmmn kubernetes.io/service-account-token 3 41s
# Copy out the token
root@web-es-01:~/dashboard# kubectl describe secret admin-user-token-ztmmn -n kubernetes-dashboard
Name: admin-user-token-ztmmn
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 5e628b1a-b2fd-431f-9115-bceb385a6768

Type: kubernetes.io/service-account-token

Data
====
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IllyVlhKTkRGVWxFd3gwbUl1eWlNMzI4eUExQjZMbnIxQzRScEZabndzRVkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXp0bW1uIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1ZTYyOGIxYS1iMmZkLTQzMWYtOTExNS1iY2ViMzg1YTY3NjgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.G4WaOeMdmOv73xrFvbuKpbG0sNxYY2jspvzTXcUrPAZIi20FNl0d9RsNVo85XUZYVw61lQdM52JrWXbxBkGHAGs1ChpUEyQkVmL8BAbeLbmH1QZkDiUkD_bRndKiRdi1rGwZsJs9nCiToIWz5yrFZucr3jaf-PVNzhDCsVDL84QTJKKF-Pt1nnz1enNVKS6yJgqMEtqnpygaBXDok3usH0eotPNfBSYglXJJvcDb8pRueTjb2iOJTuwx3LM7WihA0L6DVLWaLwYdu_ZTv5tl8f6xfmOGvzoXeF1D-HKwNpeZSrmitM3Vo6BF0fRVF0GkRX7SQa6uKHndlB25Uja3MQ
ca.crt: 1066 bytes

Go back to the login page and paste in the token.

After logging in, the main dashboard page appears.

2.5 Running a first example

2.5.1 Running Nginx
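Note: the manifests below reference a prod namespace that the preceding steps never created; if you are following along verbatim, create it first:

root@k8s-master01:~/pod# kubectl create namespace prod
namespace/prod created
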
# Create the Deployment, Service, and pods from a YAML file
root@k8s-master01:~/pod# vim nginx-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: prod
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-nginx-service-label
  name: test-nginx-service
  namespace: prod
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30004
  selector:
    app: nginx
# Create the resources
root@k8s-master01:~/pod# kubectl apply -f nginx-pod.yaml
deployment.apps/nginx-deployment created
service/test-nginx-service created
# View the pods
root@k8s-master01:~/pod# kubectl get pod -n prod
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 3 5d21h
nginx-deployment-67dfd6c8f9-k726j 1/1 Running 0 5m26s
nginx-deployment-67dfd6c8f9-xl6mx 1/1 Running 0 5m26s
xwc-pod 2/2 Running 6 5d21h
# You can also see which node each pod is running on
root@k8s-master01:~/pod# kubectl get pod -n prod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod 1/1 Running 3 5d21h 172.20.17.18 k8s-node18 <none> <none>
nginx-deployment-67dfd6c8f9-k726j 1/1 Running 0 6m8s 10.100.5.2 k8s-node19 <none> <none>
nginx-deployment-67dfd6c8f9-xl6mx 1/1 Running 0 6m8s 10.100.5.3 k8s-node19 <none> <none>
xwc-pod 2/2 Running 6 5d21h 10.100.3.17 k8s-node17 <none> <none>

Access it through a browser at http://<node-ip>:30004.
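
If no browser is handy, a quick curl against any node works just as well (a sketch; substitute a real node IP):

root@k8s-master01:~/pod# curl -sI http://<node-ip>:30004 | head -n 1
HTTP/1.1 200 OK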

2.5.2 Running Tomcat
# Create the Tomcat YAML from a copy of the Nginx one
root@k8s-master01:~/pod# cp nginx-pod.yaml tomcat.yaml
root@k8s-master01:~/pod# vim tomcat.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: prod
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: tomcat-nginx-service-label
  name: tomcat-nginx-service
  namespace: prod
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30005
  selector:
    app: tomcat
# Create the resources
root@k8s-master01:~/pod# kubectl apply -f tomcat.yaml
deployment.apps/tomcat-deployment created
service/tomcat-nginx-service created
root@k8s-master01:~/pod# kubectl get pod -n prod
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 3 5d22h
nginx-deployment-67dfd6c8f9-k726j 1/1 Running 0 54m
nginx-deployment-67dfd6c8f9-xl6mx 1/1 Running 0 54m
tomcat-deployment-6c44f58b47-8s2c5 0/1 ContainerCreating 0 18s
tomcat-deployment-6c44f58b47-k5lr2 0/1 ContainerCreating 0 18s
xwc-pod 2/2 Running 6 5d22h
root@k8s-master01:~/pod# kubectl get pod -n prod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod 1/1 Running 3 5d22h 172.20.17.18 k8s-node18 <none> <none>
nginx-deployment-67dfd6c8f9-k726j 1/1 Running 0 54m 10.100.5.2 k8s-node19 <none> <none>
nginx-deployment-67dfd6c8f9-xl6mx 1/1 Running 0 54m 10.100.5.3 k8s-node19 <none> <none>
tomcat-deployment-6c44f58b47-8s2c5 0/1 ContainerCreating 0 23s <none> k8s-node18 <none> <none>
tomcat-deployment-6c44f58b47-k5lr2 0/1 ContainerCreating 0 23s <none> k8s-node19 <none> <none>
xwc-pod 2/2 Running 6 5d22h 10.100.3.17 k8s-node17 <none> <none>
root@k8s-master01:~/pod# kubectl get pod -n prod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod 1/1 Running 3 5d22h 172.20.17.18 k8s-node18 <none> <none>
nginx-deployment-67dfd6c8f9-k726j 1/1 Running 0 55m 10.100.5.2 k8s-node19 <none> <none>
nginx-deployment-67dfd6c8f9-xl6mx 1/1 Running 0 55m 10.100.5.3 k8s-node19 <none> <none>
tomcat-deployment-6c44f58b47-8s2c5 1/1 Running 0 98s 10.100.4.14 k8s-node18 <none> <none>
tomcat-deployment-6c44f58b47-k5lr2 1/1 Running 0 98s 10.100.5.4 k8s-node19 <none> <none>
xwc-pod 2/2 Running 6 5d22h 10.100.3.17 k8s-node17 <none> <none>

3. Managing the k8s cluster

3.1 Token management

# kubeadm token --help
create   # create a token (valid for 24 hours by default)
delete   # delete a token
generate # generate and print a token without creating it on the server, for use in other operations
list     # list all tokens on the server
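
A handy shortcut built from these (a standard kubeadm option) mints a fresh token and prints the complete worker join command in one go:

root@k8s-master01:~# kubeadm token create --print-join-command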

3.2 Checking certificate expiry

# Check the expiry of the certificates in the current cluster
root@k8s-master01:~/pod# kubeadm alpha certs check-expiration
Command "check-expiration" is deprecated, please use the same command under "kubeadm certs"
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Jun 15, 2023 02:53 UTC 358d no
apiserver Jun 15, 2023 02:53 UTC 358d ca no
apiserver-etcd-client Jun 15, 2023 02:53 UTC 358d etcd-ca no
apiserver-kubelet-client Jun 15, 2023 02:53 UTC 358d ca no
controller-manager.conf Jun 15, 2023 02:53 UTC 358d no
etcd-healthcheck-client Jun 15, 2023 02:53 UTC 358d etcd-ca no
etcd-peer Jun 15, 2023 02:53 UTC 358d etcd-ca no
etcd-server Jun 15, 2023 02:53 UTC 358d etcd-ca no
front-proxy-client Jun 15, 2023 02:53 UTC 358d front-proxy-ca no
scheduler.conf Jun 15, 2023 02:53 UTC 358d no

CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Jun 12, 2032 02:53 UTC 9y no
etcd-ca Jun 12, 2032 02:53 UTC 9y no
front-proxy-ca Jun 12, 2032 02:53 UTC 9y no

3.3 Renewing certificates

# Certificates are valid for one year by default and must be renewed after a year
root@k8s-master01:~/pod# kubeadm alpha certs renew all
Command "all" is deprecated, please use the same command under "kubeadm certs"
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
# After renewing, the affected services must be restarted; before restarting a node, it is best to take it out of the load balancer first
# Once the renewal is done, check the remaining validity again
root@k8s-master01:~/pod# kubeadm alpha certs check-expiration
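
Since these control-plane components run as static pods, one common way to restart them (a sketch, not the only approach; the kubelet stops a static pod when its manifest disappears and recreates it when the manifest returns) is:

root@k8s-master01:~# mkdir /tmp/manifests-bak
root@k8s-master01:~# mv /etc/kubernetes/manifests/*.yaml /tmp/manifests-bak/
# wait for the control-plane containers to stop, then restore the manifests
root@k8s-master01:~# sleep 20 && mv /tmp/manifests-bak/*.yaml /etc/kubernetes/manifests/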

4. Upgrading k8s

4.1 Preparing for the upgrade

To upgrade a k8s cluster you must first upgrade kubeadm to the target k8s version; in other words, kubeadm is the gatekeeper for a k8s upgrade. On every k8s master node, upgrade the control-plane components: kube-controller-manager, kube-apiserver, kube-scheduler, and kube-proxy.

4.1.1 Verifying the current master version
root@k8s-master01:~/pod# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.6", GitCommit:"8a62859e515889f07e3e3be6a1080413f17cf2c3", GitTreeState:"clean", BuildDate:"2021-04-15T03:26:21Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
4.1.2 Verifying the current node versions
root@k8s-master01:~/pod# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane,master 7d23h v1.20.6
k8s-master02 Ready control-plane,master 7d22h v1.20.6
k8s-master03 Ready control-plane,master 7d22h v1.20.6
k8s-node17 Ready <none> 7d22h v1.20.6
k8s-node18 Ready <none> 7d22h v1.20.6
k8s-node19 Ready <none> 7d22h v1.20.6

4.2 Upgrading the master nodes

4.2.1 Installing the new kubeadm on each master

Upgrade the version on each k8s master node:

# First list all available kubeadm versions
root@k8s-master01:~# apt-cache madison kubeadm
# Install the new kubeadm; all three master nodes need it
root@k8s-master01:~# apt -y install kubeadm=1.20.15-00
root@k8s-master02:~# apt -y install kubeadm=1.20.15-00
root@k8s-master03:~# apt -y install kubeadm=1.20.15-00
# After installing, verify the version number on all three nodes
root@k8s-master01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:26:37Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master02:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:26:37Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master03:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:26:37Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
4.2.2 The kubeadm upgrade command
root@k8s-master01:~# kubeadm upgrade --help
Upgrade your cluster smoothly to a newer version with this command

Usage:
kubeadm upgrade [flags]
kubeadm upgrade [command]

Available Commands:
apply Upgrade your Kubernetes cluster to the specified version
diff Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
node Upgrade commands for a node in the cluster
plan Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter
4.2.3 Viewing the upgrade plan
root@k8s-master01:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.6
[upgrade/versions] kubeadm version: v1.20.15
I0623 10:18:24.386912 1144148 version.go:254] remote version is much newer: v1.24.2; falling back to: stable-1.20
[upgrade/versions] Latest stable version: v1.20.15
[upgrade/versions] Latest stable version: v1.20.15
[upgrade/versions] Latest version in the v1.20 series: v1.20.15
[upgrade/versions] Latest version in the v1.20 series: v1.20.15

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
kubelet 6 x v1.20.6 v1.20.15

Upgrade to the latest version in the v1.20 series:

COMPONENT CURRENT AVAILABLE
kube-apiserver v1.20.6 v1.20.15
kube-controller-manager v1.20.6 v1.20.15
kube-scheduler v1.20.6 v1.20.15
kube-proxy v1.20.6 v1.20.15
CoreDNS 1.7.0 1.7.0
etcd 3.4.13-0 3.4.13-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.20.15

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
# Pull the target-version images in advance so the upgrade runs quickly instead of fetching images from the internet mid-upgrade
root@k8s-master01:~# cat images-download.sh
#!/bin/bash
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.20.15
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.15
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.15
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.15
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.aliyuncs.com/google_containers/coredns:1.7.0
docker pull registry.aliyuncs.com/google_containers/pause:3.2
root@k8s-master01:~# bash images-download.sh
root@k8s-master02:~# bash images-download.sh
root@k8s-master03:~# bash images-download.sh
4.2.4 Performing the upgrade
# Run the kubeadm upgrade on each master first
root@k8s-master01:~# kubeadm upgrade apply v1.20.15
root@k8s-master02:~# kubeadm upgrade apply v1.20.15
root@k8s-master03:~# kubeadm upgrade apply v1.20.15
# The upgrade only truly succeeded if SUCCESS appears at the end
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.15". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
# kubectl get node still reports v1.20.6 because this column shows the kubelet version; next, upgrade kubectl, kubelet, and kubeadm on every node (workers included)
root@k8s-master01:~/k8s# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane,master 8d v1.20.6
k8s-master02 Ready control-plane,master 8d v1.20.6
k8s-master03 Ready control-plane,master 8d v1.20.6
k8s-node17 Ready <none> 8d v1.20.6
k8s-node18 Ready <none> 8d v1.20.6
k8s-node19 Ready <none> 8d v1.20.6
# Upgrade the components on all remaining nodes
root@k8s-master01:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
root@k8s-master02:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
root@k8s-master03:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
root@k8s-node17:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
root@k8s-node18:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
root@k8s-node19:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
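# Per the standard kubeadm upgrade flow, restart the kubelet on every node after the package
# upgrade so the new kubelet version actually takes effect (shown for one node as a sketch)
root@k8s-master01:~# systemctl daemon-reload && systemctl restart kubelet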
# After upgrading, check the versions of all nodes again
root@k8s-master01:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane,master 8d v1.20.15
k8s-master02 Ready control-plane,master 8d v1.20.15
k8s-master03 Ready control-plane,master 8d v1.20.15
k8s-node17 Ready <none> 8d v1.20.15
k8s-node18 Ready <none> 8d v1.20.15
k8s-node19 Ready <none> 8d v1.20.15
