Deploying a Kubernetes Cluster with kubeasz

1. Server Overview

| Type | Server IP | Notes |
|------|-----------|-------|
| ansible (2) | 172.20.17.23 | Runs ansible to deploy k8s; can share hardware with other roles |
| K8S Master (3) | 172.20.17.11/12/13 | k8s control plane; active/standby HA behind a single VIP |
| Harbor (1) | 172.20.17.24 | Harbor image registry |
| etcd (3 minimum) | 172.20.17.17/18/19 | Stores the k8s cluster state |
| haproxy VIP (2) | 172.20.17.21/22 | High-availability load balancers |
| Node (2-N) | 172.20.17.14/15/16 | Workers that actually run containers; at least two for HA |

2. Server Inventory

| Type | Server IP | Hostname | VIP |
|------|-----------|----------|-----|
| K8S-Master-01 | 172.20.17.11 | k8s-master-11 | 172.20.17.188 |
| K8S-Master-02 | 172.20.17.12 | k8s-master-12 | 172.20.17.188 |
| K8S-Master-03 | 172.20.17.13 | k8s-master-13 | 172.20.17.188 |
| Harbor1 | 172.20.17.24 | harbor-24 | |
| etcd-1 | 172.20.17.17 | k8s-etcd-17 | |
| etcd-2 | 172.20.17.18 | k8s-etcd-18 | |
| etcd-3 | 172.20.17.19 | k8s-etcd-19 | |
| HA-1 | 172.20.17.21 | k8s-vip-21 | |
| HA-2 | 172.20.17.22 | k8s-vip-22 | |
| Node 1 | 172.20.17.14 | k8s-node-14 | |
| Node 2 | 172.20.17.15 | k8s-node-15 | |
| Node 3 | 172.20.17.16 | k8s-node-16 | |

3. Base Environment Preparation

3.1 Basic Configuration

3.1.1 Disable swap
# Disable swap on Ubuntu: in /etc/fstab, add the noauto option after "sw" on the swap partition line, comment out the /swap.img line at the bottom, then save and reboot
root@k8s-vip-21:~# vim /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/disk/by-uuid/406825f5-6869-44eb-8446-34e069238059 none swap sw,noauto 0 0
# / was on /dev/sda4 during curtin installation
/dev/disk/by-uuid/33c36444-4b88-4757-af6f-185b1464112b / ext4 defaults 0 0
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/e363ba39-1b8d-45d3-9196-3be2d9f97722 /boot ext4 defaults 0 0
#/swap.img none swap sw 0 0
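The reboot can also be avoided by turning swap off at runtime in addition to editing /etc/fstab. A minimal sketch (the sed pattern assumes the default Ubuntu /swap.img entry shown above):

```shell
# Turn off all active swap immediately, without a reboot
swapoff -a
# Comment out the /swap.img entry so it also stays off across reboots
sed -i '/\/swap.img/ s/^/#/' /etc/fstab
# Verify: the Swap row of free should read 0
free -m
```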
3.1.2 Kernel Parameter Tuning
root@k8s-vip-21:~# cat /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.tcp_tw_reuse = 0
net.core.somaxconn = 32768
net.netfilter.nf_conntrack_max = 1000000
vm.swappiness = 0
vm.max_map_count = 655360
fs.file-max = 6553600
# Running sysctl -p, a few entries fail because the required kernel modules are not loaded yet
root@k8s-vip-21:~# sysctl -p
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-arptables: No such file or directory
net.ipv4.tcp_tw_reuse = 0
net.core.somaxconn = 32768
sysctl: cannot stat /proc/sys/net/netfilter/nf_conntrack_max: No such file or directory
vm.swappiness = 0
vm.max_map_count = 655360
fs.file-max = 6553600
root@k8s-vip-21:~# modprobe br_netfilter
root@k8s-vip-21:~# lsmod |grep conntrack
root@k8s-vip-21:~# modprobe ip_conntrack
root@k8s-vip-21:~# lsmod |grep conntrack
nf_conntrack 139264 0
nf_defrag_ipv6 24576 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,btrfs,raid456
root@k8s-vip-21:~# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.tcp_tw_reuse = 0
net.core.somaxconn = 32768
net.netfilter.nf_conntrack_max = 1000000
vm.swappiness = 0
vm.max_map_count = 655360
fs.file-max = 6553600
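Note that modprobe only loads the modules for the current boot; without a further step, sysctl -p would fail again after a reboot. One way to make the loading persistent (a sketch; the file name k8s.conf is arbitrary) is systemd's modules-load.d mechanism:

```shell
# List the modules so systemd loads them automatically on every boot
cat > /etc/modules-load.d/k8s.conf <<'EOF'
br_netfilter
nf_conntrack
EOF
# Load them now and re-apply the sysctl settings
systemctl restart systemd-modules-load.service
sysctl -p
```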
3.1.3 Resource Limit Tuning
# The file is identical on every machine; edit it once here, then copy it to the others with scp
root@k8s-vip-21:~# cat /etc/security/limits.conf
# End of file
#
#
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000

root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
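Distributing the file to the rest of the machines can be scripted. A sketch, assuming root SSH access to the other hosts (the host list below is this cluster's; adjust as needed):

```shell
# Push the same limits.conf to every other node in the cluster
for host in 172.20.17.11 172.20.17.12 172.20.17.13 \
            172.20.17.14 172.20.17.15 172.20.17.16 \
            172.20.17.17 172.20.17.18 172.20.17.19 \
            172.20.17.22 172.20.17.24; do
  scp /etc/security/limits.conf root@"$host":/etc/security/limits.conf
done
```

limits.conf is read at login, so the new limits only apply to sessions opened after the copy.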
3.1.4 High Availability (HA)
3.1.4.1 Install keepalived and haproxy
root@k8s-vip-21:~# apt -y install keepalived haproxy
root@k8s-vip-22:~# apt -y install keepalived haproxy
3.1.4.2 Configure keepalived
root@k8s-vip-21:~# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 60
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.20.17.188 dev eth0 label eth0:0
    }
}
# Start the keepalived service
root@k8s-vip-21:~# systemctl restart keepalived
# Verify the VIP is up: the output should now contain "inet 172.20.17.188/32 scope global eth0:0"
root@k8s-vip-21:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:31:2c:de brd ff:ff:ff:ff:ff:ff
inet 172.20.17.85/24 brd 172.20.17.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.20.17.188/32 scope global eth0:0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe31:2cde/64 scope link
valid_lft forever preferred_lft forever
# Copy the config to the other server
root@k8s-vip-21:~# scp /etc/keepalived/keepalived.conf root@172.20.17.22:/etc/keepalived/keepalived.conf

Configure the second VIP server, k8s-vip-22:

root@k8s-vip-22:~# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 60
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.20.17.188 dev eth0 label eth0:0
    }
}
# Start the keepalived service
root@k8s-vip-22:~# systemctl restart keepalived.service
# To test failover, stop keepalived on k8s-vip-21 and check that the VIP floats over to this machine; the test is skipped here and configuration continues below
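The failover test mentioned above can be sketched as follows (the VIP and interface name are this deployment's):

```shell
# On k8s-vip-21: stop keepalived so the MASTER releases the VIP
systemctl stop keepalived
# On k8s-vip-22: within a few advert intervals the VIP should appear here
ip addr show dev eth0 | grep 172.20.17.188
# Restore: restart keepalived on k8s-vip-21; with priority 100 > 80 it
# preempts and takes the VIP back
systemctl start keepalived
```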
3.1.5 Configure haproxy
# Edit the haproxy config: add a listener for the k8s API address and port, plus the haproxy status page
root@k8s-vip-21:~# vim /etc/haproxy/haproxy.cfg
...
listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth admin:xw@123.com

listen k8s-api
    bind 172.20.17.188:6443
    mode tcp
    balance roundrobin
    server 172.20.17.11 172.20.17.11:6443 check inter 2s fall 3 rise 5
    server 172.20.17.12 172.20.17.12:6443 check inter 2s fall 3 rise 5
    server 172.20.17.13 172.20.17.13:6443 check inter 2s fall 3 rise 5
# Start the haproxy service once the config is in place
root@k8s-vip-21:~# systemctl restart haproxy
root@k8s-vip-21:~# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 490 0.0.0.0:9999 0.0.0.0:*
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:*
LISTEN 0 490 172.20.17.188:6443 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*

Open the haproxy status page in a browser to test it.

The page after authenticating and logging in:

With k8s-vip-21 configured, copy the config file over to k8s-vip-22.

# Copy the file to the 22 server
root@k8s-vip-21:~# scp /etc/haproxy/haproxy.cfg root@172.20.17.22:/etc/haproxy/
# Verify the copied config matches
root@k8s-vip-22:~# vim /etc/haproxy/haproxy.cfg
...
listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth admin:xw@123.com

listen k8s-api
    bind 172.20.17.188:6443
    mode tcp
    balance roundrobin
    server 172.20.17.11 172.20.17.11:6443 check inter 2s fall 3 rise 5
    server 172.20.17.12 172.20.17.12:6443 check inter 2s fall 3 rise 5
    server 172.20.17.13 172.20.17.13:6443 check inter 2s fall 3 rise 5
# Do not start haproxy yet: it will fail because the VIP is not currently on this server. Add a kernel parameter so it can bind to a non-local address and listen normally; it is best to add this on both servers
root@k8s-vip-22:~# vim /etc/sysctl.conf
...
net.ipv4.ip_nonlocal_bind = 1
# Apply the change
root@k8s-vip-22:~# sysctl -p
# With the kernel parameter in place, start the service
root@k8s-vip-22:~# systemctl restart haproxy
root@k8s-vip-22:~# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:*
LISTEN 0 490 172.20.17.188:6443 0.0.0.0:*
LISTEN 0 490 0.0.0.0:9999 0.0.0.0:*
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::1]:6010 [::]:*
LISTEN 0 128 [::]:22 [::]:*
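With both load balancers listening, the VIP endpoints can be spot-checked from any other machine. A sketch (the apiservers are not installed yet, so at this stage only TCP connectivity and the status page can be verified):

```shell
# TCP connect to the API port on the VIP (answered by haproxy)
nc -zv 172.20.17.188 6443
# Fetch the status page with the credentials from haproxy.cfg
curl -su admin:xw@123.com http://172.20.17.188:9999/haproxy-status | head
```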
3.1.6 Set Up Harbor
3.1.6.1 Deploy Docker

Deploying Docker from the static binary release has real advantages: on an internal network you do not want every server downloading packages over the wire, which wastes bandwidth. Combining the binary release with automation tools makes both deployment and upgrades fast and painless.

1. Download the binary package

Aliyun mirror (for users in China): https://mirrors.aliyun.com/docker-ce/linux/static/stable/x86_64/

root@harbor-24:~# wget https://mirrors.aliyun.com/docker-ce/linux/static/stable/x86_64/docker-20.10.10.tgz

2. Extract the Docker binary tarball

root@harbor-24:~# tar xf docker-20.10.10.tgz

3. Inspect the extracted directory

root@harbor-24:~# ls
docker docker-20.10.10.tgz snap
root@harbor-24:~# ll docker
total 200820
drwxrwxr-x 2 dklwj dklwj 4096 Oct 25 2021 ./
drwx------ 6 root root 4096 Jul 9 09:54 ../
-rwxr-xr-x 1 dklwj dklwj 33908392 Oct 25 2021 containerd*
-rwxr-xr-x 1 dklwj dklwj 6508544 Oct 25 2021 containerd-shim*
-rwxr-xr-x 1 dklwj dklwj 8609792 Oct 25 2021 containerd-shim-runc-v2*
-rwxr-xr-x 1 dklwj dklwj 21131264 Oct 25 2021 ctr*
-rwxr-xr-x 1 dklwj dklwj 52885064 Oct 25 2021 docker*
-rwxr-xr-x 1 dklwj dklwj 64763136 Oct 25 2021 dockerd*
-rwxr-xr-x 1 dklwj dklwj 708616 Oct 25 2021 docker-init*
-rwxr-xr-x 1 dklwj dklwj 2784353 Oct 25 2021 docker-proxy*
-rwxr-xr-x 1 dklwj dklwj 14308904 Oct 25 2021 runc*

4. Copy the Docker binaries

Where should all these executables go? That depends on how your docker.service file is written. Here we follow the official default and place them under /usr/bin.

root@harbor-24:~# cd docker/
root@harbor-24:~/docker# cp -a ./* /usr/bin/
root@harbor-24:~/docker# docker
docker dockerd docker-init docker-proxy

5. Create the Docker service files

Three unit files are needed: containerd.service, docker.service, and docker.socket. They can also be copied over from another machine that already has Docker installed.
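As a reference, a minimal docker.service for a binary install might look like the sketch below, modeled on the unit file shipped with the official Docker packages; the paths assume the binaries were copied to /usr/bin as above. Treat it as a starting point, not the authoritative unit file.

```ini
# /lib/systemd/system/docker.service -- minimal sketch for a binary install
[Unit]
Description=Docker Application Container Engine
After=network-online.target docker.socket containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# dockerd talks to the separately managed containerd over its default socket
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```

After placing the unit files, run systemctl daemon-reload && systemctl enable --now docker.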

6. Install docker-compose

docker-compose is a separate binary downloaded from GitHub: https://github.com/docker/compose/releases ; pick the release you need.

# Make the uploaded binary executable and install it
root@harbor-24:~# chmod a+x docker-compose-Linux-x86_64
root@harbor-24:~# cp -a docker-compose-Linux-x86_64 /usr/bin/docker-compose
root@harbor-24:~# docker-compose -v
docker-compose version 1.28.6, build 5db8d86f
3.1.6.2 Install Harbor

Download the Harbor release you need from https://github.com/goharbor/harbor/releases and upload it to the server.

# Install under the /app directory here
root@harbor-24:~# mkdir /app
root@harbor-24:~# cd /app/
root@harbor-24:/app# ls
harbor-offline-installer-v2.5.0.tgz
root@harbor-24:/app# tar xf harbor-offline-installer-v2.5.0.tgz
# Enter the harbor directory and copy the template file to harbor.yml
root@harbor-24:/app# cd harbor/
root@harbor-24:/app/harbor# ls
common.sh harbor.v2.5.0.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
root@harbor-24:/app/harbor# cp harbor.yml.tmpl harbor.yml
# Create the data directory under the current directory
root@harbor-24:/app/harbor# mkdir data
# Edit the config file; https is not needed on this internal network, so it is left commented out
root@harbor-24:/app/harbor# vim harbor.yml
root@harbor-24:/app/harbor# grep -Ev "^\s*#" harbor.yml
hostname: 172.21.20.24
http:
port: 80
harbor_admin_password: 123456
database:
password: root123
max_idle_conns: 100
max_open_conns: 900

data_volume: /app/harbor/data
# Run the installer; --with-trivy enables image vulnerability scanning
root@harbor-24:/app/harbor# ./install.sh --with-trivy
.....
[Step 5]: starting Harbor ...
[+] Running 11/11
⠿ Network harbor_harbor Created 0.0s
⠿ Container harbor-log Started 0.4s
⠿ Container harbor-portal Started 1.2s
⠿ Container harbor-db Started 1.0s
⠿ Container registryctl Started 1.2s
⠿ Container redis Started 1.0s
⠿ Container registry Started 1.0s
⠿ Container trivy-adapter Started 3.2s
⠿ Container harbor-core Started 2.1s
⠿ Container nginx Started 4.2s
⠿ Container harbor-jobservice Started 4.2s
✔ ----Harbor has been installed and started successfully.----
root@harbor-24:/app/harbor# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 32768 0.0.0.0:80 0.0.0.0:*
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 32768 127.0.0.1:41595 0.0.0.0:*
LISTEN 0 32768 127.0.0.1:1514 0.0.0.0:*
LISTEN 0 32768 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
root@harbor-24:/app/harbor# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
10c2f17d9800 goharbor/harbor-jobservice:v2.5.3 "/harbor/entrypoint.…" 2 minutes ago Up 2 minutes (healthy) harbor-jobservice
52a03cddcebd goharbor/nginx-photon:v2.5.3 "nginx -g 'daemon of…" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:80->8080/tcp, :::80->8080/tcp nginx
0027c09e40f2 goharbor/harbor-core:v2.5.3 "/harbor/entrypoint.…" 2 minutes ago Up 2 minutes (healthy) harbor-core
e8897b4d94b1 goharbor/trivy-adapter-photon:v2.5.3 "/home/scanner/entry…" 2 minutes ago Up 2 minutes (healthy) trivy-adapter
a746821f4522 goharbor/redis-photon:v2.5.3 "redis-server /etc/r…" 2 minutes ago Up 2 minutes (healthy) redis
58a7ef9b9048 goharbor/registry-photon:v2.5.3 "/home/harbor/entryp…" 2 minutes ago Up 2 minutes (healthy) registry
248a5dae08ee goharbor/harbor-portal:v2.5.3 "nginx -g 'daemon of…" 2 minutes ago Up 2 minutes (healthy) harbor-portal
1d01ff5de741 goharbor/harbor-db:v2.5.3 "/docker-entrypoint.…" 2 minutes ago Up 2 minutes (healthy) harbor-db
31c0d0a48f2e goharbor/harbor-registryctl:v2.5.3 "/home/harbor/start.…" 2 minutes ago Up 2 minutes (healthy) registryctl
10e969f9a4e8 goharbor/harbor-log:v2.5.3 "/bin/sh -c /usr/loc…" 2 minutes ago Up 2 minutes (healthy) 127.0.0.1:1514->10514/tcp harbor-log

Once the Harbor service is up, open it in a browser.
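Because Harbor is serving plain HTTP here, the Docker daemon on every node that will push or pull images must be told to trust it as an insecure registry first; otherwise docker login fails with an HTTPS error. A sketch (the address is this deployment's Harbor host; note this overwrites any existing daemon.json):

```shell
# On each Docker node: trust the HTTP-only Harbor registry
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["172.20.17.24"]
}
EOF
systemctl restart docker
# Verify with a login (the admin password was set in harbor.yml)
docker login 172.20.17.24
```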


That wraps up this part: all of the prerequisite environment is now in place. The next chapter covers installing the k8s cluster itself.


Deploying a Kubernetes Cluster with kubeasz
https://www.dklwj.com/2023/04/Use-kubeasz-to-deploy-k8s-Cluster.html
Author: 阿伟
Published: April 14, 2023
License