Deployment Architecture
A Kubernetes master node runs three main components: kube-apiserver, kube-scheduler, and kube-controller-manager.

Of these, kube-scheduler and kube-controller-manager can run in cluster mode: leader election selects one active instance, while the remaining instances stand by.

kube-apiserver can run as multiple instances, but the other components need a single, highly available address through which to reach it. This article focuses on solving that high-availability problem for kube-apiserver.

This article uses HAProxy + Keepalived to provide a highly available VIP and load balancing for kube-apiserver. Keepalived provides the highly available VIP through which kube-apiserver is exposed; HAProxy listens on the VIP, connects to all kube-apiserver instances on the backend, and provides health checking and load balancing. kube-apiserver listens on port 6443 by default, so to avoid a conflict HAProxy listens on a different port, 16443 in this setup.

Keepalived periodically checks the state of the local HAProxy process; if HAProxy is detected to be down, the VIP fails over to another node. Per the deployment plan, all components access the kube-apiserver service through port 16443 on the VIP.
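
In practice this means every client of the apiserver is configured against the VIP rather than any single instance. A minimal sketch, assuming the VIP 192.168.122.40 configured later in this article and a typical CA path (both are environment-specific):

kubectl config set-cluster kubernetes \
    --server=https://192.168.122.40:16443 \
    --certificate-authority=/etc/kubernetes/pki/ca.crt \
    --embed-certs=true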

Configure the LB Nodes
On the LB nodes, perform four of the steps from the deployment preparation: set the hostname, configure the firewall, upgrade the kernel to 4.19.x, and set up time synchronization. A sketch of two of these steps follows.
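
A minimal sketch of the hostname and time-sync steps, assuming CentOS 7 with chrony; the firewall and kernel-upgrade steps follow whatever procedure your deployment preparation defines:

[root@K8S-PROD-LB1 ~]# hostnamectl set-hostname K8S-PROD-LB1
[root@K8S-PROD-LB1 ~]# yum install chrony -y
[root@K8S-PROD-LB1 ~]# systemctl enable chronyd && systemctl start chronyd
[root@K8S-PROD-LB1 ~]# chronyc sources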

Install the HA Components
Install HAProxy and Keepalived on all LB nodes:

[root@K8S-PROD-LB1 ~]# yum install haproxy keepalived -y
[root@K8S-PROD-LB2 ~]# yum install haproxy keepalived -y
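
You can confirm that both packages installed correctly before continuing (exact versions depend on your repositories):

[root@K8S-PROD-LB1 ~]# haproxy -v
[root@K8S-PROD-LB1 ~]# keepalived --version
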
Configure HAProxy
Configure HAProxy on the LB nodes by following the steps below, using K8S-PROD-LB1 as the example.

Back up haproxy.cfg
[root@K8S-PROD-LB1 haproxy]# cd /etc/haproxy/
[root@K8S-PROD-LB1 haproxy]# cp haproxy.cfg{,.bak}
[root@K8S-PROD-LB1 haproxy]# > haproxy.cfg
Edit haproxy.cfg
[root@K8S-PROD-LB1 haproxy]# vi haproxy.cfg
global
log 127.0.0.1 local2

chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats

defaults
mode tcp
log global
option tcplog
option httpclose
option dontlognull
option abortonclose
option redispatch
retries 3
maxconn 32000
timeout connect 5000ms
timeout client 2h
timeout server 2h

listen stats
mode http
bind :10086
stats enable
stats uri /admin?stats
stats auth admin:admin
stats admin if TRUE

frontend k8s_apiserver
bind *:16443
mode tcp
default_backend https_sri

backend https_sri
balance roundrobin
server apiserver1_192_168_122_11 192.168.122.11:6443 check inter 2000 fall 2 rise 2 weight 100
server apiserver2_192_168_122_12 192.168.122.12:6443 check inter 2000 fall 2 rise 2 weight 100
server apiserver3_192_168_122_13 192.168.122.13:6443 check inter 2000 fall 2 rise 2 weight 100
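
Before distributing the file, check it for syntax errors:

[root@K8S-PROD-LB1 haproxy]# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid
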
Configure K8S-PROD-LB2
[root@K8S-PROD-LB1 haproxy]# scp haproxy.cfg root@192.168.122.32:/etc/haproxy/
Configure Keepalived
Configure Keepalived on the LB nodes by following the steps below, using K8S-PROD-LB1 as the example.

Back up keepalived.conf
[root@K8S-PROD-LB1 haproxy]# cd /etc/keepalived/
[root@K8S-PROD-LB1 keepalived]# cp keepalived.conf{,.bak}
[root@K8S-PROD-LB1 keepalived]# > keepalived.conf
Edit keepalived.conf
[root@K8S-PROD-LB1 keepalived]# vi keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
dcfenga@sina.com
}
notification_email_from dcfenga@sina.com
smtp_server mail.cluster.com
smtp_connect_timeout 30
router_id LB_KUBE_APISERVER
}

vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 3
}

vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 60
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
unicast_peer {
192.168.122.31
192.168.122.32
}
virtual_ipaddress {
192.168.122.40/24 label eth0:0
}
track_script {
check_haproxy
}
}
Configure K8S-PROD-LB2
On the other LB nodes, change state MASTER to state BACKUP and change priority 100 to 90. If another LB node is added, its state must also be BACKUP and its priority 80, and so on.

[root@K8S-PROD-LB1 keepalived]# scp keepalived.conf root@192.168.122.32:/etc/keepalived/
[root@K8S-PROD-LB2 keepalived]# vi keepalived.conf
...
state BACKUP
priority 90
...
Configure the HAProxy Liveness-Check Script
All LB nodes need the check script (check_haproxy.sh), which stops Keepalived automatically when HAProxy goes down, so the VIP can move to a healthy node:

[root@K8S-PROD-LB1 keepalived]# vi /etc/keepalived/check_haproxy.sh
#!/bin/bash
# A non-zero exit status from `systemctl status haproxy` means HAProxy is not running.
flag=$(systemctl status haproxy &> /dev/null; echo $?)

if [[ $flag != 0 ]]; then
    echo "haproxy is down, stopping keepalived"
    systemctl stop keepalived
fi

[root@K8S-PROD-LB1 keepalived]# chmod +x check_haproxy.sh
[root@K8S-PROD-LB1 keepalived]# scp /etc/keepalived/check_haproxy.sh root@192.168.122.32:/etc/keepalived/
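
The script can be exercised by hand once both services are running; with HAProxy stopped it should print the message and stop Keepalived:

[root@K8S-PROD-LB1 keepalived]# systemctl stop haproxy
[root@K8S-PROD-LB1 keepalived]# /etc/keepalived/check_haproxy.sh
haproxy is down, stopping keepalived
[root@K8S-PROD-LB1 keepalived]# systemctl start haproxy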
Send Keepalived Logs to a Local File
vi /etc/sysconfig/keepalived

...

Change the default KEEPALIVED_OPTIONS="-D" to:

KEEPALIVED_OPTIONS="-D -S 0"

Here "-S 0" selects the local0 syslog facility; /etc/rsyslog.conf must also be modified to route that facility to a file:

...
vi /etc/rsyslog.conf

Following the pattern of the existing local7.* /var/log/boot.log line, add:

# Save keepalived log to keepalived.log

local0.* /var/log/keepalived.log
Then reboot the node.
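
Rebooting picks up both changes; restarting rsyslog alone is also enough for the logging change, after which entries appear in /var/log/keepalived.log once Keepalived runs:

systemctl restart rsyslog
tail -f /var/log/keepalived.log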

Modify the Keepalived systemd Service
[root@K8S-PROD-LB1 keepalived]# vi /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target
# Added line: tie keepalived to the haproxy service
Requires=haproxy.service

[Service]
Type=forking
PIDFile=/var/run/keepalived.pid
KillMode=process
EnvironmentFile=-/etc/sysconfig/keepalived
ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

[root@K8S-PROD-LB1 keepalived]# scp /usr/lib/systemd/system/keepalived.service root@192.168.122.32:/usr/lib/systemd/system/
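
After editing the unit file, reload systemd on both nodes so the change takes effect:

[root@K8S-PROD-LB1 keepalived]# systemctl daemon-reload
[root@K8S-PROD-LB2 ~]# systemctl daemon-reload
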
Allow the VRRP Protocol Through Firewalld (Optional)
If firewalld is running:

firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface eth0 --destination 192.168.122.40 --protocol vrrp -j ACCEPT
firewall-cmd --reload
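
The rule can be confirmed afterwards:

firewall-cmd --direct --get-all-rules
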
Start the LB Components
Start HAProxy
[root@K8S-PROD-LB1 keepalived]# systemctl enable haproxy && systemctl start haproxy && systemctl status haproxy
[root@K8S-PROD-LB2 ~]# systemctl enable haproxy && systemctl start haproxy && systemctl status haproxy
Start Keepalived
[root@K8S-PROD-LB1 ~]# systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
[root@K8S-PROD-LB2 ~]# systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
Test the LB

VIP failover test

  • View the VIP

Run the ip a command and check that the VIP is present:

[root@K8S-PROD-LB1 keepalived]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.122.31/24 brd 192.168.122.255 scope global eth0
inet 192.168.122.40/24 scope global secondary eth0:0

  • VIP failover

When the MASTER node goes down, the VIP 192.168.122.40 fails over to one of the BACKUP nodes; when the MASTER node recovers, the VIP fails back to it.
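
A quick way to observe this while both nodes are up: stopping HAProxy on LB1 triggers the check script, which stops Keepalived and releases the VIP. Because the script stopped Keepalived, it must be started again on recovery:

[root@K8S-PROD-LB1 ~]# systemctl stop haproxy
[root@K8S-PROD-LB2 ~]# ip a | grep 192.168.122.40
inet 192.168.122.40/24 scope global secondary eth0:0
[root@K8S-PROD-LB1 ~]# systemctl start haproxy && systemctl start keepalived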

Check the HAProxy Ports
[root@K8S-PROD-LB1 ~]# netstat -ntlp | grep haproxy
tcp    0    0 0.0.0.0:10086    0.0.0.0:*    LISTEN    21301/haproxy
tcp    0    0 0.0.0.0:16443    0.0.0.0:*    LISTEN    21301/haproxy
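
With the listeners up, the TCP path to kube-apiserver through the VIP can be probed from any node. The TLS handshake completing is the point here; without client credentials the apiserver may answer 401 or 403, depending on its anonymous-auth setting:

curl -k https://192.168.122.40:16443/healthz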
Check the HAProxy Service Status

  • NAT configuration (192.168.191.32 here is an externally reachable address on the host that is DNATed to the VIP)

iptables -t nat -A PREROUTING -m tcp -p tcp -d 192.168.191.32 --dport 10086 -j DNAT --to-destination 192.168.122.40:10086
iptables -t nat -A PREROUTING -m tcp -p tcp -d 192.168.191.32 --dport 16443 -j DNAT --to-destination 192.168.122.40:16443

  • Access the web UI

Open http://192.168.191.32:10086/admin?stats and log in to HAProxy to check that the service is healthy. The credentials are the stats auth value under listen stats in haproxy.cfg: admin:admin. The page shows the status of each apiserver backend.
