High availability topology
kubeadm offers two different ways to deploy a highly available Kubernetes cluster:

Stacked control plane nodes
This approach requires less infrastructure. The etcd members run on the same nodes as the control plane components.

External etcd nodes
This approach requires more infrastructure. The control plane nodes and the etcd members are on separate machines.

Customizing the control plane configuration
kubeadm exposes the extraArgs field on the ClusterConfiguration object, which overrides the default flags passed to control plane components such as the APIServer, ControllerManager, and Scheduler.

The configuration of each component is defined with the following fields: apiServer, controllerManager, scheduler.

The extraArgs field consists of key: value pairs. To override the flags of a control plane component:

  • add the appropriate field to the configuration;
  • add the flag values you want to override to that field;
  • run kubeadm init with --config (see the sketch below).
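
For illustration, a minimal ClusterConfiguration fragment that overrides one API server flag might look like the following sketch; the flag value here is a placeholder, and the complete configuration actually used in this guide appears further below.

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.8
apiServer:
  extraArgs:
    # placeholder override: keep audit logs for 20 days
    audit-log-maxage: "20"
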
Create the seed node configuration file
Perform this only on the Master1 node.

kubeadm config print init-defaults > kubeadm-default-config.yaml

Example kubeadm-default-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-prod-m1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Structure of kubeadm-default-config.yaml:

  • InitConfiguration: defines initialization settings such as the bootstrap token and the apiserver address;
  • ClusterConfiguration: defines settings for the master components such as apiserver, etcd, network, scheduler, and controller-manager;
  • KubeletConfiguration: defines settings for the kubelet component;
  • KubeProxyConfiguration: defines settings for the kube-proxy component.

The default kubeadm-default-config.yaml contains only the InitConfiguration and ClusterConfiguration parts. Example files for the other two parts can be generated as follows:

Generate a KubeletConfiguration example file: kubeadm-kubelet-config.yaml

kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm-kubelet-config.yaml

Generate a KubeProxyConfiguration example file: kubeadm-kubeproxy-config.yaml

kubeadm config print init-defaults --component-configs KubeProxyConfiguration > kubeadm-kubeproxy-config.yaml

Based on the three example files above, assemble a combined kubeadm-config.yaml and, according to the plan, adjust the Pod, DNS, and Cluster IP settings:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # IP the API server binds to; here it is the K8S-PROD-M1 node IP
  advertiseAddress: 192.168.122.11
  # Port the API server binds to; the default is 6443
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  # Name of the current node
  name: k8s-prod-m1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  # API server related configuration
  extraArgs:
    # Audit log settings
    audit-log-maxage: "20"
    audit-log-maxbackup: "10"
    audit-log-maxsize: "100"
    audit-log-path: "/var/log/kube-audit/audit.log"
    audit-policy-file: "/etc/kubernetes/audit-policy.yaml"
    audit-log-format: json
  # Audit logging is enabled, so the audit policy file and log directory on the host must be mounted
  extraVolumes:
  - name: "audit-config"
    hostPath: "/etc/kubernetes/audit-policy.yaml"
    mountPath: "/etc/kubernetes/audit-policy.yaml"
    readOnly: true
    pathType: "File"
  - name: "audit-log"
    hostPath: "/var/log/kube-audit"
    mountPath: "/var/log/kube-audit"
    pathType: "DirectoryOrCreate"
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
# Address actually used to reach the API server: VIP:PORT
controlPlaneEndpoint: "192.168.122.40:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    # Local path for the etcd data volume; ideally place it on a dedicated disk
    dataDir: /data/apps/etcd
# Can be changed to your own image registry
imageRepository: harbor.cluster.local/library
kind: ClusterConfiguration
# Kubernetes version to deploy: v1.18.8
kubernetesVersion: v1.18.8
networking:
  dnsDomain: cluster.local
  # Service subnet, the range for Service (cluster) IPs
  serviceSubnet: 10.96.0.0/12
  # Pod subnet; must match the network plugin configuration
  podSubnet: 10.244.0.0/16
scheduler: {}
---
# Kubelet configuration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
# Default CoreDNS service IP
- 10.96.0.10
clusterDomain: cluster.local
---
# Kube-proxy configuration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  minSyncPeriod: 5s
  syncPeriod: 5s
  # Weighted round-robin scheduling
  scheduler: "wrr"
Configuration notes:

  • 192.168.122.40 in the configuration file is the HAProxy API proxy IP that is set up later;

  • advertiseAddress: 192.168.122.11 and bindPort: 6443 are the address and port the API server process binds to;

  • controlPlaneEndpoint: "192.168.122.40:16443" is the address actually used to access the APIServer;

  • this layout keeps the APIServer highly available: a HAProxy instance is deployed on the LB node as a layer-4 (TCP) proxy in front of the APIServers (see the sketch below).
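
For reference, a minimal haproxy.cfg fragment for that layer-4 proxy might look like the sketch below; the master IPs follow the plan used in this guide, the frontend/backend names are illustrative, and the complete load balancer setup is covered separately in this series.

frontend k8s-apiserver
    bind *:16443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server k8s-prod-m1 192.168.122.11:6443 check
    server k8s-prod-m2 192.168.122.12:6443 check
    server k8s-prod-m3 192.168.122.13:6443 check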

Create the audit policy file
vi /etc/kubernetes/audit-policy.yaml

# This is required.
apiVersion: audit.k8s.io/v1
kind: Policy
# Don't generate audit events for requests in the RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at the RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["pods"]
  # Log "pods/log" and "pods/status" at the Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # Catch-all rule: log everything else at the Metadata level
  - level: Metadata
    omitStages:
    - "RequestReceived"

Prepare the images (optional)

The following images need to be available on the Master and Worker nodes.

  • Required images

After kubeadm is installed, the required images can be listed with the kubeadm config images list command:

k8s.gcr.io/kube-apiserver:v1.18.8
k8s.gcr.io/kube-controller-manager:v1.18.8
k8s.gcr.io/kube-scheduler:v1.18.8
k8s.gcr.io/kube-proxy:v1.18.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Alternatively, once the seed node's kubeadm-config.yaml has been created, the required images and versions can be determined with the following command:

List the images to download

[root@K8S-PROD-M1 workspace]# kubeadm config images list --config kubeadm-config.yaml
W0708 15:55:19.714210 3943 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.8
k8s.gcr.io/kube-controller-manager:v1.18.8
k8s.gcr.io/kube-scheduler:v1.18.8
k8s.gcr.io/kube-proxy:v1.18.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Pull the images locally according to the configuration file

kubeadm config images pull --config kubeadm-config.yaml
Note: these images cannot be pulled directly from within mainland China; a VPN/proxy is required.

  • Fetching from Docker Hub

gcr_image_download.sh

#!/bin/bash

images=(
kube-apiserver:v1.18.8
kube-controller-manager:v1.18.8
kube-scheduler:v1.18.8
kube-proxy:v1.18.8
pause:3.2
etcd:3.4.3-0
)

for imageName in ${images[@]} ; do
  docker pull aiotceo/$imageName
  docker tag aiotceo/$imageName harbor.cluster.local/library/$imageName
  docker push harbor.cluster.local/library/$imageName
  docker rmi aiotceo/$imageName
done

docker pull coredns/coredns:1.6.7
docker tag coredns/coredns:1.6.7 harbor.cluster.local/library/coredns:1.6.7
docker push harbor.cluster.local/library/coredns:1.6.7
docker rmi coredns/coredns:1.6.7

docker pull quay.io/coreos/flannel:v0.12.0-amd64
docker tag quay.io/coreos/flannel:v0.12.0-amd64 harbor.cluster.local/library/flannel:v0.12.0-amd64
docker push harbor.cluster.local/library/flannel:v0.12.0-amd64
docker rmi quay.io/coreos/flannel:v0.12.0-amd64
Note: to set up a private Harbor registry, refer to: Harbor Deployment (Docker Compose approach).

Initialize the cluster: initialize the seed node
kubeadm init --config=kubeadm-config.yaml --upload-certs

  • Flag description

--upload-certs: automatically copies the certificates when additional master nodes join

Initialization output
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 192.168.122.40:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:fb327a5dba129f7cec83d00f54bef6ee1d475a925aaa5573943a584fce4c23f8 \
--control-plane --certificate-key abdfbadb5acc7c7b7868badf53d323a4fc6deef5402603dd75ce602315a183d5

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.40:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:fb327a5dba129f7cec83d00f54bef6ee1d475a925aaa5573943a584fce4c23f8

Initialization flow

kubeadm init mainly performs the following steps:

  • [init]: start initialization with the specified version;

  • [preflight]: run pre-flight checks and download the required Docker images;

  • [kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml; the kubelet cannot start without this file, so a kubelet started before initialization does not actually come up successfully;

  • [certificates]: generate the certificates used by Kubernetes and store them under /etc/kubernetes/pki;

  • [kubeconfig]: generate the kubeconfig files under /etc/kubernetes; the components need them to communicate with each other;

  • [control-plane]: install the Master components from the YAML files under /etc/kubernetes/manifests;

  • [etcd]: install the etcd service using /etc/kubernetes/manifests/etcd.yaml;

  • [wait-control-plane]: wait for the Master components deployed by the control plane to start;

  • [apiclient]: check the status of the Master component services;

  • [uploadconfig]: upload the configuration;

  • [kubelet]: configure the kubelet via a ConfigMap;

  • [patchnode]: record CNI information on the Node via annotations;

  • [mark-control-plane]: label the current node with the master role and taint it as unschedulable, so that by default Pods are not scheduled on Master nodes;

  • [bootstrap-token]: generate the bootstrap token and note it down; it is used later by kubeadm join when adding nodes to the cluster;

  • [addons]: install the add-ons CoreDNS and kube-proxy.

Command-line alternative

kubeadm init --apiserver-advertise-address=192.168.122.11 \
  --apiserver-bind-port=6443 \
  --control-plane-endpoint=192.168.122.40 \
  --ignore-preflight-errors=swap \
  --image-repository=harbor.cluster.local/library \
  --kubernetes-version=v1.18.8 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --service-dns-domain=cluster.local

Configure kubectl
On both Master and Worker nodes, kubectl must be configured before it can be used. There are two ways to do so.

Via a configuration file
[root@K8S-PROD-M1 ~]# mkdir -p $HOME/.kube
[root@K8S-PROD-M1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@K8S-PROD-M1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Via an environment variable
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc

source ~/.bashrc
Test kubectl
With kubectl configured, check the current cluster state:

[root@K8S-PROD-M1 workspace]# kubectl get no
NAME STATUS ROLES AGE VERSION
K8S-PROD-M1 NotReady master 14m v1.18.4

[root@K8S-PROD-M1 workspace]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}

[root@K8S-PROD-M1 workspace]# kubectl get all --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-86c44fc579-fqsrl 0/1 Pending 0 14m
kube-system pod/coredns-86c44fc579-kdbhs 0/1 Pending 0 14m
kube-system pod/etcd-K8S-PROD-M1 1/1 Running 0 15m
kube-system pod/kube-apiserver-K8S-PROD-M1 1/1 Running 0 15m
kube-system pod/kube-controller-manager-K8S-PROD-M1 1/1 Running 0 15m
kube-system pod/kube-proxy-d89q2 1/1 Running 1 14m
kube-system pod/kube-scheduler-K8S-PROD-M1 1/1 Running 0 15m

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 15m

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 15m

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 0/2 2 0 15m

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-86c44fc579 2 2 0 14m
Because no network plugin has been installed yet, CoreDNS is Pending and the node is NotReady. The errors from kubectl get cs occur because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests set the default port to 0.
Fix:

  • kube-controller-manager

Comment out --port=0 in the manifest (/etc/kubernetes/manifests/kube-controller-manager.yaml):

[root@K8S-PROD-M1 ~]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml
...
    - --node-cidr-mask-size=24
    #- --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
...
  • kube-scheduler

Comment out --port=0 in the manifest (/etc/kubernetes/manifests/kube-scheduler.yaml):

[root@K8S-PROD-M1 ~]# vi /etc/kubernetes/manifests/kube-scheduler.yaml
...
    - --leader-elect=true
    #- --port=0
    image: harbor.cluster.local/library/kube-scheduler:v1.18.8
...
  • Restart kubelet

systemctl restart kubelet.service
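
Once the kubelet has restarted and the static Pods for kube-controller-manager and kube-scheduler have been re-created, the component status should report Healthy again; a quick check:

kubectl get cs
kubectl get po -n kube-system | grep -E 'kube-controller-manager|kube-scheduler'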

  • Deploy the network plugin

Kubernetes supports multiple network solutions; Flannel is deployed here as an example.

Download the deployment manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Modify the Flannel configuration

  • Update the image address

The images referenced in kube-flannel.yml are pulled from the coreos repository on quay.io, which may fail; replace them with your own registry:

Check

[root@K8S-PROD-M1 workspace]# grep -i "flannel:" kube-flannel.yml
image: quay.io/coreos/flannel:v0.12.0-amd64
image: quay.io/coreos/flannel:v0.12.0-amd64

Replace

[root@K8S-PROD-M1 workspace]# sed -i 's#quay.io/coreos/flannel:v0.12.0-amd64#harbor.cluster.local/library/flannel:v0.12.0-amd64#' kube-flannel.yml

  • Adjust the Pod IP range and backend mode

Make sure that in net-conf.json, "Network": "10.244.0.0/16" matches the planned Pod subnet and "Type" is "vxlan":

...
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
...

  • Delete the unneeded DaemonSets

Depending on the node architecture, keep kube-flannel-ds-amd64 and delete the DaemonSets that are not needed: kube-flannel-ds-arm, kube-flannel-ds-arm64, kube-flannel-ds-ppc64le, kube-flannel-ds-s390x (see the example below).
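
If the unmodified manifest has already been applied (as in the output below, where all architecture variants are created), the surplus DaemonSets can also be removed from the running cluster afterwards; a sketch:

kubectl -n kube-system delete daemonset \
  kube-flannel-ds-arm kube-flannel-ds-arm64 \
  kube-flannel-ds-ppc64le kube-flannel-ds-s390x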

Install Flannel
[root@K8S-PROD-M1 workspace]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Check the cluster state
Node and Pod status are all OK; all core components are running normally:

[root@K8S-PROD-M1 workspace]# kubectl get no
NAME STATUS ROLES AGE VERSION
K8S-PROD-M1 Ready master 68m v1.18.8

[root@K8S-PROD-M1 workspace]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c44fc579-fqsrl 1/1 Running 0 68m
coredns-86c44fc579-kdbhs 1/1 Running 0 68m
etcd-K8S-PROD-M1 1/1 Running 0 69m
kube-apiserver-K8S-PROD-M1 1/1 Running 0 69m
kube-controller-manager-K8S-PROD-M1 1/1 Running 0 69m
kube-flannel-ds-amd64-x9qlj 1/1 Running 0 3m59s
kube-proxy-d89q2 1/1 Running 1 68m
kube-scheduler-K8S-PROD-M1 1/1 Running 0 69m
Verify the network
[root@K8S-PROD-M1 workspace]# kubectl run flannel-net-test --image=alpine --replicas=1 sleep 3600
[root@K8S-PROD-M1 workspace]# kubectl get pod
NAME READY STATUS RESTARTS AGE
flannel-net-test-7bd9fb9d88-g2kc8 1/1 Running 0 9s

[root@K8S-PROD-M1 workspace]# kubectl exec -it flannel-net-test-7bd9fb9d88-g2kc8 sh
/ # ifconfig
...
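
Besides checking the interfaces, service DNS resolution can also be verified from the test pod; a sketch, reusing the pod name from the output above and assuming the alpine image's busybox nslookup is available:

[root@K8S-PROD-M1 workspace]# kubectl exec -it flannel-net-test-7bd9fb9d88-g2kc8 -- nslookup kubernetes.default.svc.cluster.local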
Complete the Kubernetes cluster deployment
The kubeadm init output contains two kubeadm join commands; the one with --control-plane joins additional Master nodes. Tokens have a limited lifetime; if the token is reported as expired, create a new one:

Create a new join token on the Master node

[root@K8S-PROD-M1 kubernetes]# kubeadm token create --print-join-command
W0716 14:52:34.678321 27243 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.122.60:16443 --token 0wbr4z.cp63s7zins4lkcgh --discovery-token-ca-cert-hash sha256:3abcb890f33bf37c9f0d0232df89e4af77353be3154dcb860080f8f46110cefa

Create a new certificate key on the Master node

[root@K8S-PROD-M1 ~]# kubeadm init phase upload-certs --upload-certs
W0716 14:57:33.514126 32010 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
be4184ac51619ce7fbfc3d913558a869c75fdfb808c18cdc431d9be457f95b61
If the token has expired, the following error is produced:

......
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "abcdef"
To see the stack trace of this error execute with --v=5 or higher
If the certificate key has expired, the following error is produced:

......
error execution phase control-plane-prepare/download-certs: error downloading certs: error downloading the secret: Secret "kubeadm-certs" was not found in the "kube-system" Namespace. This Secret might have expired. Please, run kubeadm init phase upload-certs --upload-certs on a control plane to generate a new one
To see the stack trace of this error execute with --v=5 or higher
Join the Master nodes

  • Create the audit policy file

Create the directory on the other two servers

ssh K8S-PROD-M2 "mkdir -p /etc/kubernetes/"
ssh K8S-PROD-M3 "mkdir -p /etc/kubernetes/"

  • Copy the audit policy file

K8S-PROD-M2 node

scp /etc/kubernetes/audit-policy.yaml K8S-PROD-M2:/etc/kubernetes/

K8S-PROD-M3 node

scp /etc/kubernetes/audit-policy.yaml K8S-PROD-M3:/etc/kubernetes/

  • Join each master

First test connectivity to the API server

curl -k https://127.0.0.1:6443

The following response is returned

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

  • Extra flags

These set the apiserver-advertise-address and apiserver-bind-port individually for each Master:

kubeadm join 192.168.122.40:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:fb327a5dba129f7cec83d00f54bef6ee1d475a925aaa5573943a584fce4c23f8 \
--control-plane --certificate-key abdfbadb5acc7c7b7868badf53d323a4fc6deef5402603dd75ce602315a183d5 \
--apiserver-advertise-address 192.168.122.12 \
--apiserver-bind-port 6443

kubeadm join 192.168.122.40:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:fb327a5dba129f7cec83d00f54bef6ee1d475a925aaa5573943a584fce4c23f8 \
--control-plane --certificate-key abdfbadb5acc7c7b7868badf53d323a4fc6deef5402603dd75ce602315a183d5 \
--apiserver-advertise-address 192.168.122.13 \
--apiserver-bind-port 6443
Note: after the second Master node joins, kubectl commands may not work properly, because etcd then has only two members and leader election can fail; once the third Master node has been added, the cluster state returns to normal (see the check below).
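
To confirm etcd membership once all Masters have joined, something like the following can be run from a Master node; this is only a sketch, with the etcd pod name (adjust it to your node name) and the default certificate paths used by kubeadm:

kubectl -n kube-system exec etcd-k8s-prod-m1 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list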

  • Enable kubelet at boot

[root@K8S-PROD-M1 ~]# systemctl enable kubelet.service && systemctl status kubelet.service
[root@K8S-PROD-M2 ~]# systemctl enable kubelet.service && systemctl status kubelet.service
[root@K8S-PROD-M3 ~]# systemctl enable kubelet.service && systemctl status kubelet.service
Join the Worker nodes
kubeadm join 192.168.122.40:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:fb327a5dba129f7cec83d00f54bef6ee1d475a925aaa5573943a584fce4c23f8

  • Enable kubelet at boot

[root@K8S-PROD-W1 ~]# systemctl enable kubelet.service && systemctl status kubelet.service
[root@K8S-PROD-W2 ~]# systemctl enable kubelet.service && systemctl status kubelet.service
[root@K8S-PROD-W3 ~]# systemctl enable kubelet.service && systemctl status kubelet.service

  • Check the cluster state again

[root@K8S-PROD-M1 pki]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-prod-m1 Ready master 82m v1.18.8
k8s-prod-m2 Ready master 29m v1.18.8
k8s-prod-m3 Ready master 27m v1.18.8
k8s-prod-w1 Ready <none> 5m21s v1.18.8
k8s-prod-w2 Ready <none> 3m53s v1.18.8
k8s-prod-w3 Ready <none> 2m49s v1.18.8
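
To confirm that the control plane is actually redundant, it can also be checked that the core components are running on all three Master nodes; a sketch:

kubectl get po -n kube-system -o wide | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'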
