1. Introduction

  Previous articles installed Kubernetes clusters with kubeadm, but many companies build their clusters from binary packages instead. This article walks through building a complete, highly available cluster from binaries. Compared with kubeadm, the binary approach is considerably more involved: you generate the signed certificates yourself, and every component must be configured and installed step by step.
  This article uses v1.20.2, the latest release as of January 14, 2021.
  

2. Environment Preparation

2.1 Machine Plan

IP address   | Hostname | Specs | OS         | Role              | Installed software
172.10.1.11  | master1  | 2C4G  | CentOS 7.6 | master            | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.10.1.12  | master2  | 2C4G  | CentOS 7.6 | master            | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.10.1.13  | master3  | 2C4G  | CentOS 7.6 | master            | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.10.1.14  | node1    | 2C4G  | CentOS 7.6 | worker            | kubelet, kube-proxy
172.10.1.15  | node2    | 2C4G  | CentOS 7.6 | worker            | kubelet, kube-proxy
172.10.1.16  | node3    | 2C4G  | CentOS 7.6 | worker            | kubelet, kube-proxy
172.10.0.20  | /        | /     | /          | load-balancer VIP | /

Note: the VIP here is a cloud provider SLB; you could instead implement it with haproxy + keepalived, as sketched below.
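A minimal sketch of the self-hosted alternative, assuming two dedicated load-balancer hosts that float the VIP 172.10.0.20 between them with keepalived (the keepalived VRRP config is not shown here):

# on each LB host
yum install -y haproxy keepalived
cat >> /etc/haproxy/haproxy.cfg << EOF
frontend k8s-apiserver
    bind *:6443
    mode tcp
    default_backend k8s-masters
backend k8s-masters
    mode tcp
    balance roundrobin
    server master1 172.10.1.11:6443 check
    server master2 172.10.1.12:6443 check
    server master3 172.10.1.13:6443 check
EOF
systemctl enable --now haproxy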
 

2.2 Software Versions

Software                                                                     | Version
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | v1.20.2
etcd                                                                         | v3.4.13
calico                                                                       | v3.14
coredns                                                                      | 1.7.0

 

3. Building the Cluster

3.1 Basic Machine Configuration

Perform the following configuration on all six machines.

3.1.1 Set the hostnames

Set the hostnames to master1, master2, master3, node1, node2, node3 according to the plan above; an example follows.
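For example, on 172.10.1.11 (repeat on each machine with its matching name):

hostnamectl set-hostname master1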
  

3.1.2 Configure the hosts file

Append the cluster entries to /etc/hosts on every machine:

cat >> /etc/hosts << EOF
172.10.1.11 master1
172.10.1.12 master2
172.10.1.13 master3
172.10.1.14 node1
172.10.1.15 node2
172.10.1.16 node3
EOF

  

3.1.3 Disable the firewall and SELinux

systemctl stop firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

  

3.1.4 Disable swap

swapoff -a
To disable swap permanently, edit /etc/fstab and comment out the swap line; a hedged one-liner follows.
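A sketch of the fstab edit (it comments out any line with a whitespace-delimited swap field; double-check the file afterwards):

sed -ri '/\sswap\s/s/^/#/' /etc/fstab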

  

3.1.5 Time synchronization

yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources

  

3.1.6 Tune kernel parameters

modprobe br_netfilter      # the bridge-nf-call keys below require this module to be loaded
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

  

3.1.7 Load the IPVS modules

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
lsmod | grep ip_vs
lsmod | grep nf_conntrack_ipv4
yum install -y ipvsadm

 

3.2 Set Up the Working Directory

  Every machine needs certificate files, component configuration files, and component service unit files. We generate all of them centrally on master1 and then distribute them to the other machines. Perform the following on master1.

[root@master1 ~]# mkdir -p /data/work
Note: this directory holds the generated configuration and certificate files; all file-generation work below happens here.
[root@master1 ~]# ssh-keygen -t rsa -b 2048
Distribute the public key to the other five machines so that master1 can log in to them without a password; a sketch follows.
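A loop like the following pushes the key to the other five machines (it assumes the hostnames from 3.1.2 resolve and prompts for each root password once):

[root@master1 ~]# for i in master2 master3 node1 node2 node3; do ssh-copy-id root@$i; done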

 

3.3 Build the etcd Cluster

3.3.1 Create the etcd directories

[root@master1 ~]# mkdir -p /etc/etcd          # configuration files
[root@master1 ~]# mkdir -p /etc/etcd/ssl      # certificate files

 

3.3.2 Create the etcd certificates

Download the tools

[root@master1 work]# cd /data/work/
[root@master1 work]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@master1 work]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@master1 work]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

Install the tools

[root@master1 work]# chmod +x cfssl*
[root@master1 work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master1 work]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master1 work]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

Create the CA CSR file

[root@master1 work]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
      "expiry": "87600h"
  }
}

Note:
CN: Common Name. kube-apiserver extracts this field from the certificate as the requesting user name (User Name); browsers use it to verify whether a site is legitimate.
O: Organization. kube-apiserver extracts this field from the certificate as the group (Group) the requesting user belongs to.

Create the CA certificate

[root@master1 work]# cfssl gencert -initca ca-csr.json  | cfssljson -bare ca

Configure the CA signing policy

[root@master1 work]# vim ca-config.json
{
  "signing": {
      "default": {
          "expiry": "87600h"
      },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "87600h"
          }
      }
  }
}

Create the etcd CSR file

[root@master1 work]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Hubei",
    "L": "Wuhan",
    "O": "k8s",
    "OU": "system"
  }]
}

Generate the certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@master1 work]# ls etcd*.pem
etcd-key.pem  etcd.pem

 

3.3.3 Deploy the etcd cluster

Download the etcd package

[root@master1 work]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
[root@master1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* master2:/usr/local/bin/
[root@master1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* master3:/usr/local/bin/

Create the configuration file

[root@master1 work]# vim etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.10.1.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.10.1.11:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.10.1.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.10.1.11:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.10.1.11:2380,etcd2=https://172.10.1.12:2380,etcd3=https://172.10.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Note:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
 
Create the service unit file
Option 1: start with a separate configuration file

[root@master1 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

 
Option 2: start with all flags inline (no configuration file)

[root@master1 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --name=etcd1 \
  --data-dir=/var/lib/etcd/default.etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://172.10.1.11:2380 \
  --listen-client-urls=https://172.10.1.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.10.1.11:2379 \
  --initial-advertise-peer-urls=https://172.10.1.11:2380 \
  --initial-cluster=etcd1=https://172.10.1.11:2380,etcd2=https://172.10.1.12:2380,etcd3=https://172.10.1.13:2380 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note: this article uses option 1.
  
Distribute the files to the other nodes

[root@master1 work]# cp ca*.pem /etc/etcd/ssl/
[root@master1 work]# cp etcd*.pem /etc/etcd/ssl/
[root@master1 work]# cp etcd.conf /etc/etcd/
[root@master1 work]# cp etcd.service /usr/lib/systemd/system/
[root@master1 work]# for i in master2 master3;do rsync -vaz etcd.conf $i:/etc/etcd/;done
[root@master1 work]# for i in master2 master3;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done
[root@master1 work]# for i in master2 master3;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done

Note: on master2 and master3, change the etcd name and IP addresses in the configuration file accordingly, and create the directory /var/lib/etcd/default.etcd. An example for master2 follows.
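A sketch for master2; the second sed only touches the LISTEN/ADVERTISE lines so that ETCD_INITIAL_CLUSTER keeps all three members:

[root@master2 ~]# sed -i 's/^ETCD_NAME="etcd1"/ETCD_NAME="etcd2"/' /etc/etcd/etcd.conf
[root@master2 ~]# sed -i '/LISTEN\|ADVERTISE/s/172.10.1.11/172.10.1.12/' /etc/etcd/etcd.conf
[root@master2 ~]# mkdir -p /var/lib/etcd/default.etcd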
 
Start the etcd cluster

[root@master1 work]# mkdir -p /var/lib/etcd/default.etcd
[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable etcd.service
[root@master1 work]# systemctl start etcd.service
[root@master1 work]# systemctl status etcd

Note: the first start may hang for a while, because each node waits for the others to come up.
 
Check cluster health

[root@master1 work]# ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 endpoint health
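Besides endpoint health, endpoint status prints the leader and raft terms, which is handy for a quick sanity check:

[root@master1 work]# ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 endpoint status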

3.4 Deploying the Kubernetes Components

3.4.1 Download the packages

[root@master1 work]# wget https://dl.k8s.io/v1.20.2/kubernetes-server-linux-amd64.tar.gz
[root@master1 work]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@master1 work]# cd kubernetes/server/bin/
[root@master1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master2:/usr/local/bin/
[root@master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master3:/usr/local/bin/
[root@master1 bin]# for i in node1 node2 node3;do rsync -vaz kubelet kube-proxy $i:/usr/local/bin/;done
[root@master1 bin]# cd /data/work/

3.4.2 Create the working directories

[root@master1 work]# mkdir -p /etc/kubernetes/        # component configuration files
[root@master1 work]# mkdir -p /etc/kubernetes/ssl     # component certificate files
[root@master1 work]# mkdir /var/log/kubernetes        # component log files

3.4.3 Deploy kube-apiserver

Create the CSR file

[root@master1 work]# vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13",
    "172.10.1.14",
    "172.10.1.15",
    "172.10.1.16",
    "172.10.0.20",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Note:
If the hosts field is not empty, it must list every IP or domain name authorized to use the certificate.
Because this certificate is later used by the Kubernetes master cluster, include all master node IPs, the load-balancer VIP, and the first IP of the service network (the first IP of the service-cluster-ip-range passed to kube-apiserver; here 10.255.0.1).

Generate the certificate and token file

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@master1 work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Create the configuration file

[root@master1 work]# vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=172.10.1.11 \
  --secure-port=6443 \
  --advertise-address=172.10.1.11 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

Note:
--logtostderr: log to stderr
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization modes; enables RBAC authorization and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver HTTPS certificate
--etcd-xxxfile: certificates for connecting to etcd
--audit-log-xxx: audit log settings
--service-account-signing-key-file, --service-account-issuer: required on v1.20 and later (they cannot be commented inline in the EnvironmentFile, so they are noted here instead)
 
Create the service unit file

[root@master1 work]# vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute the files to the other nodes

[root@master1 work]# cp ca*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp token.csv /etc/kubernetes/
[root@master1 work]# cp kube-apiserver.conf /etc/kubernetes/
[root@master1 work]# cp kube-apiserver.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz token.csv master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz token.csv master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver*.pem master2:/etc/kubernetes/ssl/    # note: rsync creates only the final path component; ssl/ is created automatically if missing, but the parent /etc/kubernetes must already exist
[root@master1 work]# rsync -vaz kube-apiserver*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz ca*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz ca*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-apiserver.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver.conf master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver.service master2:/usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-apiserver.service master3:/usr/lib/systemd/system/

Note: on master2 and master3, change the IP addresses in the configuration file to each machine's own IP; an example follows.
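For example, on master2 (the sed targets only the two address flags, so the etcd endpoint list is untouched):

[root@master2 ~]# sed -i 's/--bind-address=172.10.1.11/--bind-address=172.10.1.12/;s/--advertise-address=172.10.1.11/--advertise-address=172.10.1.12/' /etc/kubernetes/kube-apiserver.conf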
 
Start the service

[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable kube-apiserver
[root@master1 work]# systemctl start kube-apiserver
[root@master1 work]# systemctl status kube-apiserver
Test:
[root@master1 work]# curl --insecure https://172.10.1.11:6443/
Any response means the apiserver is up (with anonymous auth disabled, a 401 JSON body is expected).

3.4.4 Deploy kubectl

Create the CSR file

[root@master1 work]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}

Explanation:
Later, kube-apiserver uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods, and so on).
kube-apiserver predefines some RoleBindings for RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API.
O sets this certificate's Group to system:masters. When a client presents the certificate to kube-apiserver, authentication succeeds because the certificate is CA-signed, and because the certificate's group is the pre-authorized system:masters, the client is granted access to all APIs.
Note:
This admin certificate is used later to generate the administrator's kubeconfig file. RBAC is the recommended way to control roles and permissions in Kubernetes; Kubernetes takes the certificate's CN field as the User and the O field as the Group.
"O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding will fail.
 
Generate the certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master1 work]# cp admin*.pem /etc/kubernetes/ssl/

Create the kubeconfig file
The kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube.config
Set the client credentials
[root@master1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
Set the context
[root@master1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Set the default context
[root@master1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@master1 work]# mkdir ~/.kube
[root@master1 work]# cp kube.config ~/.kube/config
Grant the kubernetes user permission to access the kubelet API
[root@master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

Check component status
Once the steps above are done, kubectl can talk to kube-apiserver:

[root@master1 work]# kubectl cluster-info
[root@master1 work]# kubectl get componentstatuses
[root@master1 work]# kubectl get all --all-namespaces

Copy the kubectl config to the other master nodes

[root@master1 work]# rsync -vaz /root/.kube/config master2:/root/.kube/
[root@master1 work]# rsync -vaz /root/.kube/config master3:/root/.kube/

Configure kubectl command completion

[root@master1 work]# yum install -y bash-completion
[root@master1 work]# source /usr/share/bash-completion/bash_completion
[root@master1 work]# source <(kubectl completion bash)
[root@master1 work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master1 work]# source '/root/.kube/completion.bash.inc'
[root@master1 work]# source $HOME/.bash_profile
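To make completion survive new login shells, a sketch that appends the two source lines to ~/.bash_profile:

[root@master1 work]# cat >> ~/.bash_profile << EOF
source /usr/share/bash-completion/bash_completion
source ~/.kube/completion.bash.inc
EOF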

3.4.5 Deploy kube-controller-manager

Create the CSR file

[root@master1 work]# vim kube-controller-manager-csr.json
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "172.10.1.11",
      "172.10.1.12",
      "172.10.1.13"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

Note:
The hosts list contains all kube-controller-manager node IPs.
CN and O are both system:kube-controller-manager; the Kubernetes built-in ClusterRoleBinding system:kube-controller-manager grants the permissions kube-controller-manager needs.
Generate the certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@master1 work]# ls kube-controller-manager*.pem

Create the kube-controller-manager kubeconfig

Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-controller-manager.kubeconfig
Set the client credentials
[root@master1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
Set the context
[root@master1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Set the default context
[root@master1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Create the configuration file

[root@master1 work]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

Create the service unit file

[root@master1 work]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Distribute the files to the other nodes

[root@master1 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.conf /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-controller-manager*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-controller-manager*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-controller-manager.service master2:/usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-controller-manager.service master3:/usr/lib/systemd/system/

Start the service

[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable kube-controller-manager
[root@master1 work]# systemctl start kube-controller-manager
[root@master1 work]# systemctl status kube-controller-manager

3.4.6 Deploy kube-scheduler

Create the CSR file

[root@master1 work]# vim kube-scheduler-csr.json
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "172.10.1.11",
      "172.10.1.12",
      "172.10.1.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

Note:
The hosts list contains all kube-scheduler node IPs.
CN and O are both system:kube-scheduler; the Kubernetes built-in ClusterRoleBinding system:kube-scheduler grants the permissions kube-scheduler needs.

Generate the certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@master1 work]# ls kube-scheduler*.pem

Create the kube-scheduler kubeconfig

Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-scheduler.kubeconfig
Set the client credentials
[root@master1 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
Set the context
[root@master1 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Set the default context
[root@master1 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Create the configuration file

[root@master1 work]# vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

Create the service unit file

[root@master1 work]# vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Distribute the files to the other nodes

[root@master1 work]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-scheduler.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kube-scheduler.conf /etc/kubernetes/
[root@master1 work]# cp kube-scheduler.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-scheduler*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-scheduler*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-scheduler.service master2:/usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-scheduler.service master3:/usr/lib/systemd/system/

Start the service

[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable kube-scheduler
[root@master1 work]# systemctl start kube-scheduler
[root@master1 work]# systemctl status kube-scheduler

3.4.7 Deploy Docker

Install on the three worker nodes.
Install Docker

[root@node1 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node1 ~]# yum install -y docker-ce
[root@node1 ~]# systemctl enable docker
[root@node1 ~]# systemctl start docker
[root@node1 ~]# docker --version

Configure the Docker registry mirrors and cgroup driver

[root@node1 ~]# cat > /etc/docker/daemon.json << EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
        "https://1nj0zren.mirror.aliyuncs.com",
        "https://kfwkfulq.mirror.aliyuncs.com",
        "https://2lqq34jg.mirror.aliyuncs.com",
        "https://pee6w651.mirror.aliyuncs.com",
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn",
        "http://f1361db2.m.daocloud.io",
        "https://registry.docker-cn.com"
    ]
}
EOF
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# docker info | grep "Cgroup Driver"

Pull the dependency images

[root@node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@node1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
[root@node1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
[root@node1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
[root@node1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

3.4.8 Deploy kubelet

Perform the following on master1.
Create kubelet-bootstrap.kubeconfig

[root@master1 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Set the client credentials
[root@master1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
Set the context
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Set the default context
[root@master1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role binding
[root@master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Create the configuration file

[root@master1 work]# vim kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "172.10.1.14",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}

Note: cgroupDriver must match Docker's cgroup driver (JSON allows no inline comments, so the note lives here). The daemon.json above set native.cgroupdriver=systemd, so systemd is used; if your Docker runs with cgroupfs, set cgroupfs instead. This setting matters: a mismatch prevents the node from joining the cluster.

Create the service unit file

[root@master1 work]# vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Note:
--hostname-override: node display name, unique within the cluster (not set here, so the hostname is used)
--network-plugin: enables CNI
--kubeconfig: points to an as-yet-nonexistent file that is generated automatically and later used to talk to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration file
--cert-dir: directory for the generated kubelet certificates
--pod-infra-container-image: image for the Pod infrastructure (pause) container

Distribute the files to the nodes

[root@master1 work]# cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kubelet.json /etc/kubernetes/
[root@master1 work]# cp kubelet.service /usr/lib/systemd/system/
The three cp commands above can be skipped if the master nodes do not run kubelet.
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kubelet-bootstrap.kubeconfig kubelet.json $i:/etc/kubernetes/;done
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz ca.pem $i:/etc/kubernetes/ssl/;done
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kubelet.service $i:/usr/lib/systemd/system/;done

Note: change the address in kubelet.json to each node's own IP; an example follows.
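For example, on node2:

[root@node2 ~]# sed -i 's/"address": "172.10.1.14"/"address": "172.10.1.15"/' /etc/kubernetes/kubelet.json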
Start the service
On each worker node:

[root@node1 ~]# mkdir /var/lib/kubelet
[root@node1 ~]# mkdir /var/log/kubernetes
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable kubelet
[root@node1 ~]# systemctl start kubelet
[root@node1 ~]# systemctl status kubelet

Once kubelet is confirmed running, go to a master and approve the bootstrap requests. The following command shows three pending CSRs, one from each worker node:

[root@master1 work]# kubectl get csr

[root@master1 work]# kubectl certificate approve node-csr-HlX3cExsZohWsu8Dd6Rp_ztFejmMdpzvti_qgxo4SAQ
[root@master1 work]# kubectl certificate approve node-csr-oykYfnH_coRF2PLJH4fOHlGznOZUBPDg5BPZXDo2wgk
[root@master1 work]# kubectl certificate approve node-csr-ytRB2fikhL6dykcekGg4BdD87o-zw9WPU44SZ1nFT50
[root@master1 work]# kubectl get csr
[root@master1 work]# kubectl get nodes
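The CSR names above are from the author's run; yours will differ. If you would rather not copy each name, a one-liner can approve every pending request (fine in a fresh lab cluster, but review the list first on anything shared):

[root@master1 work]# kubectl get csr -o name | xargs kubectl certificate approve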

3.4.9 Deploy kube-proxy

Create the CSR file

[root@master1 work]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Generate the certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@master1 work]# ls kube-proxy*.pem

Create the kubeconfig file

[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Create the kube-proxy configuration file

[root@master1 work]# vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.10.1.14
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.0.0/16    # must match the network plugin's pod CIDR, otherwise deploying the network component fails
healthzBindAddress: 172.10.1.14:10256
kind: KubeProxyConfiguration
metricsBindAddress: 172.10.1.14:10249
mode: "ipvs"

Create the service unit file

[root@master1 work]# vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute the files to the nodes

[root@master1 work]# cp kube-proxy*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
[root@master1 work]# cp kube-proxy.service /usr/lib/systemd/system/
The cp commands above can be skipped if the master nodes do not run kube-proxy.
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kube-proxy.kubeconfig kube-proxy.yaml $i:/etc/kubernetes/;done
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kube-proxy.service $i:/usr/lib/systemd/system/;done

Note: change the address fields in kube-proxy.yaml to each node's own IP; an example follows.
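For example, on node2 (172.10.1.14 appears only in the three address fields of this file, so a global replace is safe here):

[root@node2 ~]# sed -i 's/172.10.1.14/172.10.1.15/g' /etc/kubernetes/kube-proxy.yaml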
Start the service

[root@node1 ~]# mkdir -p /var/lib/kube-proxy
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable kube-proxy
[root@node1 ~]# systemctl restart kube-proxy
[root@node1 ~]# systemctl status kube-proxy

3.4.10 Deploy the network component

[root@master1 work]# wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
[root@master1 work]# kubectl apply -f calico.yaml

Checking the nodes again, they are all in Ready state:

[root@master1 work]# kubectl get pods -A
[root@master1 work]# kubectl get nodes

3.4.11 Deploy CoreDNS

Download the CoreDNS YAML template: https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
Modify the YAML as follows (a hedged sed sketch comes after this list):
- the kubernetes plugin line: kubernetes cluster.local in-addr.arpa ip6.arpa
- the upstream: forward . /etc/resolv.conf
- the Service clusterIP: 10.255.0.2 (the clusterDNS value from the kubelet configuration)
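A sketch of those edits with sed, assuming the downloaded template still uses the deploy-script placeholders CLUSTER_DOMAIN, REVERSE_CIDRS, UPSTREAMNAMESERVER, STUBDOMAINS and CLUSTER_DNS_IP (verify against the file you actually download; editing by hand works just as well):

[root@master1 work]# cp coredns.yaml.sed coredns.yaml
[root@master1 work]# sed -i 's/CLUSTER_DOMAIN/cluster.local/' coredns.yaml
[root@master1 work]# sed -i 's/REVERSE_CIDRS/in-addr.arpa ip6.arpa/' coredns.yaml
[root@master1 work]# sed -i 's#UPSTREAMNAMESERVER#/etc/resolv.conf#' coredns.yaml
[root@master1 work]# sed -i '/STUBDOMAINS/d' coredns.yaml
[root@master1 work]# sed -i 's/CLUSTER_DNS_IP/10.255.0.2/' coredns.yaml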

[root@master1 work]# cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

[root@master1 work]# kubectl apply -f coredns.yaml

3.5 Verification

3.5.1 Deploy nginx

[root@master1 ~]# vim nginx.yaml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
[root@master1 ~]# kubectl apply -f nginx.yaml
[root@master1 ~]# kubectl get svc
[root@master1 ~]# kubectl get pods

3.5.2 Verify

Verify the nginx Service by pinging its ClusterIP.

Access nginx; an example follows.
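For example, from master1 (the NodePort 30001 and the Service name come from the manifest above; the busybox image tag used for the DNS check is an assumption):

[root@master1 ~]# kubectl get svc nginx-service-nodeport     # note the ClusterIP, then ping it from a node
[root@master1 ~]# curl http://172.10.1.14:30001              # nginx welcome page through the NodePort on node1
[root@master1 ~]# kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup nginx-service-nodeport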
