Container Cloud Platform No.3 ~ Basic Usage of Kubernetes
scofield 菜鸟运维杂谈
This is the third article in the series, picking up where the previous one left off.
First, kubectl shows that all three nodes are up and running:
```
[root@k8s-master001 ~]# kubectl get no
NAME            STATUS   ROLES    AGE   VERSION
k8s-master001   Ready    master   16h   v1.19.0
k8s-master002   Ready    master   16h   v1.19.0
k8s-master003   Ready    master   16h   v1.19.0
```
Now let's deploy our first service, using nginx as the example:
```
[root@k8s-master001 ~]# kubectl run nginx --image=nginx --port=80
pod/nginx created
```
We have created an nginx application on the k8s cluster. Checking its status with the following command, we find that nginx is stuck in Pending:
```
[root@k8s-master001 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          7s
```
Now use kubectl describe to get more detail:
```
[root@k8s-master001 ~]# kubectl describe po nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         <none>
Labels:       run=nginx
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6gd92 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-6gd92:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6gd92
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From   Message
  ----     ------            ----  ----   -------
  Warning  FailedScheduling  15s          0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  Warning  FailedScheduling  14s          0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
```
The last two events in the output tell the story: `3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.`

What does this mean?

It means no node was willing to accept the pod for scheduling.

Why? Because all three of our nodes are masters and no worker nodes have been added yet. By default, master nodes carry a taint (more on taints in a later article), and a tainted master does not accept scheduling.

Since this is a test environment with no spare machines to serve as worker nodes, we can manually remove the masters' taint so that they accept scheduling.
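To see why the pod stayed Pending, here is a deliberately simplified sketch of the scheduler's taint/toleration filter. This is not the real scheduler code — it ignores PreferNoSchedule, NoExecute eviction, and tolerationSeconds — but it captures the NoSchedule case above:

```python
# Simplified sketch of the taint/toleration filtering rule (NoSchedule only).
# The real Kubernetes scheduler does much more than this.

def tolerates(taint, toleration):
    """A toleration matches a taint when effect and key agree.
    operator=Exists ignores the taint's value; Equal compares it."""
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return toleration.get("key") in (None, taint["key"])
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint.get("value"))

def schedulable(node_taints, pod_tolerations):
    """A node is a candidate only if every NoSchedule taint is tolerated."""
    return all(
        any(tolerates(t, tol) for tol in pod_tolerations)
        for t in node_taints
        if t["effect"] == "NoSchedule"
    )

master_taints = [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]
print(schedulable(master_taints, []))  # False: a plain pod is rejected, hence Pending
print(schedulable([], []))             # True: an untainted node accepts it
```

With three tainted masters and an nginx pod that tolerates nothing relevant, every node fails this filter — which is exactly the `0/3 nodes are available` event above.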
View the taint information with the following command:
```
[root@k8s-master001 ~]# kubectl get no -o yaml | grep taint -A 5
        f:taints: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-09-10T09:10:40Z"
  - apiVersion: v1
    fieldsType: FieldsV1
--
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: 10.26.25.20
--
        f:taints: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-09-10T09:30:25Z"
  - apiVersion: v1
    fieldsType: FieldsV1
--
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: 10.26.25.21
--
        f:taints: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-09-10T09:35:43Z"
  - apiVersion: v1
    fieldsType: FieldsV1
--
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: 10.26.25.22
```
Remove the node-role.kubernetes.io/master taint as follows:
```
[root@k8s-master001 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-master001 untainted
node/k8s-master002 untainted
node/k8s-master003 untainted
```
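Removing the taint is fine for a test cluster like this one. On a production cluster you would normally leave masters tainted and instead give specific pods a toleration. For illustration only (this fragment is not part of this article's manifests), such a toleration in a pod spec would look roughly like:

```yaml
# Pod spec fragment: tolerate the master NoSchedule taint instead of
# removing it from the node.
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
```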
Checking nginx again, its status has changed to ContainerCreating, which means it has been assigned to a node and the nginx pod is being created:
```
[root@k8s-master001 ~]# kubectl get po
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          3m11s
```
With kubectl get po -o wide we can see that nginx is now running, that it was scheduled onto the k8s-master001 node, and that its Pod IP is 10.244.0.4:
```
[root@k8s-master001 ~]# kubectl get po -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          56m   10.244.0.4   k8s-master001   <none>           <none>
```
Now access nginx — the familiar 200 appears:
```
[root@k8s-master001 ~]# curl -I 10.244.0.4
HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Fri, 11 Sep 2020 02:22:41 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Aug 2020 14:50:35 GMT
Connection: keep-alive
ETag: "5f32b03b-264"
Accept-Ranges: bytes
```
However, if you access 10.244.0.4 from a machine outside the cluster — your own laptop, for example — it fails:

```
[~/b/] : curl -I 10.244.0.4
curl: (55) getpeername() failed with errno 22: Invalid argument
```
Let's fix this.

1. Delete the nginx pod we created earlier:
```
[root@k8s-master001 ~]# kubectl delete po nginx
pod "nginx" deleted
```
2. Create an nginx.yaml file:
```
[root@k8s-master001 ~]# cat nginx.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  serviceName: nginx
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 180
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: port
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
```
3. Deploy it with kubectl apply -f nginx.yaml:
```
[root@k8s-master001 ~]# kubectl apply -f nginx.yaml
statefulset.apps/nginx created
service/nginx created

[root@k8s-master001 ~]# kubectl get po,ep,svc
NAME          READY   STATUS    RESTARTS   AGE
pod/nginx-0   1/1     Running   0          24s

NAME                   ENDPOINTS                                            AGE
endpoints/kubernetes   10.26.25.20:6443,10.26.25.21:6443,10.26.25.22:6443   17h
endpoints/nginx        10.244.2.3:80                                        23s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        17h
service/nginx        NodePort    10.106.27.213   <none>        80:30774/TCP   23s
```
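The endpoints/nginx entry shows that the Service found pod nginx-0 via its label selector. Here is a simplified sketch of that matching rule (the real endpoints controller also requires the pod to be Ready; the second pod below is an invented counter-example):

```python
# Simplified version of how a Service picks its endpoints: every key/value
# pair in the Service's selector must appear in the pod's labels.

def matches(selector, labels):
    return all(labels.get(k) == v for k, v in selector.items())

# nginx-0 and its label come from the manifest above; web-0 is made up
# for contrast and should be skipped by the selector.
pods = [
    {"name": "nginx-0", "labels": {"app": "nginx"}, "ip": "10.244.2.3"},
    {"name": "web-0",   "labels": {"app": "web"},   "ip": "10.244.1.9"},
]
selector = {"app": "nginx"}  # from the Service spec in nginx.yaml

endpoints = [p["ip"] for p in pods if matches(selector, p["labels"])]
print(endpoints)  # ['10.244.2.3'] -- matching the endpoints/nginx entry above
```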
As shown above, a Service named nginx has been created, mapping nginx's default port 80 to NodePort 30774. You can now access it via any cluster node's IP on port 30774; here we use 10.26.25.20:30774.
4. Access from a cluster node:
```
[root@k8s-master001 ~]# curl -I 10.26.25.20:30774
HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Fri, 11 Sep 2020 02:53:55 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Aug 2020 14:50:35 GMT
Connection: keep-alive
ETag: "5f32b03b-264"
Accept-Ranges: bytes
```
5. Access from the laptop:
```
[~/b/wechatimages] : curl -I 10.26.25.20:30774
HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Fri, 11 Sep 2020 02:54:24 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Aug 2020 14:50:35 GMT
Connection: keep-alive
ETag: "5f32b03b-264"
Accept-Ranges: bytes
```
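`curl -I` sends an HTTP HEAD request. The sketch below does the same thing from Python; it spins up a throwaway local HTTP server as a stand-in for the NodePort, since the cluster address is specific to this environment (against the cluster above you would call `head("10.26.25.20", 30774)` instead):

```python
import http.client
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

def head(host, port, path="/", timeout=5):
    """Send a HEAD request (what `curl -I` does) and return (status, headers)."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", path)
        resp = conn.getresponse()
        return resp.status, dict(resp.getheaders())
    finally:
        conn.close()

# Stand-in for the NodePort endpoint: a local server on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

status, headers = head("127.0.0.1", server.server_address[1])
server.shutdown()
print(status)  # 200
```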
If all of this still leaves you confused, start by reading up on the Kubernetes concepts of Pod, Endpoint, and Service — upcoming articles will cover them as well.
Note: the images in this article come from the Internet; if any infringe on your rights, please contact me and I will remove them promptly.
More related articles
- A Brief Look at Kubernetes Service Types
- Managing Containers with Kubernetes in Practice (Concepts)
- A Collection of Kubernetes Clients and Admin UIs
- A Brief Look at the Kubernetes Scheduler
- Deploying a Highly Available K8S Cluster with Kubeadm
- Flow Control in Percona XtraDB Cluster
- A Summary of Redis High-Availability Cluster Architecture
- KubeNode: Alibaba's Cloud-Native Container Infrastructure Operations in Practice
- Setting Up a Redis Sentinel Cluster