Ingress
In Kubernetes, the IP addresses of Service and Pod resources can only be used for communication inside the cluster network; none of that traffic can cross the edge router to reach clients outside the cluster. A Service of type NodePort or LoadBalancer can bring external traffic in through the nodes, but it still forwards at layer 4, and the available load balancers are likewise transport-layer mechanisms.

Ingress is one of the standard resource types of the Kubernetes API. It is essentially a set of rules that forward requests to specified Service resources based on DNS name (host) or URL path, used to route traffic arriving from outside the cluster to services inside it and thereby publish them. An Ingress resource by itself cannot move any traffic, however; it is only a collection of routing rules. For those rules to take effect, another component has to do the actual work, such as listening on a socket and routing request traffic according to the rule-matching logic. The component that listens on sockets and forwards traffic on behalf of Ingress resources is called an Ingress controller.
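As a minimal sketch of what such a rule set looks like (the host name and Service name below are placeholders for illustration only, not resources defined in this article; the complete, working manifests later in the article follow the same pattern):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: minimal-ingress                  # hypothetical name, illustration only
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: www.example.com              # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: example-svc   # placeholder Service
              servicePort: 80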

Note: the figure referenced below is sourced from the web.

An Ingress controller can be implemented by any server program with reverse-proxy (HTTP/HTTPS) capability, such as Nginx, Envoy, HAProxy, Vulcand, or Traefik. The Ingress controller itself also runs in the cluster as a Pod, on the same network as the Pods of the applications it proxies, as the relationship between ingress-nginx and pod1, pod3, etc. in the figure illustrates.

On the other hand, when an Ingress resource is used to distribute traffic, the Ingress controller can forward client requests directly to the backend Pods behind a Service according to the rules defined in that Ingress resource. This forwarding mechanism bypasses the Service itself and thus avoids the port-proxying overhead incurred by kube-proxy. As shown in the figure, an Ingress rule still needs a Service object to help identify all of the related Pods, but the ingress-nginx controller can, via the rule defined for site2.ikubernetes.io, schedule requests straight to pod3 or pod4 without another hop through the Service.
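A quick way to see the Pod addresses the controller uses for this direct forwarding is to inspect the Endpoints object that Kubernetes maintains for the Service; a hedged example using svc-demo-1, the demo Service created later in this article:

# List the Pod IP:port pairs behind the Service; the Ingress controller
# watches these Endpoints and proxies requests to the Pods directly.
kubectl get endpoints svc-demo-1 -o wide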

Deployment preparation
Download the deployment manifest
[root@K8S-PROD-M1 ingress]# wget -O ingress-nginx-3.1.0.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/ingress-nginx-3.1.0/deploy/static/provider/baremetal/deploy.yaml
Pull the deployment images
Pull the images ahead of time, matching the image versions referenced in the YAML:

docker pull vicren/ingress-nginx-controller:v0.35.0
docker tag vicren/ingress-nginx-controller:v0.35.0 harbor.cluster.local/library/ingress-nginx-controller:v0.35.0
docker push harbor.cluster.local/library/ingress-nginx-controller:v0.35.0
docker rmi vicren/ingress-nginx-controller:v0.35.0
Modify the deployment manifest
Add NodePort settings to the Service:

...
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      nodePort: 30080   # pin the NodePort port
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      nodePort: 30443   # pin the NodePort port
...
Modify part of the Deployment configuration:

...
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.0.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 3                    # set the replica count
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: harbor.cluster.local/library/ingress-nginx-controller:v0.35.0   # updated image address
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend   # once a default backend is configured here, it must actually be deployed, otherwise ingress-nginx-controller will enter the CrashLoopBackOff state
...
Run the deployment
Deployment command
kubectl apply -f ingress-nginx-3.1.0.yaml
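As a hedged verification step (assumed workflow, not part of the original walkthrough), check that the controller Pods and the NodePort Service were created. Note that until the default backend from the later section is deployed, the controller Pods may sit in CrashLoopBackOff because of the --default-backend-service flag configured above:

# Controller Pods (3 replicas) and the NodePort Service (30080/30443)
kubectl -n ingress-nginx get pods -o wide
kubectl -n ingress-nginx get svc ingress-nginx-controller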
Access Nginx
Once ingress-nginx-controller is running normally, every K8S node listens on ports 30080 and 30443. Access the Nginx Web UI:

https://192.168.191.32:30443/

This returns a 404 Not Found response.

You can also reach it from a local browser by using the NAT forwarding rules configured in the testing section below.

Deploy the default backend
Create the deployment manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: harbor.cluster.local/library/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
Run the deployment
[root@K8S-PROD-M1 ingress]# kubectl apply -f default-backend.yml
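As a hedged check (an assumption of the usual workflow, not a step from the original text), confirm that the default backend Pod and Service are up; after this the controller Pods should settle into Running:

kubectl -n ingress-nginx get pods,svc -l app.kubernetes.io/name=default-http-backend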
Testing
• NAT configuration

iptables -t nat -A PREROUTING -m tcp -p tcp -d 192.168.191.32 --dport 30080 -j DNAT --to-destination 192.168.122.12:30080
iptables -t nat -A PREROUTING -m tcp -p tcp -d 192.168.191.32 --dport 30443 -j DNAT --to-destination 192.168.122.12:30443
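A hedged way to confirm the rules took effect on the same host (not part of the original steps):

# List the PREROUTING NAT rules together with their line numbers
iptables -t nat -L PREROUTING -n --line-numbers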

• Access the Web UI

https://192.168.191.32:30443/

This now returns default backend - 404.

Test Ingress
Create a Tomcat test page
echo '<h1>Hello World!</h1>' > index.html
scp index.html root@K8S-PROD-W1:/tmp/
scp index.html root@K8S-PROD-W2:/tmp/
scp index.html root@K8S-PROD-W3:/tmp/
Create the first Service and Pod
apiVersion: v1
kind: Service
metadata:
  name: svc-demo-1
  namespace: default
spec:
  selector:
    app: pod-demo-1
  ports:
    - name: http
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo-1
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pod-demo-1
  template:
    metadata:
      labels:
        app: pod-demo-1
    spec:
      containers:
        - name: myapp
          image: harbor.cluster.local/library/myapp:v2
          ports:
            - name: httpd
              containerPort: 80

[root@K8S-PROD-M1 ingress]# kubectl apply -f deploy-demo-1.yaml
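A hedged sanity check (assumed workflow, not in the original text): verify the Pods are Running and the Service has Endpoints before wiring them into an Ingress:

kubectl get pods -l app=pod-demo-1 -o wide
kubectl get svc,endpoints svc-demo-1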
Create the second Service and Pod

apiVersion: v1
kind: Service
metadata:
  name: svc-demo-2
  namespace: default
spec:
  selector:
    app: pod-demo-2
  ports:
    - name: httpd
      port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo-2
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pod-demo-2
  template:
    metadata:
      labels:
        app: pod-demo-2
    spec:
      containers:
        - name: mytomcat
          image: harbor.cluster.local/library/tomcat:9
          ports:
            - name: httpd
              containerPort: 8080
          volumeMounts:
            - mountPath: "/usr/local/tomcat/webapps/ROOT/index.html"
              name: tomacat-volume
              readOnly: true
      volumes:
        - name: tomacat-volume
          hostPath:
            type: File
            path: /tmp/index.html

[root@K8S-PROD-M1 ingress]# kubectl apply -f deploy-demo-2.yaml
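As a hedged check (the Pod IP below is a placeholder to be substituted from the -o wide output), confirm that one of the Tomcat Pods serves the mounted index.html directly:

kubectl get pods -l app=pod-demo-2 -o wide
# Replace <pod-ip> with an IP from the output above; run this from a cluster node
curl http://<pod-ip>:8080/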
Create the Ingress

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: demo01.cluster.local
      http:
        paths:
          - path:
            backend:
              serviceName: svc-demo-1
              servicePort: 80
    - host: demo02.cluster.local
      http:
        paths:
          - path:
            backend:
              serviceName: svc-demo-2
              servicePort: 8080

Create it:

[root@K8S-PROD-M1 ingress]# kubectl apply -f ingress-demo.yaml

Check:

[root@K8S-PROD-M1 ~]# kubectl get ingress
NAME           CLASS    HOSTS                                       ADDRESS          PORTS   AGE
ingress-demo   <none>   demo01.cluster.local,demo02.cluster.local   192.168.122.22   80      14m
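For a closer look at how each host rule was resolved to backend Pods, a hedged extra step (not part of the original walkthrough):

# Show the per-host rules and the Pod endpoints each backend resolves to
kubectl describe ingress ingress-demo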
Test and verify

• Configure name resolution

On the test host, add name resolution entries for demo01.cluster.local and demo02.cluster.local:

[root@server ~]# vi /etc/hosts

...
192.168.122.22 demo01.cluster.local
192.168.122.22 demo02.cluster.local

• curl test

[root@depot ~]# curl http://demo02.cluster.local:30080/
<h1>Hello World!</h1>
[root@depot ~]# curl http://demo02.cluster.local:30080/
<h1>Hello World!</h1>

• Web UI test

Test Ingress access (the Ingress Nginx Controller is exposed via NodePort 30080): http://demo01.cluster.local:30080/.

Enable TLS for Ingress
Prepare a self-signed certificate
[root@K8S-PROD-M1 ingress]# openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout demo01.key -out demo01.crt -subj "/CN=demo01.cluster.local"
Generating a 2048 bit RSA private key
...................+++
...................................+++
writing new private key to 'demo01.key'

Create the Secret resource
[root@K8S-PROD-M1 ingress]# kubectl create secret tls ingress-demo01-tls --cert=demo01.crt --key=demo01.key -n default
secret/ingress-demo01-tls created
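A hedged check (assumed, not from the original): confirm the Secret is of type kubernetes.io/tls and carries the tls.crt/tls.key data that ingress-nginx expects:

kubectl get secret ingress-demo01-tls -n default
kubectl describe secret ingress-demo01-tls -n default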
Create the TLS Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-tls-demo
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - demo01.cluster.local
      secretName: ingress-demo01-tls
  rules:
    - host: demo03.cluster.local
      http:
        paths:
          - path:
            backend:
              serviceName: svc-demo-1
              servicePort: 80

[root@K8S-PROD-M1 ingress]# kubectl apply -f ingress-tls-demo.yaml
Test and verify
Check the ingress:

[root@K8S-PROD-M1 ~]# kubectl get ingress
NAME               CLASS    HOSTS                                       ADDRESS          PORTS     AGE
ingress-demo       <none>   demo01.cluster.local,demo02.cluster.local   192.168.122.22   80        27h
ingress-tls-demo   <none>   demo03.cluster.local                        192.168.122.22   80, 443   7m15s
On the test host, add a name resolution entry for demo03.cluster.local:

[root@server ~]# vi /etc/hosts

...
192.168.122.22 demo03.cluster.local
Test Ingress access (the Ingress Nginx Controller is exposed via NodePort 30443): https://demo01.cluster.local:30443/.
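A hedged command-line equivalent of the browser test (assuming the /etc/hosts entries and NAT rules configured earlier; -k skips verification because the certificate is self-signed):

# -k: accept the self-signed certificate; -v prints the certificate subject served by nginx
curl -kv https://demo01.cluster.local:30443/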
