Preface
Compared with the Ingress controller maintained by the Kubernetes community, the official NGINX Ingress Controller offers richer configurability and closer support from the native NGINX team. In today's wave of containerization it is a worthwhile choice; of course there are alternatives such as Traefik and APISIX, so it comes down to your own trade-offs.
Tip: this article is excerpted from the DevOps community online public course "Lab Manual for Dynamic Publishing of Containerized Applications".
NGINX KIC Architecture
1. NGINX KIC Installation Guide
This lab provides a single-node K8S cluster, with the Master and Worker Node both on one Ubuntu server. The relevant versions are as follows:
OS version: Ubuntu 18.04.4 LTS
K8S version: v1.25.0
NGINX KIC version: 2.3.0
CNI: Flannel
After logging in to the server over ssh, first confirm the status of K8S.
root@ubuntu:/root# kubectl get node
NAME     STATUS   ROLES           AGE    VERSION
ubuntu   Ready    control-plane   153m   v1.25.0
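The prerequisite objects KIC expects (ServiceAccount, RBAC rules, the nginx-config ConfigMap, the default server TLS secret and the CRDs) are pre-created in this lab. For anyone rebuilding the environment from scratch, a hedged sketch of those prerequisite steps, following the layout of the upstream nginxinc/kubernetes-ingress repository around v2.3.0 (exact file paths may differ between releases):

git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v2.3.0
cd kubernetes-ingress/deployments
kubectl apply -f common/ns-and-sa.yaml      # nginx-ingress Namespace and ServiceAccount
kubectl apply -f rbac/rbac.yaml             # ClusterRole and ClusterRoleBinding
kubectl apply -f common/nginx-config.yaml   # nginx-config ConfigMap referenced by the Deployment args
kubectl apply -f common/ingress-class.yaml  # IngressClass "nginx"
kubectl apply -f common/crds/               # CRDs used later: VirtualServer(Route), TransportServer, GlobalConfiguration, Policy
# plus a TLS Secret named default-server-secret in the nginx-ingress namespace for the default server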
Now we deploy the NGINX K8S Ingress Controller itself; with the SA, RBAC and other prerequisites already in place, only the KIC Deployment remains. Enter the /root/kic-oss-lab/0-deployment directory and create the file nginx-ingress-hostnetwork.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
      #  prometheus.io/scrape: "true"
      #  prometheus.io/port: "9113"
      #  prometheus.io/scheme: http
    spec:
      serviceAccountName: nginx-ingress
      hostNetwork: true
      containers:
      - image: nginx/nginx-ingress:2.3.0
        imagePullPolicy: IfNotPresent
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: tcp-8001
          containerPort: 8001
        - name: tcp-8002
          containerPort: 8002
        - name: readiness-port
          containerPort: 8081
        - name: prometheus
          containerPort: 9113
        readinessProbe:
          httpGet:
            path: /nginx-ready
            port: readiness-port
          periodSeconds: 1
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          #limits:
          #  cpu: "1"
          #  memory: "1Gi"
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          runAsNonRoot: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        #- -enable-cert-manager
        #- -enable-external-dns
        #- -v=3 # Enables extensive logging. Useful for troubleshooting.
        #- -report-ingress-status
        #- -external-service=nginx-ingress
        #- -enable-prometheus-metrics
        - -global-configuration=$(POD_NAMESPACE)/nginx-configuration
Apply the file above and confirm that the pod is running normally.
root@ubuntu:/root/kic-oss-lab/0-deployment# kubectl apply -f nginx-ingress-hostnetwork.yaml
deployment.apps/nginx-ingress created
root@ubuntu:/root# kubectl get pod -n nginx-ingress
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-59dfd5855c-nknnr   1/1     Running   0          4m9s
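Because the Deployment sets hostNetwork: true, the NGINX processes bind directly to ports on the node. A quick sanity check (a sketch; ss ships with Ubuntu 18.04):

ss -ltnp | grep nginx   # expect listeners on 80, 443 and the readiness port 8081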
Use curl to access ports 80 and 443 on this machine. If you see a page served by NGINX (the default server answers with a 404 page), KIC has been deployed successfully and you can continue with the lab.

root@ubuntu:/root# curl 127.0.0.1
404 Not Found
nginx/1.23.0
root@ubuntu:/root# curl -k https://127.0.0.1
404 Not Found
nginx/1.23.0
2. Ingress Resource Configuration
Enter the /root/kic-oss-lab/1-ingress directory and apply the following files in order, creating the applications and the TLS certificate Secret; a sketch of the Secret's shape is shown below for reference.
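cafe-secret.yaml is a standard kubernetes.io/tls Secret. A minimal sketch with placeholder values (the lab ships a real certificate and key, base64-encoded):

apiVersion: v1
kind: Secret
metadata:
  name: cafe-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder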
root@k8s-calico-master:~/kic-oss-lab/1-ingress# kubectl apply -f cafe.yaml -f cafe-secret.yaml
deployment.apps/coffee created
service/coffee-svc created
deployment.apps/tea created
service/tea-svc created
secret/cafe-secret created
The backend applications can be verified with curl.
root@k8s-calico-master:~/kic-oss-lab/1-ingress# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
coffee-svc   ClusterIP   10.98.91.20    <none>        80/TCP    78s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   104d
tea-svc      ClusterIP   10.96.160.45   <none>        80/TCP    78s
root@k8s-calico-master:~/kic-oss-lab/1-ingress# curl 10.98.91.20
Server address: 10.244.195.105:80
Server name: coffee-87cf76b96-7c2wx
Date: 22/Aug/2022:06:58:55 +0000
URI: /
Request ID: 6cb10905d97c76ed137622a00f45c07b
root@k8s-calico-master:~/kic-oss-lab/1-ingress# curl 10.96.160.45
Server address: 10.244.195.107:80
Server name: tea-7b475f7bcb-sgtmm
Date: 22/Aug/2022:06:59:28 +0000
URI: /
Request ID: 05bae1c95d02a9d000ef9ce58aad255d
Create the file cafe-ingress.yaml.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
Apply the file above.
root@k8s-calico-master:~/kic-oss-lab/1-ingress# kubectl apply -f cafe-ingress.yaml
ingress.networking.k8s.io/cafe-ingress created
root@k8s-calico-master:~/kic-oss-lab/1-ingress# kubectl get ingress
NAME           CLASS   HOSTS              ADDRESS   PORTS     AGE
cafe-ingress   nginx   cafe.example.com             80, 443   11s
Use curl to access the application. The hosts file has already been edited (see the sketch below), so the domain name can be accessed directly. HTTP requests are automatically redirected to HTTPS; you can also go to HTTPS directly.
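For reference, since KIC runs in hostNetwork mode on this same node, the pre-made hosts entry simply maps the test domain to the local machine. A sketch of what /etc/hosts would contain (the lab's actual entry may differ):

127.0.0.1 cafe.example.com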
root@k8s-calico-master:~/kic-oss-lab/1-ingress# curl -i http://cafe.example.com/tea
HTTP/1.1 301 Moved Permanently
Server: nginx/1.23.0
Date: Mon, 22 Aug 2022 07:11:45 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: https://cafe.example.com:443/tea

301 Moved Permanently
nginx/1.23.0
root@k8s-calico-master:~/kic-oss-lab/1-ingress# curl -L -k http://cafe.example.com/tea
Server address: 10.244.195.107:80
Server name: tea-7b475f7bcb-sgtmm
Date: 22/Aug/2022:07:12:01 +0000
URI: /tea
Request ID: bd6f0a77d948e668d5aca32bc4426584
root@k8s-calico-master:~/kic-oss-lab/1-ingress# curl -k https://cafe.example.com/coffee
Server address: 10.244.195.105:80
Server name: coffee-87cf76b96-7c2wx
Date: 22/Aug/2022:07:12:18 +0000
URI: /coffee
Request ID: e3f2e955672a37fca5268cbf5f526960
Access the same URL several times; the Server name field shows that requests are being load-balanced across the two pods (a quick way to verify which pods these are is sketched below).
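A quick check to match the responses against actual pods (a sketch):

kubectl get pod -o wide   # pod names and IPs correspond to the Server name / Server address fields above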
Modify the Ingress file above to add an annotation, then apply it with kubectl apply.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx.org/rewrites: "serviceName=tea-svc rewrite=/;serviceName=coffee-svc rewrite=/beans"
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
Access it again with curl; you can see that the URI has been rewritten.
root@k8s-calico-master:~/kic-oss-lab/1-ingress# curl -k https://cafe.example.com/tea
Server address: 10.244.195.106:80
Server name: tea-7b475f7bcb-sql44
Date: 22/Aug/2022:07:19:06 +0000
URI: /
Request ID: 091059a71d4c4ddfd36dbda30747bba5
root@k8s-calico-master:~/kic-oss-lab/1-ingress# curl -k https://cafe.example.com/coffee
Server address: 10.244.195.104:80
Server name: coffee-87cf76b96-kb9hz
Date: 22/Aug/2022:07:19:12 +0000
URI: /beans
Request ID: 0c1755a924c89d213156727a3ab641df
Delete all resources to restore the lab environment; a cleanup sketch follows.
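A minimal cleanup sketch, assuming everything in this section came from the three files used above:

kubectl delete -f cafe-ingress.yaml -f cafe.yaml -f cafe-secret.yaml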
3. VirtualServer and VirtualServerRoute Resource Configuration
Enter the /root/kic-oss-lab/2-vs directory and apply all the files in it, creating the Namespaces and the applications under the different namespaces.
root@k8s-calico-master:~/kic-oss-lab/2-vs# kubectl apply -f namespaces.yaml -f coffee.yaml -f tea.yaml -f cafe-secret.yaml
namespace/cafe created
namespace/tea created
namespace/coffee created
deployment.apps/coffee created
service/coffee-svc created
deployment.apps/tea-v1 created
service/tea-v1-svc created
deployment.apps/tea-v2 created
service/tea-v2-svc created
secret/cafe-secret created
Create and apply the VirtualServer and the two VirtualServerRoute resources in turn, starting with cafe-virtual-server.yaml. Note that each route entry delegates a path to a VirtualServerRoute, referenced as namespace/name.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
  namespace: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  routes:
  - path: /tea
    route: tea/tea
  - path: /coffee
    route: coffee/coffee
Then coffee-virtual-server-route.yaml.
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: coffee
  namespace: coffee
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee
    service: coffee-svc
    port: 80
  subroutes:
  - path: /coffee
    action:
      proxy:
        upstream: coffee
        requestHeaders:
          pass: true
          set:
          - name: My-Header
            value: F5-Best
        responseHeaders:
          add:
          - name: My-Header
            value: ${http_user_agent}
          - name: IC-Nginx-Version
            value: ${nginx_version}
            always: true
        rewritePath: /coffee/rewrite
Finally, tea-virtual-server-route.yaml.
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: tea
  namespace: tea
spec:
  host: cafe.example.com
  upstreams:
  - name: tea-v1
    service: tea-v1-svc
    port: 80
  - name: tea-v2
    service: tea-v2-svc
    port: 80
  subroutes:
  - path: /tea
    matches:
    - conditions:
      - cookie: version
        value: v2
      action:
        pass: tea-v2
    action:
      pass: tea-v1
Let's first look at the application services: the coffee and tea services live in different Namespaces, and tea additionally has v1 and v2 versions.
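One way to observe this (a sketch; namespaces and service names follow the manifests above):

kubectl get svc -n coffee
kubectl get svc -n tea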
First, access the coffee application with curl. Besides the normal application response, you can see the headers inserted by KIC, and that the URI has been rewritten.
root@k8s-calico-master:~/kic-oss-lab/2-vs# curl -k -i https://cafe.example.com/coffee
HTTP/1.1 200 OK
Server: nginx/1.23.0
Date: Mon, 22 Aug 2022 07:38:52 GMT
Content-Type: text/plain
Content-Length: 172
Connection: keep-alive
Expires: Mon, 22 Aug 2022 07:38:51 GMT
Cache-Control: no-cache
My-Header: curl/7.58.0
IC-Nginx-Version: 1.23.0

Server address: 10.244.195.109:8080
Server name: coffee-7b9b4bbd99-zzc7v
Date: 22/Aug/2022:07:38:52 +0000
URI: /coffee/rewrite
Request ID: 330ac344e70abff73852bc00d6d563a2
Next, access the tea application. A request carrying a version=v2 cookie reaches the v2 version of tea, while requests without the cookie, or with a cookie value other than v2, reach v1.
root@k8s-calico-master:~/kic-oss-lab/2-vs# curl -k https://cafe.example.com/tea
Server address: 10.244.195.108:8080
Server name: tea-v1-99bf9564c-lkc2g
Date: 22/Aug/2022:07:41:40 +0000
URI: /tea
Request ID: 4ce5951672e31777edd0ef325fed08c9
root@k8s-calico-master:~/kic-oss-lab/2-vs# curl -k --cookie "version=v2" https://cafe.example.com/tea
Server address: 10.244.195.110:8080
Server name: tea-v2-6c9df94f67-2d7f7
Date: 22/Aug/2022:07:42:02 +0000
URI: /tea
Request ID: b376f7bc50a0536e6f73df37ae7bd767
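To confirm the fallback rule, a request whose cookie value is anything other than v2 (the value v3 below is an arbitrary example) should also land on the v1 pods:

curl -k --cookie "version=v3" https://cafe.example.com/tea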
Delete all the resources applied just now to restore the lab environment.
4. TransportServer Resource Configuration
Enter the /root/kic-oss-lab/3-ts directory and first apply cafe.yaml to create the backend applications. Then create the file globalconfiguration-listener.yaml to define new listeners for KIC; these correspond to the tcp-8001 and tcp-8002 containerPorts declared in the KIC Deployment, which references this resource via its -global-configuration argument.
apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: tcp-8001
    port: 8001
    protocol: TCP
  - name: tcp-8002
    port: 8002
    protocol: TCP
After applying the file above, create and apply the two TransportServer resources, starting with ts-coffee.yaml.
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: coffee
spec:
  listener:
    name: tcp-8001
    protocol: TCP
  upstreams:
  - name: coffee
    service: coffee-svc
    port: 80
  action:
    pass: coffee
Then ts-tea.yaml.
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: tea
spec:
  listener:
    name: tcp-8002
    protocol: TCP
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  action:
    pass: tea
Once all of the above resources are applied, the tea and coffee applications can be reached via IP:Port.
root@k8s-calico-master:~/kic-oss-lab/3-ts# curl http://127.0.0.1:8001
Server address: 10.244.195.112:80
Server name: coffee-87cf76b96-9dnl8
Date: 22/Aug/2022:07:53:38 +0000
URI: /
Request ID: 121cbef18ecabaeb2e2c15943d8745b2
root@k8s-calico-master:~/kic-oss-lab/3-ts# curl http://127.0.0.1:8002
Server address: 10.244.195.113:80
Server name: tea-7b475f7bcb-vnj5k
Date: 22/Aug/2022:07:53:43 +0000
URI: /
Request ID: abb07e6f0ffde9e053ba58fd06db44b2
Delete the related resource configurations, but note: do not delete the globalconfiguration resource itself! If you want to remove the listeners, apply an empty globalconfiguration resource instead, as sketched below.
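A minimal sketch of such an "empty" GlobalConfiguration (same name and namespace as before, with no listeners defined):

apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners: []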
5. Policy Resources
Enter the /root/kic-oss-lab/4-policy directory, first apply webapp.yaml to deploy the backend application, then create and apply the file rate-limit.yaml.
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 100r/s
    burst: 50
    noDelay: true
    key: ${binary_remote_addr}
    zoneSize: 10M
    rejectCode: 444
Next, create and apply virtual-server.yaml, which references the Policy from the VS; for now we leave the Policy section commented out.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  #policies:
  #- name: rate-limit-policy
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp
With the Policy commented out, run a simple load test with wrk.
root@k8s-calico-master:~/kic-oss-lab/4-policy# wrk -t2 -c10 -d10s http://cafe.example.com
Running 10s test @ http://cafe.example.com
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.99ms    3.59ms  45.17ms   84.71%
    Req/Sec     1.52k     0.94k    2.91k    40.50%
  30332 requests in 10.01s, 10.76MB read
Requests/sec:   3031.01
Transfer/sec:      1.08MB
You can see the RPS is around 3000 (the exact figure varies by environment, but it will be far above 100). Now remove the comment markers in the VS so the Policy takes effect.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  policies:
  - name: rate-limit-policy
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp
Run the same wrk command again.
root@k8s-calico-master:~/kic-oss-lab/4-policy# wrk -t2 -c10 -d10s http://cafe.example.com
Running 10s test @ http://cafe.example.com
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.49ms   23.31ms  212.68ms   83.37%
    Req/Sec    52.36     31.83    343.00     86.43%
  1045 requests in 10.00s, 379.63KB read
  Socket errors: connect 0, read 85782, write 0, timeout 0
Requests/sec:    104.48
Transfer/sec:     37.96KB
This time the RPS is held at around 100, which matches the policy: 100r/s over a 10-second run allows roughly 1000 requests, plus the burst of 50, consistent with the 1045 requests observed. Checking the KIC logs shows the requests that exceeded the threshold being answered with status 444, as in the excerpt below.
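One way to view these logs (a sketch; KIC writes the NGINX access and error logs to the container's stdout/stderr):

kubectl logs -n nginx-ingress deploy/nginx-ingress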
127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
2022/08/22 09:09:28 [error] 128#128: *117569 limiting requests, excess: 50.900 by zone "pol_rl_default_rate-limit-policy_default_cafe", client: 127.0.0.1, server: cafe.example.com, request: "GET / HTTP/1.1", host: "cafe.example.com"
127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
2022/08/22 09:09:28 [error] 127#127: *117568 limiting requests, excess: 50.900 by zone "pol_rl_default_rate-limit-policy_default_cafe", client: 127.0.0.1, server: cafe.example.com, request: "GET / HTTP/1.1", host: "cafe.example.com"
127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
2022/08/22 09:09:28 [error] 127#127: *117570 limiting requests, excess: 50.900 by zone "pol_rl_default_rate-limit-policy_default_cafe", client: 127.0.0.1, server: cafe.example.com, request: "GET / HTTP/1.1", host: "cafe.example.com"
127.0.0.1 - - [22/Aug/2022:09:09:28 +0000] "GET / HTTP/1.1" 444 0 "-" "-" "-"
2022/08/22 09:09:28 [error] 127#127: *117571 limiting requests, excess: 50.900 by zone "pol_rl_default_rate-limit-policy_default_cafe", client: 127.0.0.1, server: cafe.example.com, request: "GET / HTTP/1.1", host: "cafe.example.com"
Delete all resources to restore the lab environment.
With that, all of the labs in the "Dynamic Publishing of Containerized Applications" part of this event are complete!