Ingress-nginx


I. What is Ingress?

Ingress is a Kubernetes resource object that provides an entry point for clients outside the cluster to reach services inside it. Ingress is responsible for forwarding external requests to the appropriate Services within the cluster.

Previously, load balancing with a Service was based on IP:PORT. Ingress can instead load-balance by domain name or URL path, for example:

  1. http://domain.com/api -> routed by Ingress to the Service named api
  2. http://domain.com/download -> routed by Ingress to the Service named download
  3. http://domain.com/web -> routed by Ingress to the Service named web
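The three rules above could be written as a single fan-out Ingress (a minimal sketch; the host domain.com and the Service names api, download and web are illustrative assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-demo
spec:
  rules:
  - host: domain.com
    http:
      paths:
      # each path prefix is routed to a different backend Service
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 80
      - path: /download
        pathType: Prefix
        backend:
          service:
            name: download
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```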

II. Ingress Components

The key parts of Ingress:

  • Ingress Controller: watches the /ingress endpoint of the APIServer; based on the Ingress definitions it generates the configuration file Nginx needs, then runs a reload command to reload the Nginx configuration
  • Default Backend: when the URL a client requests does not exist, this backend service returns a 404 response. Any service can fill this role as long as it serves a 404 error page plus a /healthz endpoint for the kubelet health check. Also, because the Ingress Controller reaches it by service name, cluster DNS must be working
  • Ingress policy definition: simply put, it declares which domain or path is forwarded to which backend (ServiceName + ServicePort)
  • IngressClass: binds an Ingress to a specific Ingress Controller
  • Service: by referencing a Service, the Ingress Controller discovers the Pod Endpoints behind it and forwards requests to them according to the Ingress path rules

III. Ingress Resources

Ingress

As mentioned above, Ingress is a resource object. Its definition mainly declares which Service or Resource requests for a particular domain and path should be forwarded to. It is only a description; the component that actually does the work is the Ingress Controller.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    # Annotation declaring an nginx rewrite rule
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # Declare which Ingress Controller to use
  ingressClassName: external-lb
  rules:
  - http:
      paths:
      - path: /testpath
        # Exact: match the URL path exactly, case-sensitively
        # Prefix: match on URL path prefixes split by / (case-sensitive)
        pathType: Prefix
        backend: 
          # Forward requests under /testpath to the Service named ingress-demo1-svc
          service:
            name: ingress-demo1-svc
            # Service port number
            port:
              number: 80

IngressClass

Since v1.18, Kubernetes has provided the IngressClass resource object. Because a cluster can run multiple Ingress controllers, IngressClass lets each Ingress declare which controller should handle it.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb
spec:
  # Declare the controller to use
  controller: nginx-ingress-internal-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb

IngressController

ingress-nginx

The Ingress Controller continuously watches Ingress, Service, Endpoints, Secret, ConfigMap and other resource objects for changes, automatically generates an up-to-date Nginx configuration file, and runs a reload command to reload the Nginx configuration.

IV. Using Ingress

0. DNS resolution

DNS resolution for mini.yo-yo.fun has already been configured on Alibaba Cloud, and a free SSL certificate has been issued for it. Below we will configure it with ingress-nginx.

Test the resolution

$ nslookup mini.yo-yo.fun
Server:		100.100.2.136
Address:	100.100.2.136#53

Non-authoritative answer:
Name:	mini.yo-yo.fun
Address: 39.104.80.87

1. Prepare a backend application

Before creating the ingress controller, let's create an nginx service to act as the backend application.

# my-nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  type: ClusterIP
  selector:
    app: my-nginx
  ports:
    - port: 80
      protocol: TCP
      # targetPort: 80
      name: http

Create the resources and check their status

$ kc apply -f my-nginx.yml 
deployment.apps/my-nginx created
service/my-nginx created

$ kc get pods
NAME    READY   STATUS    RESTARTS   AGE
my-nginx-744c4ff7d7-wh8kx   1/1     Running   0    13s

$ kc get svc -l app=my-nginx
NAME    TYPE    CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
my-nginx   ClusterIP   10.101.138.157   <none>    80/TCP    65s

Test access to the nginx pod through the Service

$ curl 10.101.138.157    
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

OK, the test service is ready.

2. Install with Helm

ingress-nginx is usually deployed on edge nodes to expose service addresses outside the cluster.

Install ingress-nginx with Helm

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories
$ helm repo list                                                        
NAME         	URL                                       
stable       	http://mirror.azure.cn/kubernetes/charts/ 
ingress-nginx	https://kubernetes.github.io/ingress-nginx
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm show values ingress-nginx/ingress-nginx

3. Values configuration

The configuration has two main parts:

  • controller: declares the workload type, enables hostNetwork (container ports listen directly on the host), turns on the admissionWebhooks admission checks, and pins the controller to specific nodes
  • defaultBackend: enables the default backend and configures its image

Download and unpack the chart. The version here is 4.6.0; the options below may not be valid for other versions

$ helm fetch ingress-nginx/ingress-nginx
$ tar xf ingress-nginx-4.6.0.tgz

Review ingress-nginx/values.yaml and adjust it to your needs; the ingress-nginx/ci directory contains some example configurations.

I made a few small changes; the example configuration is as follows:

# deployment-prod.yaml
controller:
  kind: Deployment
  name: controller
  image:
    repository: lotusching/ingress-nginx-controller
    tag: "v1.7.0"
    digest:

  dnsPolicy: ClusterFirstWithHostNet

  hostNetwork: true

  publishService:  # set to false in hostNetwork mode; ingress status is reported via the node IP instead
    enabled: false

  # Whether to handle Ingress objects that carry neither the ingressClass annotation nor the ingressClassName field
  # Setting this to true adds the --watch-ingress-without-class flag to the controller's startup arguments
  watchIngressWithoutClass: false


  tolerations:   # kubeadm clusters taint the master by default; tolerate the taint so the pod can run there
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    effect: "NoSchedule"

  nodeSelector:   # pin to the k8s-master01 node
    kubernetes.io/hostname: k8s-master01

  service:  # no Service is needed in hostNetwork mode
    enabled: false

  admissionWebhooks: # strongly recommended to keep the admission webhook enabled
    enabled: true
    createSecretJob:
      resources:
        limits:
          cpu: 10m
          memory: 20Mi
        requests:
          cpu: 10m
          memory: 20Mi
    patchWebhookJob:
      resources:
        limits:
          cpu: 10m
          memory: 20Mi
        requests:
          cpu: 10m
          memory: 20Mi
    patch:
      enabled: true
      image:
        repository: lotusching/ingress-nginx-kube-webhook-certgen
        tag: v1.1.1
        digest:

defaultBackend:  # configure the default backend
  enabled: true
  name: defaultbackend
  image:
    repository: lotusching/ingress-nginx-defaultbackend
    tag: "1.5"

Create the Helm release

$ helm upgrade --install ingress-nginx ./ingress-nginx -f ./deployment-prod.yaml --namespace ingress-nginx

Release "ingress-nginx" does not exist. Installing it now.
NAME: ingress-nginx
LAST DEPLOYED: Wed Apr 26 18:11:40 2023
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller'

# The ingress controller is installed; it also prints an example ingress configuration for us
An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Check the resources

$ kc get pods -n ingress-nginx -o wide
NAME                                            READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
ingress-nginx-controller-59f5b986d8-8hmgd       1/1     Running   0          90s   172.16.0.2    k8s-master01   <none>           <none>
ingress-nginx-defaultbackend-7bd6dbc477-nhcnq   1/1     Running   0          90s   10.244.2.20   k8s-worker03   <none>           <none>

OK, the ingress-nginx controller is up. Ports 80 and 443 on master1 are now listening, and since we configured hostNetwork mode it can be reached over the public internet.

$ curl 39.104.80.87    
default backend - 404   
$ curl mini.yo-yo.fun
default backend - 404   

As you can see, with no configuration in place, requests hit the defaultbackend and get a 404 response: we have not yet defined an Ingress, so the ingress-nginx controller does not know which backend application should handle the mini.yo-yo.fun domain.

4. ingress-nginx default configuration

Before rushing to create the Ingress resource, let's first take a look at the ingress-nginx controller's configuration file

$ kc -n ingress-nginx exec -it ingress-nginx-controller-59f5b986d8-8hmgd  -- cat /etc/nginx/nginx.conf

# Configuration checksum: 12236047274793387912

# setup custom paths that do not require root access
pid /tmp/nginx/nginx.pid;

daemon off;

worker_processes 2;

worker_rlimit_nofile 64512;

worker_shutdown_timeout 240s ;

events {
	multi_accept        on;
	worker_connections  16384;
	use                 epoll;
	
}

http {
	# ...
	
	init_by_lua_block {
    # ...
 	}
	
	init_worker_by_lua_block {
    # ...
	}
	
	geoip_country       /etc/nginx/geoip/GeoIP.dat;
	geoip_city          /etc/nginx/geoip/GeoLiteCity.dat;
	geoip_org           /etc/nginx/geoip/GeoIPASNum.dat;
	geoip_proxy_recursive on;
	
	aio                 threads;
	aio_write           on;
	
	tcp_nopush          on;
	tcp_nodelay         on;
	
	log_subrequest      on;
	
	reset_timedout_connection on;
	
	keepalive_timeout  75s;
	keepalive_requests 1000;
	
	client_body_temp_path           /tmp/nginx/client-body;
	fastcgi_temp_path               /tmp/nginx/fastcgi-temp;
	proxy_temp_path                 /tmp/nginx/proxy-temp;
	ajp_temp_path                   /tmp/nginx/ajp-temp;
	
	client_header_buffer_size       1k;
	client_header_timeout           60s;
	large_client_header_buffers     4 8k;
	client_body_buffer_size         8k;
	client_body_timeout             60s;
	
	http2_max_field_size            4k;
	http2_max_header_size           16k;
	http2_max_requests              1000;
	http2_max_concurrent_streams    128;
	
	types_hash_max_size             2048;
	server_names_hash_max_size      1024;
	server_names_hash_bucket_size   32;
	map_hash_bucket_size            64;
	
	proxy_headers_hash_max_size     512;
	proxy_headers_hash_bucket_size  64;
	
	variables_hash_bucket_size      256;
	variables_hash_max_size         2048;
	
	underscores_in_headers          off;
	ignore_invalid_headers          on;
	
	limit_req_status                503;
	limit_conn_status               503;
	
	include /etc/nginx/mime.types;
	default_type text/html;
	
	# Custom headers for response
	
	server_tokens off;
	
	more_clear_headers Server;
	
	# disable warnings
	uninitialized_variable_warn off;
	
    
	# Access log format settings
	# Additional available variables:
	# $namespace
	# $ingress_name
	# $service_name
	# $service_port
	log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
	
	map $request_uri $loggable {
		default 1;
	}
	
	access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
	error_log  /var/log/nginx/error.log notice;
    
	# Use CoreDNS as the DNS resolver
	resolver 10.96.0.10 valid=30s ipv6=off;
	
	# See https://www.nginx.com/blog/websocket-nginx
	map $http_upgrade $connection_upgrade {
		default          upgrade;
		# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
		''               '';
	}
	
	# Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
	# If no such header is provided, it can provide a random value.
	map $http_x_request_id $req_id {
		default   $http_x_request_id;
		""        $request_id;
	}
	
	# Create a variable that contains the literal $ character.
	# This works because the geo module will not resolve variables.
	geo $literal_dollar {
		default "$";
	}
	
	server_name_in_redirect off;
	port_in_redirect        off;
	
	ssl_protocols TLSv1.2 TLSv1.3;
	
	ssl_early_data off;
	
	# turn on session caching to drastically improve performance
	
	ssl_session_cache shared:SSL:10m;
	ssl_session_timeout 10m;
	
	# allow configuring ssl session tickets
	ssl_session_tickets off;
	
	# slightly reduce the time-to-first-byte
	ssl_buffer_size 4k;
	
	# allow configuring custom ssl ciphers
	ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
	ssl_prefer_server_ciphers on;
	
	ssl_ecdh_curve auto;
	# A default SSL certificate is provided (not trusted by browsers)
	# PEM sha: 25fd37397ea7fa1a2dda71c772e4e89f1046b12b
	ssl_certificate     /etc/ingress-controller/ssl/default-fake-certificate.pem;
	ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
	
	proxy_ssl_session_reuse on;
	
	upstream upstream_balancer {
        # In current versions, ingress-nginx no longer generates individual server entries
        # in the upstream block from backend pod IPs; backends are handled dynamically by Lua
		### Attention!!!
		#
		# We no longer create "upstream" section for every backend.
		# Backends are handled dynamically using Lua. If you would like to debug
		# and see what backends ingress-nginx has in its memory you can
		# install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
		# Once you have the plugin you can use "kubectl ingress-nginx backends" command to
		# inspect current backends.
		#
		###
		
		server 0.0.0.1; # placeholder
		
		balancer_by_lua_block {
			balancer.balance()
		}
		
		keepalive 320;
		keepalive_time 1h;
		keepalive_timeout  60s;
		keepalive_requests 10000;
		
	}
	
	# Cache for internal auth checks
	proxy_cache_path /tmp/nginx/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;
	
	# Global filters
	
    # Note that no domain-based virtual hosts are configured here yet
	## start server _
	server {
		server_name _ ;
		
        # The default configuration listens on 80 and 443
		listen 80 default_server reuseport backlog=4096 ;
		listen 443 default_server reuseport backlog=4096 ssl http2 ;
		
		set $proxy_upstream_name "-";
		ssl_reject_handshake off;
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		location / {
			
			set $namespace      "";
			set $ingress_name   "";
			set $service_name   "";
			set $service_port   "";
			set $location_path  "";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = false,
					force_no_ssl_redirect = false,
					preserve_trailing_slash = false,
					use_port_in_redirects = false,
					global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				monitor.call()
				plugins.run()
			}
			
			access_log off;
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "upstream-default-backend";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			proxy_pass http://upstream_balancer;
			proxy_redirect                          off;
			
		}
		
		# health checks in cloud providers require the use of port 80
		location /healthz {
			access_log off;
			return 200;
		}
		
		# this is required to avoid error if nginx is being monitored
		# with an external software (like sysdig)
		location /nginx_status {
			allow 127.0.0.1;
			deny all;
			access_log off;
			stub_status on;
		}
		
	}
	## end server _
    
	# When no default backend is set, this server on port 8181 returns 404 directly for unmatched http requests
	# backend for when default-backend-service is not configured or it does not have endpoints
	server {
		listen 8181 default_server reuseport backlog=4096;
		set $proxy_upstream_name "internal";
		access_log off;
		
		location / {
			return 404;
		}
	}
	
	# default server, used for NGINX healthcheck and access to nginx stats
	server {
		listen 127.0.0.1:10246;
		set $proxy_upstream_name "internal";
		
		keepalive_timeout 0;
		gzip off;
		
		access_log off;
		
		location /healthz {
			return 200;
		}
		
		location /is-dynamic-lb-initialized {
			content_by_lua_block {
				local configuration = require("configuration")
				local backend_data = configuration.get_backends_data()
				if not backend_data then
				ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
				return
				end
				ngx.say("OK")
				ngx.exit(ngx.HTTP_OK)
			}
		}
		
		location /nginx_status {
			stub_status on;
		}
		
		location /configuration {
			client_max_body_size                    21M;
			client_body_buffer_size                 21M;
			proxy_buffering                         off;
			
			content_by_lua_block {
    		# ...
			}
		}
		
		location / {
			content_by_lua_block {
				ngx.exit(ngx.HTTP_NOT_FOUND)
			}
		}
	}
}

stream {
    # ...
}

Looking through ingress-nginx's default configuration file, the server blocks are the part to focus on:

  • the http section defines 3 server blocks
    • the first server block listens on ports 80 and 443
    • the second server block listens on 8181 and, when no default backend is set, returns 404 directly for http requests that match nothing
    • the third server block listens on 127.0.0.1:10246 for status information and health checks

5. Ingress definition

Having skimmed the ingress-nginx controller configuration, we can now create the Ingress resource to declare our routing rules and which ingress controller they apply to. The command below lists the available IngressClasses

$ kc get ingressclass nginx    
NAME    CONTROLLER    PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>    60m
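If several controllers (and IngressClasses) coexist, one class can be marked as the cluster default so that Ingress objects without an ingressClassName are still picked up. A sketch using the standard ingressclass.kubernetes.io/is-default-class annotation (applying it to the nginx class listed above is an assumption, adjust as needed):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    # Ingresses that omit ingressClassName fall back to this class
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
```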

Create the TLS secret

$ kc create secret tls mini-yoyo-tls --cert=mini.yo-yo.fun.pem --key=mini.yo-yo.fun.key             
secret/mini-yoyo-tls created

Next we create the Ingress resource, declaring via ingressClassName that the nginx class (and thus the ingress-nginx controller) should handle it

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
  namespace: default
spec:
  # Use the nginx IngressClass (bound to the ingress-nginx controller)
  ingressClassName: nginx
  tls:
    - hosts: 
      - mini.yo-yo.fun
      secretName: mini-yoyo-tls
  rules:
    - host: mini.yo-yo.fun
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
             # Send all requests to port 80 of the my-nginx service
             # The Service is only used to look up the backend Pod Endpoints; requests are
             # forwarded straight to the Pods, skipping the Service hop for better performance
              service:
                name: my-nginx
                port:
                  number: 80

Create the Ingress resource

$ kc apply -f ingress.yml

6. Verification

Watch the ingress-nginx controller's log output

$ kc logs -n ingress-nginx -f ingress-nginx-controller-59f5b986d8-8hmgd
# ..
# A warning here says the certificate cannot be found yet, so the default certificate is used; we'll verify this below
W0426 11:23:03.644124       7 controller.go:1372] Error getting SSL certificate "default/mini-yoyo-tls": local SSL certificate default/mini-yoyo-tls was not found. Using default certificate
# The admission controller checks whether the ingress configuration is valid
I0426 11:23:03.694574       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.05s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:18.1kBs testedConfigurationSize:0.051}
# Configuration check complete, accepted
I0426 11:23:03.694612       7 main.go:100] "successfully validated configuration, accepting" ingress="default/my-nginx"
# Adding default/mini-yoyo-tls to the local store
I0426 11:23:03.698846       7 backend_ssl.go:67] "Adding secret to local store" name="default/mini-yoyo-tls"
# An Ingress resource event was observed; a configuration sync is scheduled
I0426 11:23:03.699156       7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"my-nginx", UID:"64d771d9-db1f-4f45-a087-b0d9e48e0e5e", APIVersion:"networking.k8s.io/v1", ResourceVersion:"37545", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync

Check the nginx server configuration inside ingress-nginx again; a new server block (virtual host) has been generated automatically

## end server _

## start server mini.yo-yo.fun
server {
	server_name mini.yo-yo.fun ;
	listen 80  ;
	listen 443  ssl http2 ;
	set $proxy_upstream_name "-";
	ssl_certificate_by_lua_block {
		certificate.call()
	}
	
	location / {
		set $namespace      "default";
		set $ingress_name   "my-nginx";
		set $service_name   "my-nginx";
		set $service_port   "80";
		set $location_path  "/";
		set $global_rate_limit_exceeding n;
		
		rewrite_by_lua_block {
           # ...
		}

		
		header_filter_by_lua_block {
           # ...
		}
		
		body_filter_by_lua_block {
           # ...
		}
		
		log_by_lua_block {
           # ...
		}
		
		port_in_redirect off;
		set $balancer_ewma_score -1;
		set $proxy_upstream_name "default-my-nginx-80";
		set $proxy_host          $proxy_upstream_name;
		set $pass_access_scheme  $scheme;
		set $pass_server_port    $server_port;
		set $best_http_host      $http_host;
		set $pass_port           $pass_server_port;
		set $proxy_alternative_upstream_name "";
		client_max_body_size                    1m;
		proxy_set_header Host                   $best_http_host;
		# Pass the extracted client certificate to the backend
		# Allow websocket connections
		proxy_set_header                        Upgrade           $http_upgrade;
		proxy_set_header                        Connection        $connection_upgrade;
		proxy_set_header X-Request-ID           $req_id;
		proxy_set_header X-Real-IP              $remote_addr;
		proxy_set_header X-Forwarded-For        $remote_addr;
		proxy_set_header X-Forwarded-Host       $best_http_host;
		proxy_set_header X-Forwarded-Port       $pass_port;
		proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
		proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
		proxy_set_header X-Scheme               $pass_access_scheme;
		
		# Pass the original X-Forwarded-For
		proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
		
		# mitigate HTTPoxy Vulnerability
		# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
		proxy_set_header Proxy                  "";
		
		# Custom headers to proxied server
		proxy_connect_timeout                   5s;
		proxy_send_timeout                      60s;
		proxy_read_timeout                      60s;
		proxy_buffering                         off;
		proxy_buffer_size                       4k;
		proxy_buffers                           4 4k;
		proxy_max_temp_file_size                1024m;
		proxy_request_buffering                 on;
		proxy_http_version                      1.1;
		proxy_cookie_domain                     off;
		proxy_cookie_path                       off;
		# In case of errors try the next upstream server before returning an error
		proxy_next_upstream                     error timeout;
		proxy_next_upstream_timeout             0;
		proxy_next_upstream_tries               3;
		proxy_pass http://upstream_balancer;
		proxy_redirect                          off;
	}
}
## end server mini.yo-yo.fun
# backend for when default-backend-service is not configured or it does not have endpoints

Let's test access

# The 308 here is caused by the nginx.ingress.kubernetes.io/ssl-redirect annotation, explained later
$ curl -I http://mini.yo-yo.fun                   
HTTP/1.1 308 Permanent Redirect
Date: Wed, 26 Apr 2023 11:30:54 GMT
Content-Type: text/html
Content-Length: 164
Connection: keep-alive
Location: https://mini.yo-yo.fun

$ curl -I https://mini.yo-yo.fun
HTTP/1.1 200 OK
Date: Wed, 26 Apr 2023 11:30:29 GMT
Content-Type: text/html
Content-Length: 615
Connection: keep-alive
Last-Modified: Tue, 28 Dec 2021 15:28:38 GMT
ETag: "61cb2d26-267"
Accept-Ranges: bytes
Strict-Transport-Security: max-age=15724800; includeSubDomains

$ curl https://mini.yo-yo.fun
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
# ...
</html>

Check that the certificate is the one we configured

$ curl -vvI https://mini.yo-yo.fun 2>&1 | awk 'BEGIN { cert=0 } /^\* SSL connection/ { cert=1 } /^\*/ { if (cert) print }' 
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* 	subject: CN=mini.yo-yo.fun
* 	start date: Jul 16 00:00:00 2022 GMT
* 	expire date: Jul 16 23:59:59 2023 GMT
* 	common name: mini.yo-yo.fun
* 	issuer: CN=Encryption Everywhere DV TLS CA - G1,OU=www.digicert.com,O=DigiCert Inc,C=US
* Connection #0 to host mini.yo-yo.fun left intact

OK, all good.

To deepen understanding, let's trace the request path: client → DNS (mini.yo-yo.fun → the node's public IP) → ports 80/443 on the node (the hostNetwork controller) → the mini.yo-yo.fun server block → upstream_balancer (Lua dynamic balancing) → a my-nginx Pod endpoint.

V. Advanced Ingress

basic auth

Add basic authentication to the site

$ htpasswd -c auth admin                                                                   
New password:   # admin
Re-type new password:   # admin
Adding password for user admin

Create a Secret object from the password file

$ kc create secret generic basic-auth --from-file=auth
secret/basic-auth created

Configure the Ingress to use the secret

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-auth
  namespace: default
  annotations:
    # authentication type
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret object
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message displayed with the authentication prompt
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
spec:
  # Use the nginx IngressClass (bound to the ingress-nginx controller)
  ingressClassName: nginx
  tls:
    - hosts:
      - mini.yo-yo.fun
      secretName: mini-yoyo-tls
  rules:
    - host: mini.yo-yo.fun
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
             # Send all requests to port 80 of the my-nginx service
             # The Service is only used to look up the backend Pod Endpoints; requests are
             # forwarded straight to the Pods, skipping the Service hop for better performance
              service:
                name: my-nginx
                port:
                  number: 80

Create the resource

$ kc apply -f ingress-with-auth.yml                        
ingress.networking.k8s.io/ingress-with-auth created

Test access

$ curl https://admin:admin@mini.yo-yo.fun
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }

url rewrite

We can use ingress-nginx as a gateway and configure all kinds of rewrite rules on it

  • nginx.ingress.kubernetes.io/rewrite-target (string): the target URI to rewrite matched requests to
  • nginx.ingress.kubernetes.io/ssl-redirect (bool): require SSL access (enabled by default once a certificate is configured), which is why the earlier plain-http request returned 308 Permanent Redirect
  • nginx.ingress.kubernetes.io/force-ssl-redirect (bool): force SSL access
  • nginx.ingress.kubernetes.io/app-root (string): define the application root
  • nginx.ingress.kubernetes.io/use-regex (bool): use regular expressions to match paths
  • nginx.ingress.kubernetes.io/configuration-snippet (string): insert a configuration snippet into nginx.conf
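For example, to keep serving plain HTTP instead of issuing the 308 redirect to HTTPS, the ssl-redirect annotation can be switched off on the Ingress (a fragment only; the rest of the manifest stays as defined earlier):

```yaml
metadata:
  annotations:
    # disable the automatic 308 redirect from http to https
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
```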

The rewrite-target annotation

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-ingress
  namespace: default
  annotations:
    # reference the capture groups matched by the path below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - mini.yo-yo.fun
      secretName: mini-yoyo-tls
  rules:
    - host: mini.yo-yo.fun
      http:
        paths:
          - path: /gateway(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: my-nginx
                port:
                  number: 80

Create the Ingress resource

# delete the old definition
$ kc delete -f ingress-with-auth.yml                                   
ingress.networking.k8s.io "ingress-with-auth" deleted


$ kc apply -f ingress-rewrite.yml
ingress.networking.k8s.io/rewrite-ingress created

Watch the ingress-nginx controller log

I0426 15:27:05.640392       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.051s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:21.9kBs testedConfigurationSize:0.052}
I0426 15:27:05.640439       7 main.go:100] "successfully validated configuration, accepting" ingress="default/rewrite-ingress"
I0426 15:27:05.645659       7 store.go:433] "Found valid IngressClass" ingress="default/rewrite-ingress" ingressclass="nginx"
I0426 15:27:05.648028       7 controller.go:189] "Configuration changes detected, backend reload required"
I0426 15:27:05.650840       7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"rewrite-ingress", UID:"35da2d7e-4b1e-4d2e-9335-7d549a626d03", APIVersion:"networking.k8s.io/v1", ResourceVersion:"5900", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0426 15:27:05.766806       7 controller.go:206] "Backend successfully reloaded"
I0426 15:27:05.767550       7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59f5b986d8-mbfj4", UID:"5eb7399a-3344-4a91-bff7-3b1dba4f7bbb", APIVersion:"v1", ResourceVersion:"2767", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration

The configuration was generated successfully.

Let's test access

$ curl https://mini.yo-yo.fun 
default backend - 404#                                                                                                                              
$ curl https://mini.yo-yo.fun/gateway 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
# ...

Accessing https://mini.yo-yo.fun returns 404. This is easy to understand: our Ingress does not define how to handle the / path, so the request falls through to the default backend.

https://mini.yo-yo.fun/gateway is rewritten to / according to our rewrite rule.

Since nothing follows /gateway, the request hits the root path of the backend nginx pod. If we append a suffix the backend does not serve, we also get a 404, but that one comes from the backend pod itself, as shown below

$ curl https://mini.yo-yo.fun/gateway/123 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>

The corresponding backend nginx pod log:

10.244.0.0 - - [26/Apr/2023:15:33:33 +0000] "GET /123 HTTP/1.1" 404 153 "-" "curl/7.29.0" "47.115.121.119"
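The behavior of the generated rewrite rule can be sketched in a few lines of Python. This is an illustrative simulation of the regex, not the controller's actual code:

```python
import re

# The controller generates: rewrite "(?i)/gateway(/|$)(.*)" /$2 break;
# This sketch reproduces how /$2 maps request paths.
pattern = re.compile(r"(?i)/gateway(/|$)(.*)")

def rewrite(uri: str) -> str:
    m = pattern.match(uri)
    if not m:
        return uri           # no match: URI passes through unchanged
    return "/" + m.group(2)  # /$2 keeps only the second capture group

print(rewrite("/gateway"))      # -> /
print(rewrite("/gateway/"))     # -> /
print(rewrite("/gateway/123"))  # -> /123 (backend has no /123, hence its 404)
```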

How do we fix the 404 on the bare domain? Two approaches:

  1. Configure a path rule for /
  2. Add annotations

The first approach is straightforward, but the configuration is redundant:

paths:
  - path: /
    pathType: Prefix
    backend:
      service:
        name: my-nginx
        port:
          number: 80
  - path: /gateway(/|$)(.*)
    pathType: Prefix
    backend:
      service:
        name: my-nginx
        port:
          number: 80

Using annotations is much cleaner:

annotations:
  # redirect requests for / to the app root /gateway/
  nginx.ingress.kubernetes.io/app-root: /gateway/
  # rewrite to the second capture group of the path regex
  nginx.ingress.kubernetes.io/rewrite-target: /$2

Update the resource and test access:

$ curl https://mini.yo-yo.fun
<html>
<head><title>302 Found</title></head>
<body>
<center><h1>302 Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

# follow redirects with -L
$ curl -L https://mini.yo-yo.fun
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
# ...

Sometimes we want a trailing / appended to the URL automatically, to accommodate backends such as Django. The nginx.ingress.kubernetes.io/configuration-snippet annotation lets us insert a small snippet into the generated nginx configuration:

nginx.ingress.kubernetes.io/rewrite-target: /$2
# insert a small snippet into the nginx configuration
nginx.ingress.kubernetes.io/configuration-snippet: |
  rewrite ^(/gateway)$ $1/ redirect;

Update the resource and test access:

$ curl -vvv -L https://mini.yo-yo.fun
# ...
< Location: https://mini.yo-yo.fun/gateway/
# ...
* Issue another request to this URL: 'https://mini.yo-yo.fun/gateway/'


$ curl -vvv -L https://mini.yo-yo.fun/gateway
# ...
< Location: https://mini.yo-yo.fun/gateway/
# ...
* Issue another request to this URL: 'https://mini.yo-yo.fun/gateway/'

OK, this matches expectations.

Let's inspect the generated ingress-nginx configuration:

## start server mini.yo-yo.fun
server {
	server_name mini.yo-yo.fun ;
	
	listen 80  ;
	listen 443  ssl http2 ;
	
	set $proxy_upstream_name "-";
	
	ssl_certificate_by_lua_block {
		certificate.call()
	}
	
	if ($uri = /) {
		return 302 $scheme://$http_host/gateway/;
	}
	
	location ~* "^/gateway(/|$)(.*)" {
		
		set $namespace      "default";
		set $ingress_name   "rewrite-ingress";
		set $service_name   "my-nginx";
		set $service_port   "80";
		set $location_path  "/gateway(/|${literal_dollar})(.*)";
		set $global_rate_limit_exceeding n;
		
		rewrite_by_lua_block {
		# ...
		}
		header_filter_by_lua_block {
		# ...
		}
		
		body_filter_by_lua_block {
		# ...
		}
		
		log_by_lua_block {
		# ...
		}
		
		port_in_redirect off;
		
		set $balancer_ewma_score -1;
		set $proxy_upstream_name "default-my-nginx-80";
		set $proxy_host          $proxy_upstream_name;
		set $pass_access_scheme  $scheme;
		
		set $pass_server_port    $server_port;
		
		set $best_http_host      $http_host;
		set $pass_port           $pass_server_port;
		
		set $proxy_alternative_upstream_name "";
		
		client_max_body_size                    1m;
		
		proxy_set_header Host                   $best_http_host;
		
		# Pass the extracted client certificate to the backend
		
		# Allow websocket connections
		proxy_set_header                        Upgrade           $http_upgrade;
		
		proxy_set_header                        Connection        $connection_upgrade;
		
		proxy_set_header X-Request-ID           $req_id;
		proxy_set_header X-Real-IP              $remote_addr;
		
		proxy_set_header X-Forwarded-For        $remote_addr;
		
		proxy_set_header X-Forwarded-Host       $best_http_host;
		proxy_set_header X-Forwarded-Port       $pass_port;
		proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
		proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
		
		proxy_set_header X-Scheme               $pass_access_scheme;
		
		# Pass the original X-Forwarded-For
		proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
		
		# mitigate HTTPoxy Vulnerability
		# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
		proxy_set_header Proxy                  "";
		
		# Custom headers to proxied server
		
		proxy_connect_timeout                   5s;
		proxy_send_timeout                      60s;
		proxy_read_timeout                      60s;
		
		proxy_buffering                         off;
		proxy_buffer_size                       4k;
		proxy_buffers                           4 4k;
		
		proxy_max_temp_file_size                1024m;
		
		proxy_request_buffering                 on;
		proxy_http_version                      1.1;
		
		proxy_cookie_domain                     off;
		proxy_cookie_path                       off;
		
		# In case of errors try the next upstream server before returning an error
		proxy_next_upstream                     error timeout;
		proxy_next_upstream_timeout             0;
		proxy_next_upstream_tries               3;
		
		# the small snippet inserted above
		rewrite ^(/gateway)$ $1/ redirect;
		
		rewrite "(?i)/gateway(/|$)(.*)" /$2 break;
		proxy_pass http://upstream_balancer;
		
		proxy_redirect                          off;
		
	}
	
	if ($uri = /) {
		return 302 $scheme://$http_host/gateway/;
	}
	
	location ~* "^/" {
		
		set $namespace      "default";
		set $ingress_name   "rewrite-ingress";
		set $service_name   "";
		set $service_port   "";
		set $location_path  "/";
		set $global_rate_limit_exceeding n;
		
		rewrite_by_lua_block {
		# ...
		}

		#access_by_lua_block {
		#}
		
		header_filter_by_lua_block {
		# ...
		}
		
		body_filter_by_lua_block {
		# ...
		}
		
		log_by_lua_block {
		# ...
		}
		
		port_in_redirect off;
		
		set $balancer_ewma_score -1;
		set $proxy_upstream_name "upstream-default-backend";
		set $proxy_host          $proxy_upstream_name;
		set $pass_access_scheme  $scheme;
		
		set $pass_server_port    $server_port;
		
		set $best_http_host      $http_host;
		set $pass_port           $pass_server_port;
		
		set $proxy_alternative_upstream_name "";
		
		client_max_body_size                    1m;
		
		proxy_set_header Host                   $best_http_host;
		
		# Pass the extracted client certificate to the backend
		
		# Allow websocket connections
		proxy_set_header                        Upgrade           $http_upgrade;
		
		proxy_set_header                        Connection        $connection_upgrade;
		
		proxy_set_header X-Request-ID           $req_id;
		proxy_set_header X-Real-IP              $remote_addr;
		
		proxy_set_header X-Forwarded-For        $remote_addr;
		
		proxy_set_header X-Forwarded-Host       $best_http_host;
		proxy_set_header X-Forwarded-Port       $pass_port;
		proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
		proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
		
		proxy_set_header X-Scheme               $pass_access_scheme;
		
		# Pass the original X-Forwarded-For
		proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
		
		# mitigate HTTPoxy Vulnerability
		# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
		proxy_set_header Proxy                  "";
		
		# Custom headers to proxied server
		
		proxy_connect_timeout                   5s;
		proxy_send_timeout                      60s;
		proxy_read_timeout                      60s;
		
		proxy_buffering                         off;
		proxy_buffer_size                       4k;
		proxy_buffers                           4 4k;
		
		proxy_max_temp_file_size                1024m;
		
		proxy_request_buffering                 on;
		proxy_http_version                      1.1;
		
		proxy_cookie_domain                     off;
		proxy_cookie_path                       off;
		# In case of errors try the next upstream server before returning an error
		proxy_next_upstream                     error timeout;
		proxy_next_upstream_timeout             0;
		proxy_next_upstream_tries               3;
		rewrite ^(/gateway)$ $1/ redirect;
		rewrite "(?i)/" /$2 break;
		proxy_pass http://upstream_balancer;
		proxy_redirect                          off;
	}
}

Canary (gray) releases

ingress-nginx supports canary, blue-green, and A/B release strategies through annotations. It provides four annotation rules:

| Annotation | Description | Typical scenario |
| --- | --- | --- |
| nginx.ingress.kubernetes.io/canary-by-header | Decides whether a request goes to the canary version based on the request header value: always routes it there, never keeps it away | Canary releases, A/B testing |
| nginx.ingress.kubernetes.io/canary-by-header-value | Routes requests whose header carries this value to the service specified in the canary Ingress; must be used together with canary-by-header | Canary releases, A/B testing |
| nginx.ingress.kubernetes.io/canary-weight | Routes a percentage of incoming traffic by weight; usually set on the canary Ingress, e.g. canary-weight: "30" sends 30% of traffic to the endpoints behind the canary service | Blue-green deployment |
| nginx.ingress.kubernetes.io/canary-by-cookie | Decides whether a request goes to the canary version based on a cookie value: always routes it there, never keeps it away | A/B testing |
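The precedence among these annotations (header first, then cookie, then weight) can be sketched as follows. This is an illustrative simulation, not ingress-nginx's actual code, and the function name is ours:

```python
import random

def route_to_canary(headers, cookies, *, header_key="canary",
                    header_value=None, cookie_key=None, weight=0):
    """Illustrative precedence: canary-by-header > canary-by-cookie > canary-weight."""
    hv = headers.get(header_key)
    if header_value is not None:
        if hv == header_value:   # canary-by-header-value match
            return True
    elif hv == "always":
        return True
    elif hv == "never":
        return False
    cv = cookies.get(cookie_key) if cookie_key else None
    if cv == "always":
        return True
    if cv == "never":
        return False
    # fall back to weight-based random routing
    return random.random() * 100 < weight

assert route_to_canary({"canary": "always"}, {}) is True
assert route_to_canary({"canary": "never"}, {}, weight=100) is False
assert route_to_canary({"canary": "abc"}, {}, header_value="abc") is True
```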

1. Production version resources

First, deploy a workload that can report its own basic information:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: production
  labels:
    app: production
spec:
  selector:
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
    spec:
      containers:
      - name: production
        image: cnych/echoserver
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: production
  labels:
    app: production
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    app: production

Create the production workload and its Service:

$ kc apply -f production-app.yml  
deployment.apps/production created
service/production created

Define the Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - mini.yo-yo.fun
      secretName: mini-yoyo-tls
  rules:
  - host: mini.yo-yo.fun
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: production
            port:
              number: 80

Create the resource:

$ kc apply -f canary-ingress-demo1.yml 
ingress.networking.k8s.io/production created

Test access:

$ curl https://mini.yo-yo.fun                


Hostname: production-856d5fb99-rnkjm

Pod Information:
	node name:	k8s-worker02
	pod name:	production-856d5fb99-rnkjm
	pod namespace:	default
	pod IP:	10.244.1.5

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.0
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://mini.yo-yo.fun:8080/

Request Headers:
	accept=*/*
	host=mini.yo-yo.fun
	user-agent=curl/7.29.0
	x-forwarded-for=47.115.121.119
	x-forwarded-host=mini.yo-yo.fun
	x-forwarded-port=443
	x-forwarded-proto=https
	x-forwarded-scheme=https
	x-real-ip=47.115.121.119
	x-request-id=74896dadfba59c06ad1dbb4f3b82a53c
	x-scheme=https

Request Body:
	-no body in request-

2. Canary version resources

We now have the production workload and its Service.

Suppose we want to release a new version using the canary strategy. First, create a canary workload and Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary
  labels:
    app: canary
spec:
  selector:
    matchLabels:
      app: canary
  template:
    metadata:
      labels:
        app: canary
    spec:
      containers:
      - name: canary
        image: cnych/echoserver
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: canary
  labels:
    app: canary
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    app: canary

Create the resources:

$ kc apply -f canary-app.yml                                           
deployment.apps/canary created
service/canary created

$ kc get pods
NAME                         READY   STATUS              RESTARTS   AGE
canary-66cb497b7f-r2vjv      0/1     ContainerCreating   0          11s
my-nginx-744c4ff7d7-96qz5    1/1     Running             0          26m
production-856d5fb99-rnkjm   1/1     Running             0          10m

OK, both versions of the backend application are ready. Next, we configure the gray-release strategy through annotations.

3. Annotation-based gray-release strategies

① Blue-green deployment: weight-based

At this point the system runs two versions of the business application:

  • Blue: the current version
  • Green: the new version

We set weights for both versions at the ingress-nginx entry point, so that traffic flows to the backends in a predefined ratio.

The general idea is as follows.

Initially, blue (current) has weight 100 and green (new) has weight 0, so no requests reach the green version. Once we have verified that the green version started cleanly, we gradually raise its share at the ingress-nginx entry point (1%, 10%, 25%, 50%) until green is at 100 and blue at 0; at that point all requests flow to the green (new) version.

The Ingress definition:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  # canary ingress
  name: canary
  annotations:
    # enable the canary mechanism
    nginx.ingress.kubernetes.io/canary: "true"
    # send 30% of traffic to the canary version
    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - mini.yo-yo.fun
      secretName: mini-yoyo-tls
  rules:
  - host: mini.yo-yo.fun
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # route matched traffic to the canary service
            name: canary
            port:
              number: 80

Create the resource:

$ kc apply -f canary-ingress-demo2-blue-green.yml 
ingress.networking.k8s.io/canary created

Verify:

$ kc get svc    
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
canary       ClusterIP   10.109.140.0    <none>        80/TCP    18m
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   74m
my-nginx     ClusterIP   10.105.32.73    <none>        80/TCP    44m
production   ClusterIP   10.106.213.41   <none>        80/TCP    28m
$ kubectl get ingress
NAME         CLASS   HOSTS            ADDRESS      PORTS     AGE
canary       nginx   mini.yo-yo.fun   172.16.0.6   80, 443   2m17s
production   nginx   mini.yo-yo.fun   172.16.0.6   80, 443   4m19s
$ kc get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
canary-66cb497b7f-r2vjv      1/1     Running   0          18m   10.244.2.5   k8s-worker03   <none>           <none>
my-nginx-744c4ff7d7-96qz5    1/1     Running   0          44m   10.244.1.3   k8s-worker02   <none>           <none>
production-856d5fb99-rnkjm   1/1     Running   0          28m   10.244.1.5   k8s-worker02   <none>           <none>

Test access in a for loop:

$ for i in `seq 1 10`; do curl -s https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm

Of the ten requests sent, three hit the canary (green, new) version and seven hit production (blue, current).
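Note that the 3/10 split is only approximate: canary-weight routes each request independently at random, so small samples fluctuate. A quick simulation (illustrative only) shows the split converging to the configured weight over many requests:

```python
import random

random.seed(42)

def pick(weight: int) -> str:
    # each request is routed independently with probability weight/100
    return "canary" if random.random() * 100 < weight else "production"

hits = sum(pick(30) == "canary" for _ in range(10_000))
print(hits / 10_000)  # statistically close to 0.30
```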

Now adjust the weight to 50%:

annotations:
  # enable the canary mechanism
  nginx.ingress.kubernetes.io/canary: "true"
  # send 50% of traffic to the canary version
  nginx.ingress.kubernetes.io/canary-weight: "50"

Update the resource definition:

$ kc apply -f canary-ingress-demo2-blue-green.yml
ingress.networking.k8s.io/canary configured

Test access:

$ for i in `seq 1 10`; do curl -s https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm

Note: the update may not take effect immediately; wait a moment if needed.

② Canary: based on request headers
canary-by-header

The approach above splits traffic between versions by a predefined weight. Traffic can also be routed based on an HTTP request header, using the nginx.ingress.kubernetes.io/canary-by-header annotation. Its priority is higher than canary-weight: once canary-by-header is configured, canary-weight is ignored.

annotations:
  # enable the canary mechanism
  nginx.ingress.kubernetes.io/canary: "true"
  # send 50% of traffic to the canary version
  nginx.ingress.kubernetes.io/canary-weight: "50"
  # route based on the canary request header: always / never
  nginx.ingress.kubernetes.io/canary-by-header: canary

Update the Ingress definition:

$ kc apply -f canary-ingress-demo2-blue-green.yml                                  
ingress.networking.k8s.io/canary configured

Set the canary header to always:

$ for i in `seq 1 10`; do curl -s -H "canary: always" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv

As expected, all traffic goes to the canary version.

Set the canary header to never:

$ for i in `seq 1 10`; do curl -s -H "canary: never" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm

With no canary header, or a canary value other than always/never:

$ for i in `seq 1 10`; do curl -s https://mini.yo-yo.fun | grep "Hostname"; done   
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv

$ for i in `seq 1 10`; do curl -s -H "canary: abc" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv

When the canary request header is absent, or its value is neither always nor never, ingress-nginx falls back to the canary-weight rule by default.

canary-by-header-value

With canary-by-header-value we can define a custom header value to replace the default always/never keywords.

For example, to route requests carrying canary: abc to the canary version:

annotations:
  # enable the canary mechanism
  nginx.ingress.kubernetes.io/canary: "true"
  # send 50% of traffic to the canary version
  nginx.ingress.kubernetes.io/canary-weight: "50"
  # route based on the canary request header
  nginx.ingress.kubernetes.io/canary-by-header: canary
  # custom value of the canary header used for matching
  nginx.ingress.kubernetes.io/canary-by-header-value: abc

Update the Ingress definition:

$ kc apply -f canary-ingress-demo3-canary-grey.yml                
ingress.networking.k8s.io/canary configured

Set the canary header to abc:

$ for i in `seq 1 10`; do curl -s -H "canary: abc" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv

As you can see, all requests with the canary header set to abc were routed to the canary version.

Note well: once canary-by-header-value is set, the always/never keywords no longer apply, as shown below.

Set the canary header to never:

$ for i in `seq 1 10`; do curl -s -H "canary: never" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm

Set the canary header to always:

$ for i in `seq 1 10`; do curl -s -H "canary: always" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm

With no canary header, or a canary value other than abc:

$ for i in `seq 1 10`; do curl -s https://mini.yo-yo.fun | grep "Hostname"; done                 
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm

$ for i in `seq 1 10`; do curl -s -H "canary: edf" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm

OK, the role of canary-by-header-value should now be clear.

③ A/B testing: cookie-based

Compared with canary (gray) strategies, A/B testing tends to target traffic by region, device (phone model), or user group; it takes more of a product-operations view.

Routing is based on a cookie in the client request: if the cookie value is always, the request is routed to the canary version; if never, to production.

annotations:
  # enable the canary mechanism
  nginx.ingress.kubernetes.io/canary: "true"
  # send 50% of traffic to the canary version
  nginx.ingress.kubernetes.io/canary-weight: "50"
  # route based on a cookie; same always / never logic
  nginx.ingress.kubernetes.io/canary-by-cookie: "user_from_beijing"

Update the strategy definition:

$ kc apply -f canary-ingress-demo4-ab.yml        
ingress.networking.k8s.io/canary configured

Test access:

$ for i in `seq 1 10`; do curl -s -b "user_from_beijing=always" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv


$ for i in `seq 1 10`; do curl -s -b "user_from_beijing=never" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm


$ for i in `seq 1 10`; do curl -s -b "user_from_beijing=blabla" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: production-856d5fb99-rnkjm
Hostname: canary-66cb497b7f-r2vjv
Hostname: canary-66cb497b7f-r2vjv

When the custom cookie value is neither always nor never, routing falls back to the canary-weight strategy; and when canary-weight is not set, all traffic goes to production by default.

annotations:
  # enable the canary mechanism
  nginx.ingress.kubernetes.io/canary: "true"
  # canary-weight deliberately left unset
  # nginx.ingress.kubernetes.io/canary-weight: "50"
  # route based on a cookie
  nginx.ingress.kubernetes.io/canary-by-cookie: "user_from_beijing"

Test access:

$ for i in `seq 1 10`; do curl -s -b "user_from_beijing=blabla" https://mini.yo-yo.fun | grep "Hostname"; done
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm
Hostname: production-856d5fb99-rnkjm

TCP/UDP service proxying

Normally, a TCP or UDP service that needs external access is exposed through NodePort rather than through the ingress controller, but in some scenarios exposing TCP/UDP services via ingress-nginx is useful.

1. Create a TCP service

First, create a Redis service to act as our TCP service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
        - name: data
          emptyDir: {}
      containers:
        - name: redis
          image: redis
          imagePullPolicy: IfNotPresent
          args:
            - '--requirepass 123123'
          ports:
            - containerPort: 6379
          volumeMounts:
            - mountPath: /data
              name: data
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
  labels:
    app: redis
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
    - port: 6379

Create the resources:

$ kc apply -f tcp-redis-service.yml      
deployment.apps/redis created
service/redis created

Test access (via the pod IP, from inside the cluster network):

$ redis-cli -h 10.244.2.11 -a 123123
10.244.2.11:6379> keys *
(empty list or set)

2. Define the port mapping

Define the TCP port mapping in the Helm values file:

# deployment-prod.yml

defaultBackend:  # configure the default backend
  enabled: true
  name: defaultbackend
  image:
    repository: lotusching/ingress-nginx-defaultbackend
    tag: "1.5"

tcp:  # configure TCP services
  36379: "default/redis:6379"
  # 27017: "default/mongo:27017"

Upgrade the current ingress-nginx release:

$ helm upgrade --install ingress-nginx ./ingress-nginx -f ./deployment-prod.yml --namespace ingress-nginx
Release "ingress-nginx" has been upgraded. Happy Helming!

Normally this creates a new ingress-nginx controller pod and deletes the old one once the new pod has started successfully.

Next, fetch the controller's details to check whether the relevant configuration parameters were generated:

$ kc -n ingress-nginx get deploy ingress-nginx-controller -o yaml

# ...
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/ingress-nginx-defaultbackend
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        # note this line: a flag referencing the tcp ConfigMap was generated automatically
        - --tcp-services-configmap=$(POD_NAMESPACE)/ingress-nginx-tcp
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key

List the ConfigMap objects in the ingress-nginx namespace:

$ kc get cm -n ingress-nginx
NAME                       DATA   AGE
ingress-nginx-controller   1      70m
ingress-nginx-tcp          2      55m
kube-root-ca.crt           1      128m

# fetch the detailed ingress-nginx-tcp configuration
$ kc get cm ingress-nginx-tcp -o yaml -n ingress-nginx
apiVersion: v1
# rule: "<external port on the edge node>": <namespace>/<service-name>:<service-port>
data:
  "36379": default/redis:6379
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  creationTimestamp: "2023-04-28T02:15:15Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.7.0
    helm.sh/chart: ingress-nginx-4.6.0
  name: ingress-nginx-tcp
  namespace: ingress-nginx
  resourceVersion: "9437"
  uid: e1b20b92-9dfc-4947-a96c-90300db74e76

OK, next let's test whether the TCP service is reachable through ingress-nginx.

First, find which node the ingress-nginx controller runs on:

$ kc get pods -n ingress-nginx -o wide  
NAME                                            READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
ingress-nginx-controller-5d4ff74cfc-qqc5h       1/1     Running   0          65m   172.16.0.10   k8s-worker02   <none>           <none>
ingress-nginx-defaultbackend-7bd6dbc477-lw72k   1/1     Running   0          80m   10.244.1.23   k8s-worker03   <none>           <none>

Test connectivity to that node's public IP and port:

$ nc -zv -w 1 39.104.28.102 36379
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 39.104.28.102:36379.
Ncat: 0 bytes sent, 0 bytes received in 0.06 seconds.
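The same reachability check can be done programmatically. A minimal sketch follows; the node IP and port in the comment are the example's values, used here as assumptions:

```python
import socket

def tcp_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. tcp_open("39.104.28.102", 36379) against the node from the example above
```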

Access it with the TCP service's own client:

$ redis-cli -h 39.104.28.102 -p 36379 -a 123123
39.104.28.102:36379> keys *
(empty list or set)
39.104.28.102:36379> set key1 v1
OK
39.104.28.102:36379> get key1
"v1"

Fetch the controller's nginx.conf to see how the TCP port forwarding is implemented:

stream {
	lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";
	
	lua_shared_dict tcp_udp_configuration_data 5M;
	
	resolver 10.96.0.10 valid=30s ipv6=off;
	
	init_by_lua_block {
		# ...
	}
	
	init_worker_by_lua_block {
		tcp_udp_balancer.init_worker()
	}
	lua_add_variable $proxy_upstream_name;
	
	log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';
	
	access_log /var/log/nginx/access.log log_stream ;
	error_log  /var/log/nginx/error.log notice;
	
	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder
		balancer_by_lua_block {
			tcp_udp_balancer.balance()
		}
	}
	
	server {
		listen 127.0.0.1:10247;
		access_log off;
		content_by_lua_block {
			tcp_udp_configuration.call()
		}
	}
	
	# TCP services
	server {
		preread_by_lua_block {
			ngx.var.proxy_upstream_name="tcp-default-redis-6379";
		}
		listen                  36379;
		proxy_timeout           600s;
		proxy_next_upstream     on;
		proxy_next_upstream_timeout 600s;
		proxy_next_upstream_tries   3;
		proxy_pass              upstream_balancer;
		
	}
	# UDP services
	# Stream Snippets
}

Nginx service configuration

HTTP global configuration

ingress-nginx ships with sensible defaults and basic tuning already applied, but tuning always depends on the workload: different business scenarios call for different parameters. Let's look at how to adjust the nginx configuration parameters in ingress-nginx.

By default, ingress-nginx stores its configuration items and parameter values in a ConfigMap, as shown below:

$ kc get deploy ingress-nginx-controller -o yaml -n ingress-nginx         

apiVersion: apps/v1
kind: Deployment
metadata:
# ...
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/ingress-nginx-defaultbackend
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        # the ingress-nginx-controller ConfigMap holds the global configuration
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        # ...

Inspect the ingress-nginx-controller ConfigMap:

$ kc get cm ingress-nginx-controller -o yaml -n ingress-nginx
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  creationTimestamp: "2023-04-29T05:37:17Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.7.0
    helm.sh/chart: ingress-nginx-4.6.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
  resourceVersion: "2492"
  uid: aca398a3-d292-405b-afd4-e7ccdba3ab7d

There don't seem to be any nginx-related configuration parameters here yet.

Let's try patching it:

data:
  allow-snippet-annotations: "true"
  keep-alive: "75"
  keep-alive-requests: "1001"
  # enable backend keep-alive: connection reuse improves QPS
  upstream-keepalive-connections: "10000"
  upstream-keepalive-requests: "100"
  upstream-keepalive-timeout: "60"
  disable-ipv6: "true"
  disable-ipv6-dns: "true"

Before applying the update, check the current default values:

$ kc exec -it ingress-nginx-controller-596db7675-kb2sx -n ingress-nginx -- cat /etc/nginx/nginx.conf|grep keepalive 
	keepalive_timeout  75s;
	keepalive_requests 1000;
		# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
		keepalive 320;
		keepalive_time 1h;
		keepalive_timeout  60s;
		keepalive_requests 10000;
		keepalive_timeout 0;

Apply the update:

$ kc -n ingress-nginx patch cm ingress-nginx-controller --patch-file ingress-nginx-controller-patch.yml
configmap/ingress-nginx-controller patched

The ingress-nginx controller automatically detects the ConfigMap update event, regenerates the nginx configuration, and reloads it:

$ kc logs -f ingress-nginx-controller-596db7675-kb2sx --tail 20 -n ingress-nginx
# ...
I0429 06:03:09.192982       7 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"aca398a3-d292-405b-afd4-e7ccdba3ab7d", APIVersion:"v1", ResourceVersion:"4991", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0429 06:03:09.198930       7 controller.go:189] "Configuration changes detected, backend reload required"
I0429 06:03:09.293314       7 controller.go:206] "Backend successfully reloaded"
I0429 06:03:09.294894       7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-596db7675-kb2sx", UID:"e1afee6b-5a1a-4ede-8a9b-622e7551562e", APIVersion:"v1", ResourceVersion:"2934", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration

Fetch the resulting ingress-nginx configuration:

$ kc exec -it ingress-nginx-controller-596db7675-kb2sx -n ingress-nginx -- cat /etc/nginx/nginx.conf|grep keepalive
	keepalive_timeout  60s;
	keepalive_requests 1001;
		# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
		keepalive 10000;
		keepalive_time 1h;
		keepalive_timeout  60s;
		keepalive_requests 100;
		keepalive_timeout 0;

There we go: the global settings in the http block have taken effect. Note that this approach is only suitable for temporary adjustments.
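
Comparing the before/after grep outputs suggests the following mapping from ConfigMap keys to nginx directives (`nginx_directive` is a hypothetical helper name; the mapping is inferred from the diffs above and consistent with the ingress-nginx ConfigMap options):

```shell
# Hypothetical lookup: which nginx directive each ConfigMap key ends up as,
# inferred from the nginx.conf diffs above.
nginx_directive() {
  case "$1" in
    keep-alive)                     echo "http: keepalive_timeout" ;;
    keep-alive-requests)            echo "http: keepalive_requests" ;;
    upstream-keepalive-connections) echo "upstream: keepalive" ;;
    upstream-keepalive-requests)    echo "upstream: keepalive_requests" ;;
    upstream-keepalive-timeout)     echo "upstream: keepalive_timeout" ;;
    *)                              echo "unknown"; return 1 ;;
  esac
}

nginx_directive upstream-keepalive-connections  # -> upstream: keepalive
```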

Since we installed ingress-nginx with Helm, the recommended way is to set these in the values file, as shown below:

# deployment-prod.yaml
controller:
  kind: Deployment
  # number of ingress-nginx controller replicas
  replicaCount: 2
  # at least one replica must remain available
  minAvailable: 1
  # ...
  # global nginx parameters
  config:
    # nginx parameter names use hyphens here instead of underscores
    allow-snippet-annotations: "true"
    client-header-buffer-size: 32k  # note: hyphens, not underscores
    client-max-body-size: 5m
    keep-alive: "66"
    keep-alive-requests: "1006"
    # enable backend keep-alive so connections are reused, improving QPS
    upstream-keepalive-connections: "10000"
    upstream-keepalive-requests: "100"
    upstream-keepalive-timeout: "60"
    disable-ipv6: "true"
    disable-ipv6-dns: "true"
    max-worker-connections: "65535"
    max-worker-open-files: "10240"
    # ...

Upgrade the release:

$ helm upgrade --install ingress-nginx ./ingress-nginx -f ./deployment-prod.yml --namespace ingress-nginx

Wait a moment, then check the configuration:

$ kc exec -it ingress-nginx-controller-596db7675-kb2sx -n ingress-nginx -- cat /etc/nginx/nginx.conf|grep keepalive
	keepalive_timeout  66s;
	keepalive_requests 1006;
		# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
		keepalive 10000;
		keepalive_time 1h;
		keepalive_timeout  60s;
		keepalive_requests 100;
		keepalive_timeout  80s;
		keepalive_timeout 0;

Per-server (virtual host) configuration

Refer to the official documentation:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
            keepalive_timeout  80s;
            add_header test-header test-value;
  # ...
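
For completeness, a minimal full manifest around the snippet above might look like this. This is a sketch: the `my-nginx` Service name and port 80 are taken from the backend seen in the access log later (`default-my-nginx-80`), the host from the curl test, and `ingressClassName: nginx` from the controller's `--ingress-class` argument; adjust for your cluster. Server snippets also require `allow-snippet-annotations: "true"`, which we enabled earlier.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      keepalive_timeout  80s;
      add_header test-header test-value;
spec:
  ingressClassName: nginx
  rules:
  - host: mini.yo-yo.fun
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80
```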

Apply the resource:

$ kc apply -f ingress.yml
ingress.networking.k8s.io/my-nginx configured

Test access:

$ curl -I https://mini.yo-yo.fun        
HTTP/1.1 200 OK
Date: Sat, 29 Apr 2023 06:55:35 GMT
Content-Type: text/html
Content-Length: 615
Connection: keep-alive
Last-Modified: Tue, 28 Dec 2021 15:28:38 GMT
ETag: "61cb2d26-267"
Accept-Ranges: bytes
Strict-Transport-Security: max-age=15724800; includeSubDomains
test-header: test-value

Check the generated configuration:

## start server mini.yo-yo.fun
server {
	server_name mini.yo-yo.fun ;
	listen 80  ;
	listen 443  ssl http2 ;
	set $proxy_upstream_name "-";
	ssl_certificate_by_lua_block {
		certificate.call()
	}
	# Custom code snippet configured for host mini.yo-yo.fun
	keepalive_timeout  80s;
	add_header test-header test-value;
       # ...

Kernel parameter tuning

Besides ingress-nginx's own service parameters, production deployments usually also need kernel parameter tuning to serve traffic well. This can be configured through initContainers:

initContainers:
- command:
  - /bin/sh
  - -c
  - |
    mount -o remount rw /proc/sys
    sysctl -w net.core.somaxconn=65535  # adjust values to your environment
    sysctl -w net.ipv4.tcp_tw_reuse=1
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"
    sysctl -w fs.file-max=1048576
    sysctl -w fs.inotify.max_user_instances=16384
    sysctl -w fs.inotify.max_user_watches=524288
    sysctl -w fs.inotify.max_queued_events=16384
  image: busybox
  imagePullPolicy: IfNotPresent
  name: init-sysctl
  securityContext:
    capabilities:
      add:
      - SYS_ADMIN
      drop:
      - ALL

Since we installed with Helm, set this via extraInitContainers in the values file:

# deployment-prod.yaml
controller:
  # ...
  # kernel parameter settings
  extraInitContainers:
    - name: init-sysctl
      image: busybox
      securityContext:
        capabilities:
          add:
          - SYS_ADMIN
          drop:
          - ALL
      command:
      - /bin/sh
      - -c
      - |
        mount -o remount rw /proc/sys
        sysctl -w net.core.somaxconn=65535  # max backlog for listening sockets
        sysctl -w net.ipv4.tcp_tw_reuse=1  # allow TIME-WAIT sockets to be reused for new TCP connections
        sysctl -w net.ipv4.ip_local_port_range="42768 65535" # local port range for outbound connections; beware of port conflicts
        sysctl -w fs.file-max=1048576  # system-wide file descriptor limit
        sysctl -w fs.inotify.max_user_instances=16384 # per-user limit on inotify instances
        sysctl -w fs.inotify.max_user_watches=524288  # per-instance limit on watched files
        sysctl -w fs.inotify.max_queued_events=16384  # max queued inotify events
  dnsPolicy: ClusterFirstWithHostNet

Upgrade the release:

$ helm upgrade --install ingress-nginx ./ingress-nginx -f ./deployment-prod.yml --namespace ingress-nginx

Wait for the ingress-nginx-controller pod to be recreated:

$ kc get pods -n ingress-nginx -w
# ...
ingress-nginx-controller-596db7675-kb2sx        0/1     Terminating         0          109m
ingress-nginx-controller-596db7675-kb2sx        0/1     Terminating         0          109m
ingress-nginx-controller-596db7675-kb2sx        0/1     Terminating         0          109m
ingress-nginx-controller-544546bb6-8nz8n        0/1     Pending             0          12s
ingress-nginx-controller-544546bb6-8nz8n        0/1     Init:0/1            0          12s
ingress-nginx-controller-544546bb6-8nz8n        0/1     PodInitializing     0          30s
ingress-nginx-controller-544546bb6-8nz8n        0/1     Running             0          31s
ingress-nginx-controller-544546bb6-8nz8n        1/1     Running             0          42s

Verify the kernel parameter settings:

$ kc exec -it ingress-nginx-controller-544546bb6-8nz8n -c controller -n ingress-nginx -- /bin/sh                                              
/etc/nginx $ cat /proc/sys/fs/file-max 
1048576
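
Checking the sysctls one by one gets tedious; a small sketch for batch-checking them. `check_sysctls` is a hypothetical helper: it compares expected values (from the values file above) against actual ones, which in the cluster you would obtain per key via `kc exec <pod> -c controller -n ingress-nginx -- cat /proc/sys/<path>`.

```shell
# Hypothetical helper: reads "key expected actual" triples on stdin and
# reports any sysctl whose live value differs from what the init container set.
check_sysctls() {
  fails=0
  while read -r key expected actual; do
    if [ "$expected" != "$actual" ]; then
      echo "FAIL $key: want $expected, got $actual"
      fails=$((fails + 1))
    fi
  done
  return $fails
}

# Example with two keys, pairing expected values with (here, hard-coded) actuals:
printf '%s\n' \
  "fs.file-max 1048576 1048576" \
  "net.core.somaxconn 65535 65535" | check_sysctls && echo "all sysctls OK"
# -> all sysctls OK
```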

Complete configuration

values.yml

# deployment-prod.yaml
controller:
  kind: Deployment
  # number of ingress-nginx controller replicas
  replicaCount: 2
  # at least one replica must remain available
  minAvailable: 1
  # update strategy
  # combined effect of the two settings below: terminate one pod, create one, wait for it to be Running, then terminate the next, until all pods are updated
  updateStrategy:
    rollingUpdate:
      # do not create pods beyond the configured replica count
      maxSurge: 0
      # allow one pod to be unavailable during the update
      maxUnavailable: 1
  name: controller
  image:
    repository: lotusching/ingress-nginx-controller
    tag: "v1.7.0"
    digest:
  config:
    # nginx parameter names use hyphens here instead of underscores
    allow-snippet-annotations: "true"
    client-header-buffer-size: 32k  # note: hyphens, not underscores
    client-max-body-size: 5m
    keep-alive: "66"
    keep-alive-requests: "1006"
    # enable backend keep-alive so connections are reused, improving QPS
    upstream-keepalive-connections: "10000"
    upstream-keepalive-requests: "100"
    upstream-keepalive-timeout: "60"
    disable-ipv6: "true"
    disable-ipv6-dns: "true"
    max-worker-connections: "65535"
    max-worker-open-files: "10240"

  # kernel parameter settings
  extraInitContainers:
    - name: init-sysctl
      image: busybox
      securityContext:
        capabilities:
          add:
          - SYS_ADMIN
          drop:
          - ALL
      command:
      - /bin/sh
      - -c
      - |
        mount -o remount rw /proc/sys
        sysctl -w net.core.somaxconn=65535  # max backlog for listening sockets
        sysctl -w net.ipv4.tcp_tw_reuse=1  # allow TIME-WAIT sockets to be reused for new TCP connections
        sysctl -w net.ipv4.ip_local_port_range="42768 65535" # local port range for outbound connections; beware of port conflicts
        sysctl -w fs.file-max=1048576  # system-wide file descriptor limit
        sysctl -w fs.inotify.max_user_instances=16384 # per-user limit on inotify instances
        sysctl -w fs.inotify.max_user_watches=524288  # per-instance limit on watched files
        sysctl -w fs.inotify.max_queued_events=16384  # max queued inotify events

  dnsPolicy: ClusterFirstWithHostNet

  hostNetwork: true



  publishService:  # set to false in hostNetwork mode; ingress status is reported using node IP addresses
    enabled: false

  # whether to handle Ingress objects that have neither an ingressClass annotation nor an ingressClassName field
  # setting this to true adds a --watch-ingress-without-class flag to the controller's startup arguments
  watchIngressWithoutClass: false


  tolerations:   # clusters installed by kubeadm taint the master by default; tolerate the taint to schedule there
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    effect: "NoSchedule"

  nodeSelector:   # pin to specific nodes (here: nodes labeled edge="true")
    # kubernetes.io/hostname: k8s-master01
    edge: "true"

  service:  # no Service needs to be created in hostNetwork mode
    enabled: false

  admissionWebhooks: # enabling the admission webhook is strongly recommended
    enabled: true
    createSecretJob:
      resources:
        limits:
          cpu: 10m
          memory: 20Mi
        requests:
          cpu: 10m
          memory: 20Mi
    patchWebhookJob:
      resources:
        limits:
          cpu: 10m
          memory: 20Mi
        requests:
          cpu: 10m
          memory: 20Mi
    patch:
      enabled: true
      image:
        repository: lotusching/ingress-nginx-kube-webhook-certgen
        tag: v1.1.1
        digest:

#tcp:  # expose TCP services
#  36379: "default/redis:6379"

defaultBackend:  # configure the default backend
  enabled: true
  name: defaultbackend
  image:
    repository: lotusching/ingress-nginx-defaultbackend
    tag: "1.5"

# tcp:  # expose TCP services
#  27017: "default/mongo:27017"
#  36379: "default/redis:6379"

Common issues

GFW (blocked image registry)

Because the default image registry is blocked, we use mirrored images pushed to a personal Docker Hub repository

ingress-nginx mainly uses three images:

  • defaultbackend: serves as the fallback backend when a request's path or host matches no configured backend
  • ingress-nginx/kube-webhook-certgen: admission-control checks, preventing misconfigured Ingress objects and the like from breaking the controller's reload
  • ingress-nginx/controller: the outermost ingress-nginx controller, roughly the nginx server role in a traditional architecture; it forwards requests to backend applications by host and path
# nerdctl alias -> ndc
$ ndc pull registry.k8s.io/defaultbackend-amd64:1.5
$ ndc pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1
$ ndc pull registry.k8s.io/ingress-nginx/controller:v1.7.0

$ ndc tag registry.k8s.io/defaultbackend-amd64:1.5 lotusching/ingress-nginx-defaultbackend:1.5
$ ndc tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1 lotusching/ingress-nginx-kube-webhook-certgen:v1.1.1
$ ndc tag registry.k8s.io/ingress-nginx/controller:v1.7.0 lotusching/ingress-nginx-controller:v1.7.0

# create the matching repositories in the Docker Hub web UI first; pushing without them will fail

$ ndc push lotusching/ingress-nginx-defaultbackend:1.5
$ ndc push lotusching/ingress-nginx-kube-webhook-certgen:v1.1.1
$ ndc push lotusching/ingress-nginx-controller:v1.7.0
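
The tag names above follow a pattern: flatten the registry path with hyphens and prefix a Docker Hub username, since Docker Hub does not allow nested repository paths. A sketch of that mapping (`mirror_ref` is a hypothetical helper; note the defaultbackend image above was additionally renamed by hand, so it does not fit the mechanical rule):

```shell
# Hypothetical helper: derive a flat Docker Hub mirror reference from a
# registry.k8s.io image reference (Docker Hub forbids nested repo paths).
mirror_ref() {
  src="$1"; user="$2"
  path="${src#registry.k8s.io/}"   # drop the registry prefix
  name="${path%%:*}"               # repository path without the tag
  tag="${path##*:}"                # the tag
  echo "${user}/$(echo "$name" | tr '/' '-'):${tag}"
}

mirror_ref registry.k8s.io/ingress-nginx/controller:v1.7.0 lotusching
# -> lotusching/ingress-nginx-controller:v1.7.0
```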

TCP/UDP rules not taking effect

Another small pitfall with ingress-nginx: sometimes helm upgrade succeeds but the corresponding port-forwarding rules never take effect, and connections to the port are still refused. Troubleshoot along these lines:

  1. kc get pods -n ingress-nginx to check whether the ingress-nginx controller pods were recreated; normally, when the controller's startup arguments change, the pods must be rebuilt

  2. Running the command above, you may find two ingress-nginx-controller-.* pods, with the newer one stuck in Pending. This usually means your nodeSelector is misconfigured. Since we use hostNetwork mode, running multiple replicas (updates create new pods to replace old ones) requires multiple nodes able to host ingress-nginx pods. If the nodeSelector matches only one node, Kubernetes cannot find a suitable node for the new ingress-nginx-controller-.* pod during the update, and the default strategy does not allow deleting the old pod first, so the new pod stays Pending indefinitely

    • Solution 1: keep the ingress-nginx-controller replica count lower than the number of nodes matched by the nodeSelector

      ① Label the qualifying nodes

      $ kc label node k8s-worker02 edge="true"
      $ kc label node k8s-worker03 edge="true"

      ② Adjust the nodeSelector match rule

      nodeSelector:   # pin to specific nodes (here: nodes labeled edge="true")
        # kubernetes.io/hostname: k8s-master01
        edge: "true"

      ③ Set the ingress-nginx controller replica count

      controller:
        kind: Deployment
        replicaCount: 1
    • Solution 2: adjust the Deployment's rolling-update strategy

      controller:
        kind: Deployment
        # number of ingress-nginx controller replicas
        replicaCount: 2
        # at least one replica must remain available
        minAvailable: 1
        # update strategy
        # combined effect of the two settings below: terminate one pod, create one, wait for it to be Running, then terminate the next, until all pods are updated
        updateStrategy:
          rollingUpdate:
            # do not create pods beyond the configured replica count
            maxSurge: 0
            # allow one pod to be unavailable during the update
            maxUnavailable: 1

      The watch output shown earlier illustrates the overall rolling-update process
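
The pod-count bounds during such an update follow directly from the Deployment fields: at most replicas + maxSurge pods exist at any time, and at least replicas - maxUnavailable stay available. A quick check for the values above:

```shell
# Pod-count bounds during a Deployment rolling update:
#   max pods in existence = replicas + maxSurge
#   min pods available    = replicas - maxUnavailable
rollout_bounds() {
  replicas=$1; max_surge=$2; max_unavailable=$3
  echo "max_pods=$((replicas + max_surge)) min_available=$((replicas - max_unavailable))"
}

rollout_bounds 2 0 1
# -> max_pods=2 min_available=1  (one pod is replaced at a time)
```

With maxSurge=0 no extra pod is created before an old one is terminated, so the node freed by the old pod can host its hostNetwork replacement.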

Log format and duplication

When ingress-nginx runs as multiple replicas, each instance writes request logs for the traffic it receives

# /etc/hosts
39.104.14.117 mini.yo-yo.

$ kc get pods -n ingress-nginx -o wide                                          
NAME                                            READY   STATUS    RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
ingress-nginx-controller-7fb8fc7fb8-f4dkz       1/1     Running   0          20m    172.16.0.9    k8s-worker03   <none>           <none>
ingress-nginx-controller-7fb8fc7fb8-hz988       1/1     Running   0          20m    172.16.0.10   k8s-worker02   <none>           <none>

curl requests:

$ curl https://mini.yo-yo.fun
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
# ...

$ redis-cli -h 39.104.14.117 -p 36379 -a 123123
39.104.14.117:36379> keys *
1) "key1"

Console logs:

# terminal session A
$ kc logs -f ingress-nginx-controller-7fb8fc7fb8-f4dkz --tail 0 -n ingress-nginx 
47.115.121.119 - - [28/Apr/2023:04:34:06 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" 78 0.002 [default-my-nginx-80] [] 10.244.2.20:80 615 0.002 200 086a4571689f1e88144355cc9b302dbb
[47.115.121.119] [28/Apr/2023:04:36:47 +0000] TCP 200 20343 64 12.155


# terminal session B
$ kc logs -f ingress-nginx-controller-7fb8fc7fb8-f4dkz --tail 0 -n ingress-nginx
47.115.121.119 - - [28/Apr/2023:04:34:06 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" 78 0.002 [default-my-nginx-80] [] 10.244.2.20:80 615 0.002 200 086a4571689f1e88144355cc9b302dbb
[47.115.121.119] [28/Apr/2023:04:36:47 +0000] TCP 200 20343 64 12.155

Note that log collection therefore needs deduplication, and watch out for the differing TCP and HTTP log formats
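
As a sketch of the deduplication: the HTTP access-log lines above end with a request id (e.g. 086a4571...), which can serve as a dedupe key during collection. The TCP proxy lines have a different format without such an id, so they need separate handling. `dedupe_by_request_id` is a hypothetical helper name:

```shell
# Drop duplicate HTTP access-log lines collected from multiple sources,
# keyed on the trailing request-id field.
dedupe_by_request_id() {
  awk '!seen[$NF]++'
}

printf '%s\n' \
  'GET / 200 086a4571689f1e88144355cc9b302dbb' \
  'GET / 200 086a4571689f1e88144355cc9b302dbb' \
  'GET /x 200 1111aaaa2222bbbb3333cccc4444dddd' | dedupe_by_request_id
# the duplicated first line is emitted only once
```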

Author: Da
License: unless otherwise stated, all articles on this blog are released under CC BY 4.0. Please credit the source, Da, when reposting!