Exposing Kubernetes cluster services with the Nginx Ingress Controller

  • 1. Ingress controller
    • 1.1 The role of the ingress controller
    • 1.2 Types of ingress controllers
      • 1.2.1 Kubernetes Ingress Controller
      • 1.2.2 NGINX Ingress Controller
      • 1.2.3 Kong Ingress
      • 1.2.4 Traefik
      • 1.2.5 HAProxy Ingress
      • 1.2.6 Voyager
      • 1.2.7 Contour
      • 1.2.8 Istio Ingress
      • 1.2.9 Ambassador
      • 1.2.10 Gloo
      • 1.2.11 Skipper
  • 2. nginx ingress controller
    • 2.1 nginx ingress controller location
    • 2.2 nginx ingress controller deployment
      • 2.2.1 Download and modify the configuration file
    • 2.3 Ingress object application case
      • 2.3.1 ingress-http case
        • 2.3.1.1 Create a deployment controller type application
        • 2.3.1.2 Create service
        • 2.3.1.3 Create an ingress object
        • 2.3.1.4 Simulate client access
      • 2.3.2 ingress-http case extension
        • 2.3.2.1 Create the first application
        • 2.3.2.2 Create a second application
        • 2.3.2.3 Create an ingress object
        • 2.3.2.4 Simulate client access
      • 2.3.3 ingress-https case
        • 2.3.3.1 Create a self-signed certificate
        • 2.3.3.2 Create a certificate as a secret
        • 2.3.3.3 Arranging YAML and creating
        • 2.3.3.4 Simulate client access
      • 2.3.4 ingress + nodeport service

1. ingress controller

1.1 The role of the ingress controller

The ingress controller acts as a proxy so that users outside the Kubernetes cluster can access pods running inside the cluster.

  • Provides a cluster-wide access proxy
  • Access path (see the minimal sketch below)
    • user -> ingress controller -> service -> pod
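A minimal sketch of the manifest that expresses this chain is shown below; the host, name, and service are hypothetical placeholders, and complete working examples follow in section 2.3.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical name
spec:
  rules:
  - host: www.example.com          # requests for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service  # ...are proxied to this Service, which selects the pods
            port:
              number: 80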

1.2 Types of ingress controllers

1.2.1 Kubernetes Ingress Controller

  • Reference link: http://github.com/kubernetes/ingress-nginx
  • Implementation: Go/Lua (nginx is written in C)
  • License: Apache 2.0
  • The “official” controller for Kubernetes (we call it official to distinguish it from the NGINX corporate controller). This is a community-developed controller based on the nginx web server and supplemented with a set of Lua plugins for additional functionality.
  • Due to the popularity of NGINX and the fact that it requires less modification to use it as a controller, it is probably the easiest and most straightforward choice for the average K8s engineer.

1.2.2 NGINX Ingress Controller

  • Reference link: http://github.com/nginxinc/kubernetes-ingress
  • Implementation: Go
  • License: Apache 2.0
  • This is the official product developed by NGINX Corporation, it also has a commercial version based on NGINX Plus. NGINX’s controller has high stability, continuous backward compatibility, and no third-party modules.
  • Due to the elimination of the Lua code, it guarantees a higher speed compared to the official controller, but is therefore more limited. In comparison, its paid version has a wider range of additional features, such as real-time metrics, JWT validation, active health checks, etc.
  • The important advantage of NGINX Ingress is the comprehensive support for TCP/UDP traffic, and the main disadvantage is the lack of traffic distribution function.

1.2.3 Kong Ingress

  • Reference link: http://github.com/Kong/kubernetes-ingress-controller
  • Implementation: Go
  • License: Apache 2.0
  • Kong Ingress is developed by Kong Inc and has two versions: commercial and free. It is built on top of NGINX and adds Lua modules that extend its functionality.
  • Initially, Kong Ingress was used primarily as an API gateway for processing and routing API requests. Now it is a full-fledged ingress controller; its main advantage is the large number of add-on modules and plugins (including third-party ones) that are easy to install and configure. It was the first controller to offer so many additional functions, and its built-in features also open up many possibilities. Kong Ingress is configured using CRDs.
  • An important feature of Kong Ingress is that it runs in only one environment (it does not support cross-namespace operation). This is a somewhat contentious point: some see it as a drawback, since instances must be spawned for each environment; others see it as a feature, since it provides a higher level of isolation: the impact of a controller failure is limited to the environment in which it runs.

1.2.4 Traefik

  • Reference link: http://github.com/containous/traefik
  • Implementation: Go
  • License: MIT
  • Originally, this proxy was created to route requests for microservices and their dynamic environments, so it has many useful features: continuous configuration updates (no restarts required), support for multiple load-balancing algorithms, a web UI, metrics export, support for various protocols and services, a REST API, canary releases, and more.
  • Out-of-the-box support for Let’s Encrypt is another nice feature. Its main drawback is also obvious: to run the controller in high availability, you must install and connect its key-value store.
  • Traefik v2.0, released in September 2019, added many nice new features such as TCP/SSL with SNI, canary deployments, traffic mirroring/shadowing, and an improved web UI, but some features (such as WAF support) are still at the planning stage.
  • Alongside the new version, a service mesh built on top of Traefik (Maesh, later renamed Traefik Mesh) was also released; it controls and monitors access to services inside Kubernetes.

1.2.5 HAProxy Ingress

  • Reference link: http://github.com/jcmoraisjr/haproxy-ingress
  • Implementation: Go (HAProxy is written in C)
  • License: Apache 2.0
  • HAProxy is a well-known proxy server and load balancer. As part of a Kubernetes cluster, it provides “soft” configuration updates (no traffic loss), DNS-based service discovery, and dynamic configuration via an API. HAProxy also supports fully replacing the configuration file template (via a ConfigMap) and using Sprig functions in it.
  • Typically, engineers focus on high speed, optimization, and efficient use of resources, and one of HAProxy’s advantages is its support for a large number of load-balancing algorithms. It is worth mentioning that v2.0, released in June 2019, added many new features, and the upcoming v2.1 is expected to bring more (including OpenTracing support).

1.2.6 Voyager

  • Reference link: http://github.com/appscode/voyager
  • Implementation: Go
  • License: Apache 2.0
  • Voyager is based on HAProxy and is offered as a general solution to a large number of providers. Its most representative functions include traffic load balancing on L7 and L4, among which TCP L4 traffic load balancing can be regarded as one of the most critical functions of this solution.
  • Although Voyager rolled out full support for the HTTP/2 and gRPC protocols in v9.0.0 in early 2020, certificate management (Let’s Encrypt certificates) remains its most prominent integration feature.

1.2.7 Contour

  • Reference link: http://github.com/heptio/contour
  • Implementation: Go
  • License: Apache 2.0
  • Contour is based on Envoy. Its most distinctive feature is that Ingress resources can be managed through a CRD (IngressRoute). For organizations where many teams share one cluster at the same time, this helps protect traffic in neighboring environments and shields them from changes to Ingress resources.
  • It also provides an extended set of load-balancing features (mirroring, automatic retries, request rate limiting, etc.), as well as detailed traffic and failure monitoring. Its lack of support for sticky sessions may be a serious flaw for some engineers.

1.2.8 Istio Ingress

  • Reference link: http://istio.io/docs/tasks/traffic-management/ingress
  • Implementation: Go
  • License: Apache 2.0
  • Istio, a joint development project of IBM, Google, and Lyft, is a comprehensive service mesh solution: it not only manages all incoming external traffic (as an ingress controller) but also controls all traffic inside the cluster.
  • Istio uses Envoy as a sidecar proxy for each service. Essentially, it is a large processor that can do just about anything, with maximum control, scalability, security, and transparency at its core.
  • With Istio Ingress, you can optimize traffic routing, access authorization between services, balancing, monitoring, canary releases, and more.

1.2.9 Ambassador

  • Reference link: http://github.com/datawire/ambassador
  • Implementation: Python
  • License: Apache 2.0
  • Ambassador, also an Envoy-based solution, is available in both free and commercial versions.
  • Ambassador is described as a “Kubernetes-native API microservices gateway”. It is tightly integrated with K8s primitives, has the feature set you would expect from an ingress controller, and also works together with various service mesh solutions such as Linkerd and Istio.
  • Incidentally, the Ambassador blog recently published the results of a benchmark comparing the base performance of Envoy, HAProxy, and NGINX.

1.2.10 Gloo

  • Reference link: http://github.com/solo-io/gloo
  • Implementation: Go
  • License: Apache 2.0
  • Gloo is a new piece of software built on top of Envoy (released March 2018), also known as a “function gateway” due to its author’s insistence that “gateways should build APIs out of functions, not services”. Its “function-level routing” means that it can route traffic for a mix of applications whose backend implementations are microservices, serverless functions, and legacy applications.
  • Due to its pluggable architecture, Gloo provides most of the features that engineers expect, but some of these features are only available in its commercial version (Gloo Enterprise).

1.2.11 Skipper

  • Reference link: http://github.com/zalando/skipper
  • Implementation: Go
  • License: Apache 2.0
  • Skipper is an HTTP router and reverse proxy, so it does not support various protocols. Technically, it uses the Endpoints API (rather than Kubernetes Services) to route traffic to pods. Its strength lies in the advanced HTTP routing capabilities provided by its rich set of filters, through which engineers can create, update and delete all HTTP data.
  • Skipper’s routing rules can be updated without downtime. As stated by its author, Skipper works well with other solutions, such as AWS ELB.

2. nginx ingress controller

2.1 nginx ingress controller location

  • Reference link: https://www.nginx.com/products/nginx/kubernetes-ingress-controller

2.2 nginx ingress controller deployment

  • Project address: https://github.com/kubernetes/ingress-nginx

2.2.1 Download and modify the configuration file

[root@k8s-master1 ~]# curl -k https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml -o deploy.yaml

Line 339 of deploy.yaml sets the Service type of the controller:

339     type: NodePort

Change it to type: LoadBalancer.
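If you prefer not to rely on the line number, a hedged shortcut is a targeted sed, assuming type: NodePort appears only once in this release of deploy.yaml (in the controller Service):

[root@k8s-master1 ~]# sed -i 's/type: NodePort/type: LoadBalancer/' deploy.yaml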

[root@k8s-master1 ~]# kubectl apply -f deploy.yaml

2.3 Ingress object application case

2.3.1 ingress-http case

Name-Based Load Balancing

2.3.1.1 Create a deployment controller type application

[root@k8s-master1 ~]# vim nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: c1
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent

Apply YAML

[root@k8s-master1 ~]# kubectl apply -f nginx.yml
deployment.apps/nginx created

Verify pods

[root@k8s-master1 ~]# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-79654d7b8-nhxpm 1/1 Running 0 12s
nginx-79654d7b8-tp8wg 1/1 Running 0 13s
nginx-ingress-controller-77db54fc46-kwwkt 1/1 Running 0 11m

2.3.1.2 Create service

[root@k8s-master1 ~]# vim nginx-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: ingress-nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

Apply YAML

[root@k8s-master1 ~]# kubectl apply -f nginx-service.yml
service/nginx-service created

Verify service

[root@k8s-master1 ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service ClusterIP 10.2.115.144 <none> 80/TCP 5s
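Optionally, check that the Service has picked up both pods as endpoints (the addresses will differ in your cluster):

[root@k8s-master1 ~]# kubectl get endpoints nginx-service -n ingress-nginx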

2.3.1.3 Create an ingress object

[root@k8s-master1 ~]# vim ingress-nginx.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx #custom ingress name
  namespace: ingress-nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.test.com # custom domain name
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-service # corresponds to the service name created above
            port:
              number: 80
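Note: the kubernetes.io/ingress.class annotation is deprecated on newer Kubernetes releases; the same rules can reference the controller through spec.ingressClassName instead. A hedged fragment, assuming the controller registered an IngressClass named nginx:

spec:
  ingressClassName: nginx   # replaces the kubernetes.io/ingress.class annotation
  rules:
  ...                       # same rules as above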

Apply YAML

[root@k8s-master1 ~]# kubectl apply -f ingress-nginx.yaml
ingress.networking.k8s.io/ingress-nginx created

Verify ingress

[root@k8s-master1 ~]# kubectl get ingress -n ingress-nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nginx <none> www.test.com 192.168.10.12 80 113s

View detailed ingress information

[root@k8s-master1 ~]# kubectl describe ingress ingress-nginx -n ingress-nginx
Name: ingress-nginx
Namespace: ingress-nginx
Address: 192.168.10.12
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host Path Backends
  ---- ---- --------
  www.test.com
                   /   nginx-service:80 (10.244.159.160:80,10.244.194.110:80)
Annotations: kubernetes.io/ingress.class: nginx
Events:
  Type Reason Age From Message
  ---- ------- ---- ---- -------
  Normal Sync 2m (x2 over 2m56s) nginx-ingress-controller Scheduled for sync
[root@k8s-master1 ~]# kubectl get pods -o wide -n ingress-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

nginx-646d5c7b67-mpw9r 1/1 Running 0 4m15s 10.244.194.110 k8s-worker1 <none> <none>
nginx-646d5c7b67-v99gz 1/1 Running 0 4m15s 10.244.159.160 k8s-master1 <none> <none>
You can see that the IPs of the two pods match exactly the backend endpoints listed for the ingress host above.

Next, confirm the externally reachable address of the nginx-ingress-controller (192.168.10.91 in this environment).

2.3.1.4 Simulate client access

  1. Confirm the external IP of the ingress-nginx-controller service; in the output of the following command it is 192.168.10.91
[root@k8s-master1 ~]# kubectl get svc -n ingress-nginx |grep ingress
ingress-nginx-controller LoadBalancer 10.96.183.188 192.168.10.91 80:32369/TCP,443:31775/TCP 11m
ingress-nginx-controller-admission ClusterIP 10.96.212.14 <none> 443/TCP 11m
  2. On any host outside the cluster, add the domain name and IP address mapping (simulating public network DNS)
[root@otherhost ~]# vim /etc/hosts

192.168.10.91 www.test.com
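If editing /etc/hosts is not convenient, curl can supply the same mapping for a single request with --resolve:

[root@otherhost ~]# curl --resolve www.test.com:80:192.168.10.91 http://www.test.com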
  3. Prepare a web home page in the container running in each pod
[root@k8s-master1 ~]# kubectl get pods -n ingress-nginx
nginx-646d5c7b67-mpw9r 1/1 Running 0 8m34s
nginx-646d5c7b67-v99gz 1/1 Running 0 8m34s

[root@k8s-master1 ~]# kubectl exec -it nginx-646d5c7b67-mpw9r -n ingress-nginx -- /bin/sh
/ # echo "ingress web1" > /usr/share/nginx/html/index.html
/ # exit

[root@k8s-master1 ~]# kubectl exec -it nginx-646d5c7b67-v99gz -n ingress-nginx -- /bin/sh
/ # echo "ingress web2" > /usr/share/nginx/html/index.html
/ # exit
  4. Access from the client and check the results
[root@test ~]# curl www.test.com
ingress web1
[root@test ~]# curl www.test.com
ingress web2
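Because both pods back the same Service, repeated requests alternate between the two pages; a short loop makes the distribution visible (the exact order depends on the controller's load-balancing algorithm):

[root@test ~]# for i in 1 2 3 4; do curl -s www.test.com; done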

2.3.2 ingress-http case extension

URI-based load balancing

2.3.2.1 Create the first application

[root@k8s-master1 ~]# vim nginx-uri-1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-uri-1
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-uri-1
  template:
    metadata:
      labels:
        app: nginx-uri-1
    spec:
      containers:
      - name: c1
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
[root@k8s-master1 ~]# vim nginx-service-uri-1.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-uri-1
  namespace: ingress-nginx
  labels:
    app: nginx-uri-1
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx-uri-1
[root@k8s-master1 ~]# kubectl apply -f nginx-uri-1.yml
[root@k8s-master1 ~]# kubectl apply -f nginx-service-uri-1.yml

2.3.2.2 Create a second application

[root@k8s-master1 ~]# vim nginx-uri-2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-uri-2
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-uri-2
  template:
    metadata:
      labels:
        app: nginx-uri-2
    spec:
      containers:
      - name: c1
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
[root@k8s-master1 ~]# vim nginx-service-uri-2.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-uri-2
  namespace: ingress-nginx
  labels:
    app: nginx-uri-2
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx-uri-2
[root@k8s-master1 ~]# kubectl apply -f nginx-uri-2.yml
[root@k8s-master1 ~]# kubectl apply -f nginx-service-uri-2.yml
[root@k8s-master1 ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service-uri-1 ClusterIP 10.96.171.135 <none> 80/TCP 7m24s
nginx-service-uri-2 ClusterIP 10.96.234.164 <none> 80/TCP 4m11s

2.3.2.3 Create an ingress object

[root@k8s-master1 ~]# vim ingress-nginx-uri.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-uri
  namespace: ingress-nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.testuri.com
    http:
      paths:
      - path: /svc1
        pathType: Prefix
        backend:
          service:
            name: nginx-service-uri-1
            port:
              number: 80
      - path: /svc2
        pathType: Prefix
        backend:
          service:
            name: nginx-service-uri-2
            port:
              number: 80
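In this example the pods serve real content under /svc1 and /svc2 (prepared in 2.3.2.4 below), so the request path can be passed through unchanged. If the backends instead served their content at /, the usual approach with this controller is the rewrite-target annotation; a hedged sketch for one of the paths:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: www.testuri.com
    http:
      paths:
      - path: /svc1(/|$)(.*)             # capture group $2 becomes the upstream path
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx-service-uri-1
            port:
              number: 80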

Apply YAML

[root@k8s-master1 ~]# kubectl apply -f ingress-nginx-uri.yaml
ingress.networking.k8s.io/ingress-uri created

Verify ingress

[root@k8s-master1 ~]# kubectl get ingress -n ingress-nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-uri <none> www.testuri.com 80 13s

View detailed ingress information

[root@k8s-master1 ~]# kubectl describe ingress ingress-uri -n ingress-nginx
Name: ingress-uri
Namespace: ingress-nginx
Address: 192.168.10.12
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host Path Backends
  ---- ---- --------
  www.testuri.com
                      /svc1 nginx-service-uri-1:80 (10.244.159.158:80,10.244.194.111:80)
                      /svc2 nginx-service-uri-2:80 (10.244.159.159:80,10.244.194.112:80)
Annotations: kubernetes.io/ingress.class: nginx
Events:
  Type Reason Age From Message
  ---- ------- ---- ---- -------
  Normal Sync 4s (x2 over 32s) nginx-ingress-controller Scheduled for sync
[root@k8s-master1 ~]# kubectl get pods -o wide -n ingress-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

nginx-uri-1-7d7d75f86-dws96 1/1 Running 0 14m 10.244.159.158 k8s-master1 <none> <none>
nginx-uri-1-7d7d75f86-s8js4 1/1 Running 0 14m 10.244.194.111 k8s-worker1 <none> <none>
nginx-uri-2-7cdf7f89b7-8s4mg 1/1 Running 0 10m 10.244.194.112 k8s-worker1 <none> <none>
nginx-uri-2-7cdf7f89b7-gj8x6 1/1 Running 0 10m 10.244.159.159 k8s-master1 <none> <none>

Next, confirm the externally reachable address of the nginx-ingress-controller (192.168.10.91 in this environment).

2.3.2.4 Simulate client access

  1. Confirm the external IP of the ingress-nginx-controller service; in the output of the following command it is 192.168.10.91
[root@k8s-master1 ~]# kubectl get svc -n ingress-nginx |grep ingress
ingress-nginx-controller LoadBalancer 10.96.183.188 192.168.10.91 80:32369/TCP,443:31775/TCP 11m
ingress-nginx-controller-admission ClusterIP 10.96.212.14 <none> 443/TCP 11m
  2. On any host outside the cluster, add the domain name and IP address mapping (simulating public network DNS)
[root@otherhost ~]# vim /etc/hosts
192.168.10.91 www.testuri.com
  3. Prepare web pages in the containers running in the pods
[root@k8s-master1 ~]# kubectl exec -it nginx-uri-1-7d7d75f86-dws96 -n ingress-nginx -- /bin/sh
/ # mkdir /usr/share/nginx/html/svc1
/ # echo "sssvc1" > /usr/share/nginx/html/svc1/index.html
/ # exit
[root@k8s-master1 ~]# kubectl exec -it nginx-uri-1-7d7d75f86-s8js4 -n ingress-nginx -- /bin/sh
/ # mkdir /usr/share/nginx/html/svc1
/ # echo "sssvc1" > /usr/share/nginx/html/svc1/index.html
/ # exit
[root@k8s-master1 ~]# kubectl exec -it nginx-uri-2-7cdf7f89b7-8s4mg -n ingress-nginx -- /bin/sh
/ # mkdir /usr/share/nginx/html/svc2
/ # echo "sssvc2" > /usr/share/nginx/html/svc1/index.html
/ # exit
[root@k8s-master1 ~]# kubectl exec -it nginx-uri-2-7cdf7f89b7-gj8x6 -n ingress-nginx -- /bin/sh
/ # mkdir /usr/share/nginx/html/svc2
/ # echo "sssvc2" > /usr/share/nginx/html/svc1/index.html
/ # exit
  4. Access from the client and check the results
[root@otherhost ~]# curl www.testuri.com/svc1/index.html
sssvc1
[root@otherhost ~]# curl www.testuri.com/svc2/index.html
sssvc2

2.3.3 ingress-https case

2.3.3.1 Creating a self-signed certificate

[root@k8s-master1 ~]# mkdir ingress-https
[root@k8s-master1 ~]# cd ingress-https/
[root@k8s-master1 ingress-https]# openssl genrsa -out nginx.key 2048
[root@k8s-master1 ingress-https]# openssl req -new -x509 -key nginx.key -out nginx.pem -days 365
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:GD
Locality Name (eg, city) [Default City]: SZ
Organization Name (eg, company) [Default Company Ltd]:IT
Organizational Unit Name (eg, section) []: it
Common Name (eg, your name or your server's hostname) []:test123
Email Address []:[email protected]
[root@k8s-master1 ingress-https]# ls
nginx.key nginx.pem
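The interactive prompts can also be skipped by passing the subject on the command line; a hedged one-liner that produces an equivalent self-signed pair (setting CN to the domain used later in this example):

[root@k8s-master1 ingress-https]# openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout nginx.key -out nginx.pem -days 365 -subj "/CN=www.test123.com"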

2.3.3.2 Create a certificate as a secret

[root@k8s-master1 ingress-https]# kubectl create secret tls nginx-tls-secret --cert=nginx.pem --key=nginx.key -n ingress-nginx
secret/nginx-tls-secret created
[root@k8s-master1 ingress-https]# kubectl get secrets -n ingress-nginx |grep nginx-tls-secret
nginx-tls-secret kubernetes.io/tls 2 38s
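To double-check what the secret contains (it should hold the tls.crt and tls.key data keys):

[root@k8s-master1 ingress-https]# kubectl describe secret nginx-tls-secret -n ingress-nginx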

2.3.3.3 Arranging YAML and creating

[root@k8s-master1 ingress-https]# vim ingress-https.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: c1
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service2
  namespace: ingress-nginx
  labels:
    app: nginx2
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx2
  namespace: ingress-nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - www.test123.com # domain name
    secretName: nginx-tls-secret # Call the previously created secret
  rules:
  - host: www.test123.com # domain name
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-service2 # Corresponding service name
            port:
              number: 80
[root@k8s-master1 ingress-https]# kubectl apply -f ingress-https.yml
deployment.apps/nginx2 created
service/nginx-service2 created
ingress.networking.k8s.io/ingress-nginx2 created

Verify ingress

[root@k8s-master1 ~]# kubectl get ingress -n ingress-nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nginx2 <none> www.test123.com 192.168.10.12 80, 443 2m14s

2.3.3.4 Simulate client access

[root@otherhost ~]# vim /etc/hosts

192.168.10.91 www.test123.com    # add this line to simulate DNS

[root@otherhost ~]# firefox https://www.test123.com &
[1] 10892
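The same check can be done from the command line; -k makes curl accept the self-signed certificate and --resolve supplies the name-to-IP mapping without touching /etc/hosts:

[root@otherhost ~]# curl -k --resolve www.test123.com:443:192.168.10.91 https://www.test123.com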
Note on trusted certificates:
If the services in the Kubernetes cluster must be trusted when accessed from the Internet, use an SSL certificate issued by a public CA instead of a self-signed one.

2.3.4 ingress + nodeport service

[root@k8s-master1 ~]# vim ingress-nodeport.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx3
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx3
  template:
    metadata:
      labels:
        app: nginx3
    spec:
      containers:
      - name: c1
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service3
  namespace: ingress-nginx
  labels:
    app: nginx3
spec:
  type: NodePort # NodePort type service
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx3
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx3
  namespace: ingress-nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.test3.com # domain name
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-service3 # Corresponding service name
            port:
              number: 80
[root@k8s-master1 ~]# kubectl apply -f ingress-nodeport.yml
[root@k8s-master1 ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service ClusterIP 10.2.115.144 <none> 80/TCP 22h
nginx-service2 ClusterIP 10.2.237.70 <none> 80/TCP,443/TCP 22h
nginx-service3 NodePort 10.2.75.250 <none> 80:26765/TCP 3m51s
As shown above, nginx-service3 is a NodePort-type service.
[root@otherhost ~]# vim /etc/hosts
192.168.10.91 www.test3.com    # add this line to simulate DNS
[root@otherhost ~]# curl www.test3.com
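Because nginx-service3 is a NodePort service, the application is also reachable directly on a node's IP at the allocated port, bypassing the ingress entirely. A hedged example, assuming 192.168.10.12 (the address shown by kubectl get ingress earlier) is a reachable node IP and using the node port 26765 allocated above:

[root@otherhost ~]# curl http://192.168.10.12:26765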