k8s deployment kong (third edition)


The new version of Kong runs in DB-less mode: all configuration is declared as Kubernetes resources, which the cluster in turn stores in etcd.
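In practice this means Kong entities (plugins, consumers, and so on) are managed as CRDs. After the install in step 1, you can list them with something like:

kubectl get crd | grep konghq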

1. Install kong

Create the namespace:

kubectl create namespace kong

Deploy:

wget https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/v2.9.3/deploy/single/all-in-one-dbless.yaml

First change the kong-proxy service type to NodePort (you could also use LoadBalancer). Edit all-in-one-dbless.yaml and find the kong-proxy Service section:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 8000
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: proxy-kong
  # type: LoadBalancer
  type: NodePort

Next, add port 9080. The downloaded manifest only supports HTTP and HTTPS access to services by default; we need to add a port for gRPC access.

Find the proxy-kong Deployment and make the following changes:

      containers:
      - env:
        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:8000 reuseport backlog=16384, 0.0.0.0:9080 http2 reuseport backlog=16384, 0.0.0.0:8443 http2 ssl reuseport backlog=16384
        - name: KONG_PORT_MAPS
          value: 80:8000, 9080:9080, 443:8443

The key additions are 0.0.0.0:9080 http2 reuseport backlog=16384 in KONG_PROXY_LISTEN and 9080:9080 in KONG_PORT_MAPS.
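After applying, one possible way to confirm the Deployment picked up the new listener is to read the env var back (an illustrative kubectl query):

kubectl -n kong get deploy proxy-kong -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="KONG_PROXY_LISTEN")].value}'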

Then find the ports section of the proxy container and make the following changes:

        name: proxy
        ports:
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-ssl
          protocol: TCP
        - containerPort: 8100
          name: metrics
          protocol: TCP
        - containerPort: 9080
          name: grpc
          protocol: TCP

The addition here is the last three lines (the 9080/grpc container port).

Then find the kong-proxy Service and make the following changes:

spec:
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 8000
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: grpc
    port: 9080
    protocol: TCP
    targetPort: 9080

Likewise, the addition is the last four lines (the grpc port). After these changes, gRPC calls are supported (relevant if the development side uses Kong as the gRPC gateway; some teams skip the gateway and go through a registry such as Consul or etcd instead).
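For the gRPC route itself, Kong usually also needs to be told the backend speaks gRPC. A sketch of what the Ingress side could look like (the host and the exact annotation set are illustrative; check the Kong annotations reference for your version):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ing
  namespace: dev
  annotations:
    konghq.com/protocols: grpc,grpcs
spec:
  ingressClassName: kong
  rules:
  - host: grpc.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: customer-service
            port:
              number: 21008

The backend Service would likewise carry a konghq.com/protocol: grpc annotation so Kong proxies plaintext gRPC to it.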

Then create the resources:

kubectl apply -f all-in-one-dbless.yaml

Check

kubectl get all -n kong


NAME                               READY   STATUS    RESTARTS   AGE
pod/ingress-kong-66ffc7f58-ffqgt   1/1     Running   0          60m
pod/proxy-kong-5b968f958f-87nrq    1/1     Running   0          60m
pod/proxy-kong-5b968f958f-mrsbv    1/1     Running   0          60m

NAME                              TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                      AGE
service/kong-admin                ClusterIP   None              <none>        8444/TCP                     60m
service/kong-proxy                NodePort    192.168.252.190   <none>        80:32248/TCP,443:30683/TCP   60m
service/kong-validation-webhook   ClusterIP   192.168.254.177   <none>        443/TCP                      60m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-kong   1/1     1            1           60m
deployment.apps/proxy-kong     2/2     2            2           60m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-kong-66ffc7f58   1         1         1       60m
replicaset.apps/proxy-kong-5b968f958f    2         2         2       60m
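At this point no Ingress routes exist yet, so hitting the proxy should return Kong's default "no Route matched" response. A quick check (32248 is this cluster's NodePort for port 80, from the output above; substitute your own node IP and port):

curl -i http://<node-ip>:32248

An HTTP 404 with a body like {"message":"no Route matched with those values"} means the proxy itself is up.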

2. Create ingress

Let me first walk through the whole request flow:

The cs.test.com domain name resolves to the load balancer's IP. On the load balancer, a listener is configured with the k8s node IPs plus the kong-proxy service port, i.e. the NodePort-type Service we changed earlier. The IngressClass binds Ingress objects to the Ingress Controller (kong-proxy is the Ingress Controller's proxy process). The Ingress Controller then matches the kong-ing Ingress by domain name and matches the corresponding backend Service by URI, and in this way the traffic is routed.

Ingress Class is a Kubernetes object whose role is to manage the relationship between Ingress and Ingress Controller, making it convenient to group routing rules and reduce maintenance costs. Ingress Class makes the binding between Ingress and Ingress Controller more flexible: multiple Ingress objects can be grouped into the same Ingress Class, and each Ingress Class can use a different Ingress Controller to handle the Ingress objects it manages. In short, Ingress Class removes the tight coupling between Ingress and Ingress Controller, so that Kubernetes users can manage Ingress Classes instead, using them to define different business-logic groups and to simplify the complexity of Ingress rules.

kong-proxy is the proxy process in the Kong Ingress Controller, handling traffic forwarding and routing. An Ingress Controller is the Kubernetes component responsible for managing Ingress resources and controlling traffic into and out of the cluster. As one such controller, the Kong Ingress Controller watches Ingress objects and custom resource definitions (CRDs) in Kubernetes and translates them into routing rules and plugin configuration for the Kong proxy, thereby managing and controlling the cluster's inbound and outbound traffic. kong-proxy is therefore the data-plane half of the Kong Ingress Controller.
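You can confirm which IngressClass the controller registered; the all-in-one manifest creates one named kong:

kubectl get ingressclass

The output should list kong with the controller ingress-controllers.konghq.com/kong.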

vim cs-test-com-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ing
  namespace: dev
spec:
  ingressClassName: kong
  rules:
  - host: cs.test.com
    http:
      paths:
      - path: /customer
        pathType: Prefix
        backend:
          service:
            name: customer-service
            port:
              number: 20008
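Apply it:

kubectl apply -f cs-test-com-ingress.yaml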

The deploy and service below are specific to my own business. If you just want to test, start an nginx deploy and service instead, change the path in the Ingress above to /, and change the port to 80; the effect is the same.

customer-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
  namespace: dev
  labels:
    app: customer
spec:
  selector:
    matchLabels:
      app: customer
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  # minReadySeconds: 30
  template:
    metadata:
      labels:
        app: customer
        tag: kobe
    spec:
      containers:
        - name: customer
          image: ccr.ccs.tencentyun.com/chens/kobe:jenkins-kobe-customer-dev-4-1b2fe90f6
          imagePullPolicy: IfNotPresent
          volumeMounts: # Mount the configmap to the directory
          - name: config-kobe
            mountPath: /biz-code/configs/
          env:
            - name: TZ
              value: "Asia/Shanghai"
            - name: LANG
              value: C.UTF-8
            - name: LC_ALL
              value: C.UTF-8
          livenessProbe:
            failureThreshold: 2
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            grpc:
              port: 21008
            timeoutSeconds: 2
          ports:
          - containerPort: 20008
            protocol: TCP
          - containerPort: 21008
            protocol: TCP
          readinessProbe:
            failureThreshold: 2
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            httpGet:
              path: /Health
              port: 20008
              scheme: HTTP
            timeoutSeconds: 2
          resources:
            limits:
              cpu: 194m
              memory: 170Mi
            requests:
              cpu: 80m
              memory: 50Mi
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: qcloudregistrykey
      restartPolicy: Always
      securityContext: {}
      serviceAccountName: default
      volumes: # reference configmap
      - name: config-kobe
        configMap:
          name: config-kobe
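The Deployment above mounts a ConfigMap named config-kobe that is not shown; you would create it first. A minimal placeholder (the key name and contents here are assumptions, adjust to your application):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-kobe
  namespace: dev
data:
  config.yaml: |
    # application configuration for the customer service goes here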

customer-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: customer
  name: customer-service
  namespace: dev
spec:
  ports:
    - name: http
      protocol: TCP
      port: 20008
      targetPort: 20008
    - name: grpc
      protocol: TCP
      port: 21008
      targetPort: 21008
  selector:
    app: customer
  sessionAffinity: None
  type: NodePort
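After applying the Deployment and Service, you can exercise the whole path through Kong. A rough check (32248 is the NodePort from the earlier output; substitute your own node IP and port):

kubectl apply -f customer-deploy.yaml -f customer-service.yaml
curl -i -H "Host: cs.test.com" http://<node-ip>:32248/customer

Even an application-level 404 here proves that Kong matched the host and path and forwarded to the backend.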

This installation method is much simpler than before, but the machinery underneath is correspondingly more complex, so it is worth understanding the workflow.

Creating multiple Kongs in one k8s cluster (the approach below still has some problems; it is recommended to deploy only one Kong per cluster and use multiple Ingresses to proxy different domain names, i.e. different environments).

For example, suppose development, testing, and acceptance share one k8s cluster, separated by namespaces, and a Kong has already been deployed for the dev environment by the method above. That first deployment created many cluster-scoped roles and permissions, so deploying a second Kong will run into conflicts. The following steps let you create multiple Kongs across multiple namespaces.

1. Download the file

wget https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/v2.9.3/deploy/single/all-in-one-dbless.yaml

2. Modify the file

Find the ClusterRoleBinding sections, then add the ServiceAccount of the corresponding environment to subjects. There are three such sections in total; add the namespace's account to each of them:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-ingress
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: kong
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress-gateway
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-ingress-gateway
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: kong
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress-knative
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-ingress-knative
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: kong
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: test
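Once everything is deployed (step 3), you can verify that both ServiceAccounts actually appear in a binding:

kubectl describe clusterrolebinding kong-ingress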

3. Deployment

Deploy all-in-one-dbless.yaml first

kubectl create -f all-in-one-dbless.yaml

At this time, a kong will be created under the kong namespace

kubectl get all -n kong

NAME                               READY   STATUS    RESTARTS   AGE
pod/ingress-kong-66ffc7f58-x4l2j   1/1     Running   0          8m42s
pod/proxy-kong-5b968f958f-hz6tl    1/1     Running   0          8m42s
pod/proxy-kong-5b968f958f-sqxhd    1/1     Running   0          8m42s

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/kong-admin                ClusterIP   None             <none>        8444/TCP                     8m42s
service/kong-proxy                NodePort    192.168.253.97   <none>        80:31231/TCP,443:30508/TCP   8m42s
service/kong-validation-webhook   ClusterIP   192.168.253.22   <none>        443/TCP                      8m42s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-kong   1/1     1            1           8m42s
deployment.apps/proxy-kong     2/2     2            2           8m42s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-kong-66ffc7f58   1         1         1       8m42s
replicaset.apps/proxy-kong-5b968f958f    2         2         2       8m42s

Then, with the following file, you can deploy a new Kong in the test namespace (this file is essentially the second half of all-in-one-dbless.yaml):

test-all-in-one-dbless.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kong-serviceaccount
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kong-leader-election
  namespace: test
rules:
- apiGroups:
  - ""
  - coordination.k8s.io
  resources:
  - configmaps
  - leases
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kong-leader-election
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kong-leader-election
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: test
---
apiVersion: v1
kind: Service
metadata:
  name: kong-admin
  namespace: test
spec:
  clusterIP: None
  ports:
  - name: admin
    port: 8444
    protocol: TCP
    targetPort: 8444
  selector:
    app: proxy-kong
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: test
spec:
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 8000
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: proxy-kong
    # type: LoadBalancer
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: kong-validation-webhook
  namespace: test
spec:
  ports:
  - name: webhook
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ingress-kong
  name: ingress-kong
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-kong
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
        kuma.io/service-account-token-volume: kong-service-account-token
        traffic.sidecar.istio.io/includeInboundPorts: ""
      labels:
        app: ingress-kong
    spec:
      automountServiceAccountToken: false
      containers:
      - env:
        - name: CONTROLLER_KONG_ADMIN_SVC
          value: test/kong-admin # must point at this namespace's kong-admin
        - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
          value: "true"
        - name: CONTROLLER_PUBLISH_SERVICE
          value: test/kong-proxy # likewise, publish this namespace's proxy service
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: kong/kubernetes-ingress-controller:2.9.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: ingress-controller
        ports:
        - containerPort: 8080
          name: webhook
          protocol: TCP
        - containerPort: 10255
          name: cmetrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kong-serviceaccount-token
          readOnly: true
      serviceAccountName: kong-serviceaccount
      volumes:
      - name: kong-serviceaccount-token
        projected:
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: proxy-kong
  name: proxy-kong
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy-kong
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
        kuma.io/service-account-token-volume: kong-service-account-token
        traffic.sidecar.istio.io/includeInboundPorts: ""
      labels:
        app: proxy-kong
    spec:
      automountServiceAccountToken: false
      containers:
      - env:
        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:8000 reuseport backlog=16384, 0.0.0.0:8443 http2 ssl reuseport
            backlog=16384
        - name: KONG_PORT_MAPS
          value: 80:8000, 443:8443
        - name: KONG_ADMIN_LISTEN
          value: 0.0.0.0:8444 http2 ssl reuseport backlog=16384
        - name: KONG_STATUS_LISTEN
          value: 0.0.0.0:8100
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "2"
        - name: KONG_KIC
          value: "on"
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ROUTER_FLAVOR
          value: traditional
        image: kong:3.2
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/bash
              - -c
              - kong quit
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: proxy
        ports:
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-ssl
          protocol: TCP
        - containerPort: 8100
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      serviceAccountName: kong-serviceaccount
      volumes:
      - name: kong-serviceaccount-token
        projected:
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: kong-test
spec:
  controller: ingress-controllers.konghq.com/kong
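One caveat: the IngressClass created here is named kong-test, but by default the ingress controller claims the class named kong. For this second controller to pick up only kong-test Ingresses, you would most likely also need to set the class in the test ingress-kong Deployment's env, along these lines:

        - name: CONTROLLER_INGRESS_CLASS
          value: kong-test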

4. View

As you can see, it looks the same as the Kong under the kong namespace; the two share the same cluster roles:

kubectl get all -n test


NAME                               READY   STATUS    RESTARTS   AGE
pod/ingress-kong-66ffc7f58-8b8kk   1/1     Running   0          2m44s
pod/proxy-kong-5b968f958f-vg7nh    1/1     Running   0          2m44s
pod/proxy-kong-5b968f958f-zkww7    1/1     Running   0          2m44s

NAME                              TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                      AGE
service/kong-admin                ClusterIP   None              <none>        8444/TCP                     2m44s
service/kong-proxy                NodePort    192.168.254.246   <none>        80:30101/TCP,443:31093/TCP   2m44s
service/kong-validation-webhook   ClusterIP   192.168.252.220   <none>        443/TCP                      2m44s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-kong   1/1     1            1           2m44s
deployment.apps/proxy-kong     2/2     2            2           2m44s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-kong-66ffc7f58   1         1         1       2m44s
replicaset.apps/proxy-kong-5b968f958f    2         2         2       2m44s

5. Test

Testing is the same as above: create an Ingress, then a deploy and a service.
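For example, an Ingress bound to this environment's class could look like the following (the host and backend are placeholders for whatever nginx deploy/service you spin up in the test namespace):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ing-test
  namespace: test
spec:
  ingressClassName: kong-test
  rules:
  - host: test.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80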