Level 6 K8s Overcoming Strategy (2) – Stateless Service Deployment

——> Course videos are shared simultaneously on Toutiao and Bilibili

This lesson is very important. Once you start working with K8s, most of you will be publishing business services with a Deployment. Follow along with Brother Bo to get to know this beast called Deployment.

K8s manages the life cycle of Pods through various Controllers. To cover different business scenarios, K8s provides Controllers such as Deployment, ReplicaSet, DaemonSet, StatefulSet, Job, and CronJob. Here we start with Deployment, the most commonly used controller in production, which is suited to publishing stateless applications.

Let’s first run a Deployment instance:

# Create a deployment that references the nginx service image. The replica count defaults to 1; the image tag is pinned to 1.21.6 here (without a tag it would default to latest).
# Starting from newer versions of K8s, the service APIs have been reorganized and the responsibilities of each API clarified, instead of everything being lumped together as in earlier versions.
# kubectl create deployment nginx --image=docker.io/library/nginx:1.21.6
deployment.apps/nginx created

# View creation results
# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 17s

# kubectl get rs # <-- Take a look at the replica set created by automatic association
NAME DESIRED CURRENT READY AGE
nginx-796b85dbb8 1 1 1 34s


# kubectl get pod # <-- Check the generated pod. Note that the image download takes some time, so wait patiently. Look at the 796b85dbb8 in the pod name: it is the same hash as in the rs name above, because this pod was created by that rs. Why set up such a link? I will demonstrate with examples later.
NAME READY STATUS RESTARTS AGE
nginx-796b85dbb8-7gxn8 0/1 ContainerCreating 0 13s

# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-796b85dbb8-7gxn8 1/1 Running 0 58s
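As an aside, the naming chain between the three objects is easy to verify: the ReplicaSet name is the Deployment name plus a pod-template-hash (796b85dbb8 above), and each Pod name is the ReplicaSet name plus a random suffix. A minimal pure-shell sketch using a pod name from the output above:

```shell
# Recover the owning ReplicaSet name from a Pod name by
# stripping the trailing random suffix after the last '-'.
pod_name="nginx-796b85dbb8-7gxn8"
rs_name="${pod_name%-*}"     # drop the last '-xxxxx' segment
echo "${rs_name}"            # -> nginx-796b85dbb8
```

The same trick works for any pod created by a ReplicaSet-backed controller.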



# Expand the number of pods
# kubectl scale deployment nginx --replicas=2
deployment.apps/nginx scaled

# View the pod results after expansion
# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-796b85dbb8-7gxn8 1/1 Running 0 105s
nginx-796b85dbb8-hrlrx 1/1 Running 0 3s


# Specifically check whether the pods are running on different nodes.
# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-796b85dbb8-7gxn8 1/1 Running 0 2m7s 172.20.139.70 10.0.1.203 <none> <none>
nginx-796b85dbb8-hrlrx 1/1 Running 0 25s 172.20.217.78 10.0.1.204 <none> <none>
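If you want more than an eyeball check, you can tally pods per node from the wide output. A small sketch: the printf below stands in for `kubectl get pod -o wide` (NODE is the 7th column), so the pipeline can be tried without a cluster; in practice you would pipe kubectl's real output into the same awk.

```shell
# Count how many pods landed on each node; NODE is the 7th
# column of `kubectl get pod -o wide`. Sample data via printf.
printf '%s\n' \
  'NAME READY STATUS RESTARTS AGE IP NODE NOMINATED-NODE READINESS-GATES' \
  'nginx-796b85dbb8-7gxn8 1/1 Running 0 2m7s 172.20.139.70 10.0.1.203 <none> <none>' \
  'nginx-796b85dbb8-hrlrx 1/1 Running 0 25s 172.20.217.78 10.0.1.204 <none> <none>' |
awk 'NR>1 {count[$7]++} END {for (n in count) print n, count[n]}'
```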



# Next, we replace the nginx image version in this deployment to explain why the rs replica set is needed. This is very important.
# Let's first check which nginx version is currently running. If you request a nonexistent URI, the 404 page prints the nginx version number.

# Let's create a service first. Note that with --dry-run=client the command only prints the manifest; run it again without the dry-run flags (or pipe the output into kubectl apply -f -) to actually create the service.
# kubectl create service clusterip nginx --tcp=80:80 --dry-run=client -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
status:
  loadBalancer: {}


nginx_svc=$(kubectl get svc --no-headers |awk '/^nginx/{print $3}')

curl ${nginx_svc}/1
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<center>nginx/1.21.6</center>
</body>
</html>
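Rather than reading the whole 404 page, the version token can be extracted with grep. A minimal sketch, with the here-assigned body standing in for the curl response above:

```shell
# Pull the "nginx/x.y.z" version token out of the 404 body.
body='<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<center>nginx/1.21.6</center>
</body>
</html>'
echo "$body" | grep -o 'nginx/[0-9.]*'   # -> nginx/1.21.6
```

Against a live cluster the same extraction would be `curl -s ${nginx_svc}/1 | grep -o 'nginx/[0-9.]*'`.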

# From the output you can see the version number is nginx/1.21.6. We will simulate a service release here. First, let's see which nginx image versions are available locally.
ctr -n k8s.io images ls | grep nginx

# Note the `--record` parameter at the end of the command. It records the command as the change cause, an important marker for rolling back resource creations and updates in production, and it is strongly recommended there. (Newer kubectl versions deprecate --record in favor of the kubernetes.io/change-cause annotation.)
# kubectl set image deployment/nginx nginx=docker.io/library/nginx:1.25.1 --record
deployment.apps/nginx image updated

# Watch the pod information. You can see the two old nginx pods being gradually replaced by new pods, one by one.
# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
nginx-5f5f7c68bc-csgw2 1/1 Running 0 2m18s
nginx-5f5f7c68bc-gbkl9 0/1 ContainerCreating 0 84s
nginx-796b85dbb8-hrlrx 1/1 Running 0 12m


# Let's look at nginx's rs again. We can see there are now two.
# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-5f5f7c68bc 2 2 2 5m7s
nginx-796b85dbb8 0 0 0 16m


# Now look at the describe output for nginx, and let's analyze this process in detail
# kubectl describe deployments.apps nginx
Name: nginx
Namespace: default
CreationTimestamp: Thu, 26 Oct 2023 21:50:16 +0800
Labels: app=nginx
...
RollingUpdateStrategy: 25% max unavailable, 25% max surge # Note: this controls the pace at which old and new rs pods are swapped during a rolling update. With 2 replicas, maxSurge 25% of 2 = 0.5 rounds up to 1, so at most 2+1=3 pods exist during the update; maxUnavailable 25% of 2 = 0.5 rounds down to 0, so at least 2 pods stay available throughout.
...


Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal ScalingReplicaSet 16m deployment-controller Scaled up replica set nginx-796b85dbb8 to 1
  Normal ScalingReplicaSet 14m deployment-controller Scaled up replica set nginx-796b85dbb8 to 2 from 1
  Normal ScalingReplicaSet 5m22s deployment-controller Scaled up replica set nginx-5f5f7c68bc to 1 # Start a new version of pod
  Normal ScalingReplicaSet 3m14s deployment-controller Scaled down replica set nginx-796b85dbb8 to 1 from 2 # After the above is completed, an old version will be released
  Normal ScalingReplicaSet 3m14s deployment-controller Scaled up replica set nginx-5f5f7c68bc to 2 from 1 # Then start a new version of pod
  Normal ScalingReplicaSet 65s deployment-controller Scaled down replica set nginx-796b85dbb8 to 0 from 1 # Release the last old pod
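The surge and unavailability percentages seen in the describe output can also be set explicitly in the Deployment spec. A sketch of the relevant fields (the values shown are the K8s defaults):

```yaml
# Rolling-update knobs on the Deployment spec (defaults shown).
# With replicas: 2 -> maxSurge 25% rounds up to 1 extra pod
# (at most 3 total), and maxUnavailable 25% rounds down to 0
# (at least 2 pods stay available during the rollout).
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
```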



# Rollback
# Remember the --record parameter we mentioned above? It plays a very important role here.
# Again we take the nginx service as an example. Let's first check the current nginx version number.

nginx_svc=$(kubectl get svc --no-headers |awk '/^nginx/{print $3}')
curl ${nginx_svc}/1

<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<center>nginx/1.25.1</center>
</body>
</html>

# Roll out another nginx version (1.21.6 here, simulating a new release)
# kubectl set image deployments/nginx nginx=nginx:1.21.6 --record

# Rollout completed
# curl ${nginx_svc}/1
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<center>nginx/1.21.6</center>
</body>
</html>

# Suppose we have just released a new version of the service, and online feedback reports a problem that requires an immediate rollback. Let's see how to do that on K8s.
# First check the release history. Only operations run with the `--record` parameter carry a detailed CHANGE-CAUSE record, which is why it must be added when operating in production.
# kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deployment/nginx nginx=docker.io/library/nginx:1.25.1 --record=true
3 kubectl set image deployments/nginx nginx=nginx:1.21.6 --record=true


# Select the rollback target by the revision number in front of each entry in the release history. Here we return to the previous version, that is, revision 2. Run:
# kubectl rollout undo deployment nginx --to-revision=2
deployment.apps/nginx rolled back

# After a while, once the pods finish updating, check that the rollback took effect. See how simple this operation is in K8s:
# curl ${nginx_svc}/1
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<center>nginx/1.25.1</center>
</body>
</html>



Deployment is very important, so let's review the entire deployment workflow here to deepen our understanding:


10.0.1.203 <--------------------------------------------------> 10.0.1.204

  1. kubectl sends a deployment creation request to the API Server
  2. The API Server notifies the Controller Manager to create the Deployment resource (which in turn creates its ReplicaSet)
  3. The Scheduler performs scheduling and assigns the two replica Pods to 10.0.1.203 and 10.0.1.204
  4. The kubelets on 10.0.1.203 and 10.0.1.204 create and run the Pods on their respective nodes
  5. On upgrade, the Deployment's nginx image is updated and the same flow repeats with a new ReplicaSet

Let me add here:

The configuration of these applications and the current service state are stored in etcd. When you run operations such as kubectl get pod, the API Server reads the data from etcd.

Calico assigns an IP to each pod, but note that this IP is not fixed: it changes whenever the pod is rebuilt.

Attachment: Node Management

Prevent pods from being scheduled onto a node:

kubectl cordon <node-name>

Evict all pods on a node:

kubectl drain <node-name> --delete-emptydir-data --ignore-daemonsets

This command evicts all Pods on the node (except those managed by DaemonSets); their controllers recreate them on other nodes. It is usually used when a node needs maintenance, and running it automatically cordons the node first. When maintenance is finished and kubelet is running again, use kubectl uncordon <node-name> to put the node back into service in the cluster.

Above we created the deployment from the command line, but in production we usually write a yaml configuration file and create the service with kubectl apply -f xxx.yaml. Let's now create the same deployment service from a yaml configuration file.

Note that yaml indentation, much like Python syntax, is very strict: any mistake will cause the creation to fail. Here I will teach you a practical technique for generating a well-formed yaml configuration.

# Does this command look familiar? Right, it is the deployment-creation command from above with `--dry-run -o yaml` appended. --dry-run means the command is not actually executed in K8s, and -o yaml prints the dry-run result in yaml format, making it easy to obtain a yaml configuration.

# kubectl create deployment nginx --image=nginx --dry-run -o yaml
apiVersion: apps/v1 # <--- apiVersion is the version of the current configuration format
kind: Deployment #<--- kind is the resource type to be created, here is Deployment
metadata: #<--- metadata is the metadata of the resource, name is a required metadata item
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec: #<--- The spec part is the specification of the Deployment
  replicas: 1 #<--- replicas specifies the number of replicas, the default is 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template: #<--- template defines the template of the Pod, which is an important part of the configuration file
    metadata: #<--- metadata defines the metadata of the Pod, at least one label must be defined. The key and value of label can be specified arbitrarily
      creationTimestamp: null
      labels:
        app: nginx
    spec: #<--- spec describes the specifications of the Pod. This part defines the attributes of each container in the Pod. name and image are required.
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

Let's try creating the nginx deployment from this yaml file. First, delete the nginx we created from the command line.

# To delete a resource from the command line on K8s, use the delete parameter directly.
# kubectl delete deployment nginx
deployment.apps "nginx" deleted

# You can see that the associated rs replica set has also been automatically cleared.
# kubectl get rs
No resources found in default namespace.

# The related pods are gone too.
# kubectl get pod
No resources found in default namespace.

Generate nginx.yaml file

# kubectl create deployment nginx --image=nginx --dry-run -o yaml > nginx.yaml
We notice a warning when executing the above command: --dry-run is deprecated and can be replaced with --dry-run=client. It does not stop us from generating valid yaml, but if the warning bothers you, replace --dry-run with --dry-run=client as the prompt suggests.
# Then we vim nginx.yaml and change the number of replicas: 1 to replicas: 2
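If you prefer not to open vim, the same edit can be scripted with sed, which is handy in CI pipelines. A minimal sketch (it assumes the nginx.yaml generated above exists in the current directory):

```shell
# Bump the replica count in the generated manifest from 1 to 2,
# then show the changed line to confirm the edit took effect.
sed -i 's/replicas: 1/replicas: 2/' nginx.yaml
grep 'replicas:' nginx.yaml
```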

# Start creating. We use apply for all subsequent commands to create resources based on yaml files.
# kubectl apply -f nginx.yaml
deployment.apps/nginx created

# View the created resources. A little trick here: separate resource types with commas so you can view several resources with one command.
# kubectl get deployment,rs,pod
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 2/2 2 2 116s

NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-f89759699 2 2 2 116s

NAME READY STATUS RESTARTS AGE
pod/nginx-f89759699-bzwd2 1/1 Running 0 116s
pod/nginx-f89759699-qlc8q 1/1 Running 0 116s

An advanced production-grade nginx yaml configuration

---
kind: Service
apiVersion: v1
metadata:
  name: new-nginx
spec:
  selector:
    app: new-nginx
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: 80

---
# Ingress configuration of new version of k8s
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-nginx
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host != 'www.boge.com' ) {
        rewrite ^ http://www.boge.com$request_uri permanent;
      }
spec:
  rules:
    - host: boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
    - host: m.boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
    - host: www.boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
#  tls:
#    - hosts:
#        - boge.com
#        - m.boge.com
#        - www.boge.com
#      secretName: boge-com-tls

# kubectl -n <namespace> create secret tls boge-com-tls --key boge.key --cert boge.crt  # note: --cert expects the signed certificate file, not a CSR

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-nginx
  labels:
    app: new-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: new-nginx
  template:
    metadata:
      labels:
        app: new-nginx
    spec:
      containers:
#-------------------------------------------------
      - name: new-nginx
        image: nginx:1.21.6
# image: nginx:1.25.1
        env:
          - name: TZ
            value: Asia/Shanghai
        ports:
        - containerPort: 80
        volumeMounts:
          - name: html-files
            mountPath: "/usr/share/nginx/html"
#-------------------------------------------------
      - name: busybox
        image: registry.cn-shanghai.aliyuncs.com/acs/busybox:v1.29.2
# image: nicolaka/netshoot
        args:
        - /bin/sh
        - -c
        - >
           while :; do
             if [ -f /html/index.html ]; then
               echo "[$(date +%F\ %T)] ${MY_POD_NAMESPACE}-${MY_POD_NAME}-${MY_POD_IP}" > /html/index.html
               sleep 1
             else
               touch /html/index.html
             fi
           done
        env:
          - name: TZ
            value: Asia/Shanghai
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
        volumeMounts:
          - name: html-files
            mountPath: "/html"
          - mountPath: /etc/localtime
            name: tz-config

#-------------------------------------------------
      volumes:
        - name: html-files
          emptyDir:
            medium: memory
            sizeLimit: 10Mi
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
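The busybox sidecar's write loop can be sanity-checked outside the cluster by setting by hand the env vars that the Downward API would inject. A minimal local sketch (the variable values here are made up for illustration):

```shell
# Simulate one pass of the sidecar loop: in the real pod these
# three variables come from the Downward API fieldRefs above.
MY_POD_NAMESPACE=default
MY_POD_NAME=new-nginx-test
MY_POD_IP=172.20.0.1
html=$(mktemp)
echo "[$(date +%F\ %T)] ${MY_POD_NAMESPACE}-${MY_POD_NAME}-${MY_POD_IP}" > "$html"
cat "$html"   # e.g. [2023-10-26 21:50:16] default-new-nginx-test-172.20.0.1
rm "$html"
```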

Let's summarize by comparing these two ways of creating resources:

Command-based approach:
1. Simple, intuitive and quick to use.
2. Suitable for temporary testing or experiments.

Configuration file-based approach:
1. The configuration file describes in detail all the requirements of the service, that is, the final state that the application will reach.
2. The configuration file provides a template for creating resources and can be deployed repeatedly.
3. Deployment can be managed like code.
4. Suitable for formal, cross-environment, and large-scale deployment.
5. This method requires familiarity with the syntax of the configuration file, which is somewhat difficult.

Deployment hands-on practice (homework)

Try creating a redis deployment service using both the command line and a yaml configuration, and take the chance to review the pod knowledge behind it.