Using k8s Init Containers to implement code upgrade releases and Deployment version rollbacks: a hands-on walkthrough

Initialization containers in a Pod: Init Containers

The premise that makes initContainers work: containers in the same Pod share the network, volumes, and other resources

Init Containers

In Kubernetes, an init container is a container that is started and run to completion before the other containers in the same Pod. Its purpose is to perform initialization logic for the main application hosted in the Pod: for example, creating necessary user accounts, performing database migrations, creating database schemas, and so on.

Init Containers design considerations

- They always run before the other containers in the Pod, so they should not contain complex logic that takes a long time to complete. Startup scripts are usually small and concise. If we find ourselves adding too much logic to the init container, we should consider moving part of it into the application container itself.

- Init containers are started and executed sequentially; an init container is not started until its predecessor has completed successfully. Therefore, if the startup work is long, consider breaking it into steps, each handled by its own init container, so that we know which step failed.

- If any init container fails, the entire Pod is restarted (unless restartPolicy is set to Never). Restarting the Pod means re-executing all containers, including every init container, so we may need to make sure the startup logic tolerates being run multiple times without causing duplication. For example, if a database migration has already completed, running the migration command again should be a no-op.

- An init container is a good candidate for delaying application initialization until one or more dependencies are available. For example, if our application relies on an API that imposes a request rate limit, it may take a while to receive a response from that API. Implementing this logic in the application container can be complex, because it has to be combined with health and readiness probes. A simpler approach is to create an init container that waits for the API to be ready before exiting successfully; the application container then starts only after the init container has completed its work (see the sketch after this list).

- Init containers cannot use liveness and readiness probes the way application containers do, because they are meant to run to completion and exit, much like Jobs and CronJobs.

- All containers in the same Pod share the same volumes and network. We can use this feature to share data between the application and its init container.
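To illustrate the "wait for a dependency" pattern mentioned above, here is a minimal sketch (it is not part of the walkthrough below); the busybox image, the mysql-service name, and the nslookup loop are assumptions made only for this example:

spec:
  initContainers:
  - name: wait-for-db
    image: busybox                      # assumed helper image
    # Block until the dependency's DNS name resolves, then exit successfully
    command: ['sh', '-c', 'until nslookup mysql-service; do echo waiting for mysql-service; sleep 2; done']
  containers:
  - name: app
    image: my-app:latest                # hypothetical application image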

Init Containers’ “requests” and “limits” behavior

As just discussed, init containers always start before the other application containers in the same Pod, so the scheduler considers the init container’s resource requests and limits first. This behavior must be thought through carefully, because it can lead to undesirable results. For example, if we have an init container and an application container, and the init container’s requests and limits are set higher than the application container’s, the whole Pod will be scheduled only onto a node that can satisfy the init container’s requirements. In other words, even if there is an idle node that could run the application container, the Pod will not be placed on that node if it cannot handle the init container’s higher resource prerequisites. Therefore, define the init container’s requests and limits as strictly as possible, and as a best practice do not set them higher than those of your application container unless absolutely necessary.
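For reference, this is roughly what such settings look like in a Pod spec; the CPU and memory values below are purely illustrative, not recommendations:

spec:
  initContainers:
  - name: init
    image: www.test.com/mytest/tomcat:v0
    resources:
      requests:                 # keep these no higher than the app container's
        cpu: "100m"
        memory: "64Mi"
      limits:
        cpu: "200m"
        memory: "128Mi"
  containers:
  - name: tomcat
    image: oxnme/tomcat
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"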

Application scenario 1: Code upgrade release

Take a Java project (war package) plus Tomcat as an example. If we always package the code and Tomcat together into one image and release them as a unit, the image will be relatively large, yet our project updates usually only touch the code. If we release frequently, every release transfers that large image, which costs resources and time. We can therefore build the war into its own image and run it as an initContainer, while the Tomcat runtime platform runs as a separate container.

Use a Deployment plus an InitContainer to implement code upgrade release and version rollback

My previous blog post covered the detailed installation steps for a k8s cluster, so I won’t repeat the installation process here.

First, prepare the Tomcat code. Here I have prepared two war packages: one is the Tomcat initial page, and the other is the zrlog project.

Then edit a Dockerfile in each code directory and package each war file into its own image.

This is the code for the initial page

[root@server151 test]# ls
Dockerfile ROOT.war
[root@server151 test]# cat Dockerfile
FROM alpine
WORKDIR /code
COPY ./ROOT.war /tmp
[root@server151 test]# docker build -t tomcat:v0 .

This is the code of the zrlog project

[root@server151 web]# ls
Dockerfile ROOT.war
[root@server151 web]# cat Dockerfile
FROM alpine
WORKDIR /code
COPY ./ROOT.war /tmp
[root@server151 web]# docker build -t zrlog:v1 .

Then push both images to our private image registry.

I won’t go into detail about pushing the images; my previous blog post covered creating and using a private image registry. If you have forgotten, you can check it out.
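For completeness, pushing would look roughly like this, assuming the registry address www.test.com/mytest that the manifest below references:

[root@server151 test]# docker tag tomcat:v0 www.test.com/mytest/tomcat:v0
[root@server151 test]# docker push www.test.com/mytest/tomcat:v0
[root@server151 web]# docker tag zrlog:v1 www.test.com/mytest/zrlog:v1
[root@server151 web]# docker push www.test.com/mytest/zrlog:v1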


With the code ready, we can go to our k8s cluster and build the Deployment.

First prepare an empty directory, and then enter this empty directory

Use kubectl to generate a Tomcat Deployment template, so we don’t have to write it entirely by hand and only need to modify part of it.

[root@server153 test]# kubectl create deployment tomcat-deploy --image tomcat --dry-run=server -o yaml > tomcat-deploy.yaml
[root@server153 test]# ls
tomcat-deploy.yaml

Then modify the template file to what we need

[root@server153 test]# cat tomcat-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat-deploy
  name: tomcat-deploy
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: tomcat-deploy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat-deploy
    spec:
      #Create init container
      initContainers:
        #code image
      - image: www.test.com/mytest/tomcat:v0
        #init container name
        name: init
        #Copy the code to the anonymous data volume
        command: ["cp","-R","/tmp/ROOT.war","/www"]
        #Mount the anonymous data volume to the /www directory in the container
        volumeMounts:
        - mountPath: /www
          name: tomcat-volume
      #Create tomcat container
      containers:
      - image: oxnme/tomcat
        #Image pull policy
        imagePullPolicy: Always
        name: tomcat
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        #Mount the data volume to the tomcat code directory
        volumeMounts:
        - mountPath: /usr/local/tomcat/webapps/
          name: tomcat-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 10

      #Create anonymous data volume
      volumes:
      - name: tomcat-volume
        emptyDir: {}

Then start building our deployment resources

[root@server153 test]# kubectl apply -f tomcat-deploy.yaml
deployment.apps/tomcat-deploy created

Note that the command output ends with "created".

Because we have not configured a Service yet, the Pod can only be accessed from inside the cluster, not from the external network.

So next, nginx will be used as a proxy to forward requests so we can view the result.

First take a look at the IP address of the pod you just created.

[root@server153 test]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tomcat-deploy-8457d967b5-ltwd6 1/1 Running 0 5m45s 10.2.1.36 server154 <none> <none>

Then install nginx and edit its configuration:

[root@server153 test]# yum install nginx -y
[root@server153 test]# vim /etc/nginx/nginx.conf
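The original post shows the nginx configuration only as a screenshot. A minimal sketch of the relevant server block, assuming we simply proxy to the Pod IP seen above and that Tomcat is listening on its default port 8080, would be:

server {
    listen 80;
    server_name _;

    location / {
        # Forward requests to the Tomcat Pod created by the Deployment
        proxy_pass http://10.2.1.36:8080;
    }
}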

After starting nginx, go to the browser to check the situation.

[root@server153 test]# systemctl start nginx.service


You can see that it can be accessed normally

Now let’s simulate a code upgrade and replace it with another code image

Continue to modify our template file
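The original post shows this edit only as a screenshot; the only change is the init container’s image, which (as the revision history later confirms) is switched from the initial-page image to the zrlog image, roughly like this:

      initContainers:
        #code image
      - image: www.test.com/mytest/zrlog:v1
        name: init
        command: ["cp","-R","/tmp/ROOT.war","/www"]
        volumeMounts:
        - mountPath: /www
          name: tomcat-volume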


After modification, update our deployment resources

[root@server153 test]# kubectl apply -f tomcat-deploy.yaml
deployment.apps/tomcat-deploy configured

Note that the command output ends with "configured", indicating that we have updated an existing resource’s configuration rather than created it as before.

[root@server153 test]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tomcat-deploy-555479bdf4-hw29f 1/1 Running 0 14s 10.2.1.37 server154 <none> <none>

Here we can see that the Pod’s name and IP have changed, indicating that the update is complete: the original Pod has been deleted and a new Pod has been created.

Since the IP has changed, we have to modify the proxy address in nginx.

[root@server153 test]# vim /etc/nginx/nginx.conf
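Again, the configuration change is only shown as a screenshot in the original; it amounts to pointing proxy_pass at the new Pod IP, for example:

    location / {
        proxy_pass http://10.2.1.37:8080;   # new Pod IP from kubectl get pods -o wide
    }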


After restarting nginx, go to the browser to access it.

[root@server153 test]# systemctl restart nginx.service

You can see that our code has been updated to that of another project

This means our code upgrade is complete.

The power of a Deployment lies not only in code upgrades, but also in rollbacks.

When a new version of code goes online, how can you quickly roll back to the previous version if problems arise?

We just updated the code, so let’s take a look at the current Deployment’s revision history.

[root@server153 test]# kubectl rollout history deployment tomcat-deploy
deployment.apps/tomcat-deploy
REVISION CHANGE-CAUSE
1 <none>
2 <none>
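As an aside, CHANGE-CAUSE shows <none> here because we never recorded a reason for each change. If you want the history to be more readable, one common option is to annotate the Deployment after each change, for example:

[root@server153 test]# kubectl annotate deployment/tomcat-deploy kubernetes.io/change-cause="switch init image to zrlog:v1"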

You can see that there are two revisions. Let’s check the details of the first one.

I only kept part of the output; you can see the image used by the first revision.

[root@server153 test]# kubectl rollout history deployment/tomcat-deploy --revision=1
deployment.apps/tomcat-deploy with revision #1
Pod Template:
  Labels: app=tomcat-deploy
pod-template-hash=8457d967b5
  Init Containers:
   init:
    Image: www.test.com/mytest/tomcat:v0

Let’s look at the second version

[root@server153 test]# kubectl rollout history deployment/tomcat-deploy --revision=2
deployment.apps/tomcat-deploy with revision #2
Pod Template:
  Labels: app=tomcat-deploy
pod-template-hash=555479bdf4
  Init Containers:
   init:
    Image: www.test.com/mytest/zrlog:v1
    

You can see that the images match the two versions we deployed.

Now suppose something goes wrong with the version we just rolled out and we need to roll back to the previous revision.

[root@server153 test]# kubectl rollout undo deployment/tomcat-deploy --to-revision=1
deployment.apps/tomcat-deploy rolled back
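If you want to watch the rollback progress instead of repeatedly polling kubectl get pods, one option is:

[root@server153 test]# kubectl rollout status deployment/tomcat-deploy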

Then look at the Pods; you can see the old version’s Pod coming up.

The new version’s Pod is not stopped until the old version’s Pod is up, which ensures the service keeps running normally.

[root@server153 test]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tomcat-deploy-555479bdf4-hw29f 1/1 Running 0 21m 10.2.1.37 server154 <none> <none>
tomcat-deploy-8457d967b5-5gdwt 0/1 PodInitializing 0 21s 10.2.1.38 server154 <none> <none>
[root@server153 test]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tomcat-deploy-8457d967b5-5gdwt 1/1 Running 0 116s 10.2.1.38 server154 <none> <none>

If you look again after a while, you can see that only the old version’s Pod remains and its status has become normal (1/1 Running).

Then we modify the nginx proxy configuration

[root@server153 test]# vim /etc/nginx/nginx.conf

Restart nginx and go to the browser to access

[root@server153 test]# systemctl restart nginx.service

You can see that you have returned to the original version of the code.

This proves that both upgrading and rolling back our code version work without problems.

It’s clear that Deployment is indeed very powerful.

I hope this helps everyone.