Table of Contents
Declarative resource management approach
1. Commonly used kubernetes management commands
1) View version information
2) View the resource object abbreviation
3) View cluster information
4) Configure kubectl automatic completion
5) View logs on a node
2. Resource management commands
1) Create resources
2) View resources
3) Delete resources
4) Enter the container in the Pod
5) View the logs of the Pod container
6) Scale up or down
7) Expose resources as a new Service to provide services externally
8) Update resources
9) Roll back resources
3. Types of Service
4. How are Service and Pod related?
5. Service endpoint
6. Method of rolling update of Pod resources in the cluster
1) Blue-green release
2) Red-black release
3) Grayscale release (canary release)
4) Rolling release
Declarative resource management method
1. Commonly used kubernetes management commands
1) View version information
kubectl version
2) View the resource object abbreviation
kubectl api-resources
3) View cluster information
kubectl cluster-info
4) Configure kubectl automatic completion
source <(kubectl completion bash)
5) View logs on a node
journalctl -u kubelet -f
2. Resource management commands
1) Create resources
Format: kubectl create [-n <namespace>] <resource type> <resource name> [options]
or: kubectl run <resource name> --image=<image> --port=<container port>
Options: --image=<image> --replicas=<number of replicas> --port=<container port> (note: --replicas was removed from kubectl run in v1.18; use a Deployment to manage replicas)
example:
//Create a namespace
kubectl create ns heitui    #ns is the resource type, heitui is the custom resource name
//Create a Pod resource in the heitui namespace
kubectl create -n heitui deployment nginx-ht --image=nginx    #deployment is the controller type that manages the Pod, nginx-ht is the custom Deployment name, --image=nginx specifies the image to use
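The same Deployment can also be written declaratively as a manifest and created with `kubectl apply -f nginx-ht.yaml`. A minimal sketch (the label `app: nginx-ht` and the replica count are illustrative assumptions):

```yaml
# nginx-ht.yaml — declarative equivalent of the create command above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ht
  namespace: heitui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ht      # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: nginx-ht
    spec:
      containers:
      - name: nginx
        image: nginx     # the image to use
```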
2) View resources
Format: kubectl get [-n <namespace>] <resource type|all> [resource name] [-o wide|yaml|json] [-w]
//View the status of the control plane components
kubectl get componentstatuses
kubectl get cs
//View namespaces
kubectl get namespace
kubectl get ns
//The role of a namespace: it allows resources of the same type in different namespaces to have the same name.
//View all resources in the default namespace
kubectl get all [-n default]
//View Pod information in the heitui namespace
kubectl get pods -n heitui
//View detailed information of a resource
kubectl describe deployment nginx-ht -n heitui
or
kubectl describe pod nginx-ht-74cbf7dd5c -n heitui
3) Delete resources
Format: kubectl delete [-n <namespace>] <resource type> <resource name>|--all [--force --grace-period=0]
--force --grace-period=0: terminate the running Pod immediately and forcefully delete the resource
//Delete the Deployment resource in the heitui namespace
kubectl delete -n heitui deployment nginx-ht
kubectl get -n heitui pods    #View the Pods in the heitui namespace
//Delete the heitui namespace
kubectl delete ns heitui
kubectl get ns    #View all namespaces
4) Enter the container in the Pod
Format: kubectl exec -it [-n <namespace>] <Pod name> [-c <container name>] -- sh|bash
//Enter the Pod container
kubectl exec -it -n heitui nginx-ht-5dcc469667-dkxh8 -- bash
5) View the log of the Pod container
Format: kubectl logs [-n <namespace>] <Pod name> [-c <container name>] [-f] [-p]
-f: follow the logs in real time
-p: view the logs of the previous container instance, before the Pod container was restarted
//View the logs of the Pod container
kubectl logs -n heitui nginx-ht-5dcc469667-dkxh8
6) Scale up or down
Format: kubectl scale [-n <namespace>] <deployment|statefulset> <resource name> --replicas=<number of replicas>
//Scale up
kubectl scale -n heitui deployment nginx-ht --replicas=3
//Scale down
kubectl scale -n heitui deployment nginx-ht --replicas=1
7) Expose resources as a new Service to provide services externally
Format: kubectl expose [-n <namespace>] deployment <resource name> --name=<custom svc name> --type=<svc type> --port=<clusterIP port> --target-port=<container port>
The svc resource types are ClusterIP|NodePort|LoadBalancer|ExternalName
example:
kubectl expose deployment nginx --port=80 --target-port=80 --name=nginx-service --type=NodePort
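The Service that `kubectl expose` generates can be sketched as a manifest; names and ports follow the example command, and the selector label `app: nginx` is an assumption (it must match whatever labels the nginx Deployment's Pods actually carry):

```yaml
# Sketch of the Service produced by the expose command above
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx        # assumed label; must match the Deployment's Pod labels
  ports:
  - port: 80          # the clusterIP port
    targetPort: 80    # the container port
```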
8) Update resources
Format:
//Change template information (update a container's image)
kubectl set image deployment <deployment name> <container name>=<image name>
//Change a Service's selector label
kubectl set selector service <svc name> 'key=value'
example:
//Update nginx to version 1.15
kubectl set image deployment/nginx nginx=nginx:1.15
//Watch the Pod state dynamically: because a rolling update is used, a new Pod is created first, then an old Pod is deleted, and so on
kubectl get pods -w
9) Rollback resources
Format:
kubectl rollout history deployment <deployment name>    #View historical revisions
kubectl rollout undo deployment <deployment name> [--to-revision=N]    #Without --to-revision=N, rolls back to the previous revision by default; otherwise rolls back to the specified revision
kubectl rollout status deployment <deployment name>    #View the rollout status
Kubernetes needs Services for two reasons: on the one hand, a Pod's IP is not fixed (the Pod may be rebuilt); on the other hand, a group of Pod replicas always needs load balancing.
3. Types of Service
1) ClusterIP
Provides a virtual IP inside the cluster for Pod access (the default Service type).
2) NodePort
Opens the same port on every node for external access. Programs outside the cluster access the in-cluster Service via NodeIP:NodePort. Each port can be used by only one Service, and the port range is limited to 30000~32767.
3) LoadBalancer
Maps the Service to a cloud provider's load balancer. External users' requests are forwarded by the load balancer to a node, reach the Service via NodeIP:NodePort, and are then forwarded to the associated Pods.
4) ExternalName
Equivalent to creating an alias for an external address; Pods in the cluster can access the related external service through this Service.
4. How are Service and Pod related?
A Service uses its Label Selector to bind to Pods with matching labels, and thereby associates with the Pods' endpoints.
For container applications, Kubernetes provides a VIP (virtual IP)-based bridge for accessing the Service, which then redirects traffic to the corresponding Pods.
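The label-based association can be sketched as two matching fragments: the Service's selector must equal the labels in the Deployment's Pod template (the key/value `app: nginx-ht` here is an illustrative assumption). The resulting Pod IPs become the Service's endpoints, visible via `kubectl get endpoints`:

```yaml
# Fragment 1: Pod template labels inside a Deployment spec
template:
  metadata:
    labels:
      app: nginx-ht      # label carried by every Pod the Deployment creates
---
# Fragment 2: Service spec whose selector matches the label above,
# so the matching Pods' IPs are registered as this Service's endpoints
spec:
  selector:
    app: nginx-ht
```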
5. Service endpoint
1) port
The port used by the Service's clusterIP.
2) nodePort
The port defined in a NodePort-type Service. It is opened on every node, i.e., the port used with the nodeIP. The default range is 30000~32767.
3) targetPort
The Service forwards requests received on port or nodePort to the Pod's container port; it must match the containerPort.
4) containerPort
The container port specified when creating the Pod.
Finally, clients inside the K8s cluster access: http://clusterIP:port --> podIP:containerPort (the service provided by the Pod container)
Clients outside the K8s cluster access: http://nodeIP:nodePort --> podIP:containerPort (the service provided by the Pod container)
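All four ports can be seen together in a NodePort Service paired with a Pod; every name and number below is an illustrative assumption:

```yaml
# Sketch: the four ports in context (names/values are examples)
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 8080        # port: used with the clusterIP inside the cluster
    targetPort: 80    # targetPort: must match the Pod's containerPort
    nodePort: 30080   # nodePort: opened on every node, range 30000~32767
---
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web          # matched by the Service selector above
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80   # containerPort: the port the container listens on
```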
6. The method of rolling update of Pod resources in the cluster
The update methods for Pod resources include: blue-green release, red-black release, grayscale (canary) release, and rolling release.
1) Blue-green release
Definition: a service-upgrade strategy with minimal downtime.
Two environments are maintained: the "blue environment" and the "green environment". Simply put, the green environment is the one currently in use and serving traffic normally, while the blue environment is the one to be updated to the new version.
Publishing process:
First, remove half of the service instances from the load-balancing list and update them to the new version. After verifying that the new version works, point production traffic to the blue environment, then upgrade the old version in the green environment, and finally add all instances back to the load balancer.
The two environments are upgraded alternately. The old version is retained for a period of time before being upgraded, to make rollback easier.
Advantages:
- The upgrade process requires no downtime and reduces user perception.
- Upgrade/rollback is fast.
Disadvantages:
- High resource costs
2) Red-black release
Definition: similar to blue-green release, red-black release also completes the version upgrade using two environments: the current environment is called the red environment, and the new-version environment is the black environment.
Publishing Process:
First, apply for new resources to deploy the black environment and deploy the new version of the service there. Once the black environment is ready, switch production traffic to it all at once, and finally release the resources of the red environment.
3) Grayscale release (canary release)
Definition: grayscale release is an incremental release; the new and old versions serve users at the same time. Its main purpose is to ensure system availability.
Publishing Process:
Upgrade a certain proportion of the services in the existing environment to the new version and let them serve traffic alongside the remaining old-version services. If the new version produces no errors, upgrade another proportion, and so on, until all services are upgraded.
Features:
- The impact on user experience is small; problems during a grayscale release have a limited blast radius.
- New version functions are gradually released, and the performance, stability and health status of the new version of the service can be gradually evaluated.
- If release automation is insufficient, the release process may cause service interruption.
4) Rolling release
Rolling release upgrades only one or a few services at a time; after the upgrade completes, they are added back to the production environment, and the process repeats until all services in the cluster are upgraded to the new version.
Rolling release has several parameters:
desired = 3            #The desired number of replicas
max surge = 25%        #The maximum number/proportion of extra replicas allowed to be created during the update, rounded up
max unavailable = 25%  #The maximum number/proportion of replicas allowed to be unavailable during the update, rounded down
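These parameters correspond to the rolling-update strategy fields of a Deployment spec; a sketch with the values above:

```yaml
# Sketch: rolling-update strategy fields in a Deployment spec
spec:
  replicas: 3              # desired
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # at most ceil(3 * 25%) = 1 extra Pod during the update
      maxUnavailable: 25%  # at most floor(3 * 25%) = 0 Pods unavailable
```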