cordon marks a node unschedulable, uncordon makes it schedulable again, drain evicts pods for node maintenance, and taint sets or updates taints on a node.

Foreword

Environment: k8s 1.22.15, docker
I kept mixing up the cordon, drain, and taint commands, so let's tell them apart.

cordon marks the node unschedulable

The kubectl cordon NODE command marks the node unschedulable, and nothing more: existing pods are not evicted, and new pods will simply no longer be scheduled onto the node.
Usage:
  kubectl cordon NODE [options]
Demo example:
#Set node2 to be unschedulable
kubectl cordon node2
#Check the node. node2 is now unschedulable: newly created pods will not be scheduled to it, and the pods that were already on it are not evicted.
[root@master ~]# kubectl describe node node2 | grep Unschedulable
Unschedulable: true
[root@master ~]# kubectl get node node2
NAME    STATUS                     ROLES   AGE    VERSION
node2   Ready,SchedulingDisabled   node    353d   v1.22.15
#Note that when a node is marked unschedulable, k8s automatically adds a taint to it, as follows:
Taints: node.kubernetes.io/unschedulable:NoSchedule
Unschedulable: true
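
#Since cordon is implemented through this taint, a pod that explicitly tolerates it can still be scheduled onto the cordoned node (this is exactly how DaemonSet pods keep running there). A minimal sketch; the pod name and image are placeholders:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: tolerate-cordon-demo    #placeholder name
spec:
  containers:
  - name: app
    image: nginx                #placeholder image
  tolerations:
  - key: node.kubernetes.io/unschedulable
    operator: Exists
    effect: NoSchedule
EOF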

#The patch command can also mark the node unschedulable
kubectl patch node node2 -p '{"spec":{"unschedulable": true}}'
#And restore it to schedulable
kubectl patch node node2 -p '{"spec":{"unschedulable": false}}'

#Besides the cordon and patch commands above, you can also edit the node resource directly to make it unschedulable.
#The unschedulable field is absent by default; add unschedulable: true under spec to mark the node unschedulable.
kubectl edit node node2
#To undo it, run kubectl edit node node2 again, find unschedulable and set it to false (or delete the field).
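
#Whichever method you use, you can read the field back directly; a quick check:
#Print the node's .spec.unschedulable field (empty output means the field is unset, i.e. schedulable)
kubectl get node node2 -o jsonpath='{.spec.unschedulable}'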

uncordon restores the node to schedulable

The cordon command marks the node unschedulable; the matching command to make it schedulable again is uncordon.

Usage:
  kubectl uncordon NODE [options]
Demo example:
#Restore node2 to schedulable
kubectl uncordon node2
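
#After uncordoning, the SchedulingDisabled mark and the unschedulable taint should both be gone; a quick check:
#STATUS should show plain Ready again
kubectl get node node2
#Should show Taints: <none>, assuming no other taints were set
kubectl describe node node2 | grep Taints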

drain evicts pods for shutdown maintenance

The drain command is used when taking a node offline for maintenance. Under the hood it does two things: it marks the node unschedulable (a cordon) and then evicts all pods on the node.

Usage:
  kubectl drain NODE [options]

#Evict the pods. By default the command errors out if the node runs DaemonSet-managed pods, so add the --ignore-daemonsets=true parameter to skip them (the ds controller would immediately recreate them anyway).
kubectl drain node2 --ignore-daemonsets=true
#Force eviction: also delete pods that are not managed by a controller (such pods are lost for good)
kubectl drain node2 --force
#Give each evicted pod a 15-minute (900-second) termination grace period
kubectl drain node2 --ignore-daemonsets=true --grace-period=900
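
#One more flag worth knowing (not used in the demo above): drain also refuses to evict pods that use emptyDir volumes unless you acknowledge that their local data will be deleted:
#Also evict pods using emptyDir volumes; their local data is lost
kubectl drain node2 --ignore-daemonsets=true --delete-emptydir-data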

#Note that drain also marks the node unschedulable and adds the same taint that cordon does:
Taints: node.kubernetes.io/unschedulable:NoSchedule
Unschedulable: true
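
#To verify the drain, list what is still running on the node; only DaemonSet-managed pods should remain. A quick check:
#List all pods still scheduled on node2
kubectl get pods -A -o wide --field-selector spec.nodeName=node2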

#Putting it together, taking a node offline for maintenance looks like this:
#First cordon the node so no new pods are scheduled onto it; the pods already on it are not evicted yet (drain cordons the node itself, so this step is mostly belt and braces)
kubectl cordon node2
#Then drain the node to evict its pods
kubectl drain node2 --ignore-daemonsets=true
#Once maintenance is done, bring the node back online and restore scheduling
kubectl uncordon node2
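
#If you do this often, the whole cycle fits in a small script; a sketch, with the maintenance work itself left as a placeholder:
#!/bin/bash
set -e
NODE=node2
#Stop new pods from landing on the node
kubectl cordon "$NODE"
#Evict the existing pods (ds pods are skipped)
kubectl drain "$NODE" --ignore-daemonsets=true
#... perform the actual maintenance here (placeholder) ...
#Bring the node back into scheduling
kubectl uncordon "$NODE"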

taint sets or updates node taints

A taint is key-value attribute data defined on a node; it makes the node refuse to schedule pods onto it unless the pod carries a matching toleration that accepts the taint (a pod-side sketch follows the effect list below).
Taints are part of pod scheduling policy: a taint acts on a node and declares whether, and how, the node accepts pods.
A taint has the format key=value:effect, where key and value are labels you choose yourself and effect describes what the taint does. Three effects are supported:
PreferNoSchedule: kubernetes tries to avoid scheduling pods onto nodes with this taint, unless there is nowhere else to put them;
NoSchedule: kubernetes will not schedule new pods onto nodes with this taint, but pods already running on the node are unaffected;
NoExecute: kubernetes will not schedule new pods onto nodes with this taint, and also evicts pods already running on them;
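
For reference, this is what a matching toleration looks like on the pod side. A minimal sketch for the dedicated=special-user:NoSchedule taint used in the demo below; the pod name and image are placeholders:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo        #placeholder name
spec:
  containers:
  - name: app
    image: nginx               #placeholder image
  tolerations:
  - key: dedicated
    operator: Equal
    value: special-user
    effect: NoSchedule
EOF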

Usage:
  kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]

Demo example:
#Add a taint with key/value dedicated=special-user and effect NoSchedule (add --overwrite to update a taint that already exists)
kubectl taint nodes node2 dedicated=special-user:NoSchedule
#Remove the taint with key dedicated and effect NoSchedule
kubectl taint nodes node2 dedicated:NoSchedule-
#Remove all taints whose key is dedicated
kubectl taint nodes node2 dedicated-
#View the taints
[root@master ~]# kubectl describe nodes node2 | grep Taints
Taints: dedicated=special-user:NoSchedule
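
#describe works, but you can also read the taints straight from the node spec; a quick alternative:
#Print the taints array from the node spec
kubectl get node node2 -o jsonpath='{.spec.taints}'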

#Note that setting a taint does not touch the node's Unschedulable field, as shown below; a taint is only a taint, and a pod that tolerates it can still be scheduled onto the node.
[root@master ~]# kubectl describe node node2 | grep -A1 -i Taints
Taints:        dedicated=special-user:NoSchedule
Unschedulable: false
[root@master ~]# kubectl get node node2
NAME    STATUS   ROLES   AGE    VERSION
node2   Ready    node    353d   v1.22.15

#Note that taint is no substitute for drain, as shown below
kubectl taint nodes node2 dedicated=special-user:NoExecute
#Testing shows the ordinary pods on node2 are evicted, but the ds pods are not, because they carry a toleration that matches the NoExecute taint.
[root@master ~]# kubectl describe node node2 | grep -A1 -i Taints
Taints:        dedicated=special-user:NoExecute
Unschedulable: false
[root@master ~]# kubectl get node node2   #The node is still schedulable: any pod with a matching toleration can land on node2
NAME    STATUS   ROLES   AGE    VERSION
node2   Ready    node    353d   v1.22.15
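
#You can confirm why the ds pods survive by printing their tolerations; many system DaemonSets (kube-proxy on kubeadm clusters, for example) carry a blanket operator: Exists toleration that matches every taint. A sketch; the pod name is a placeholder:
#Print the tolerations of a ds pod still running on node2
kubectl get pod POD_NAME -n kube-system -o jsonpath='{.spec.tolerations}'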

Summary

1. The cordon command marks a node unschedulable.
#Mark the node unschedulable; existing pods are not evicted
kubectl cordon node2
#Restore the node to schedulable
kubectl uncordon node2

2. The drain command is used for taking a node offline for shutdown maintenance.
#First cordon the node so no new pods are scheduled onto it; the pods already on it are not evicted yet
kubectl cordon node2
#Then drain the node to evict its pods
kubectl drain node2 --ignore-daemonsets=true
#Once maintenance is done, bring the node back online and restore scheduling
kubectl uncordon node2

To recap: whether you mark the node unschedulable with cordon or evict its pods with drain for offline maintenance, the node ends up tainted, as follows:
Taints: node.kubernetes.io/unschedulable:NoSchedule
Unschedulable: true

3. The taint command sets or updates node taints.
A taint has the format key=value:effect; key and value are labels you choose yourself, and effect describes what the taint does. Three effects are supported:
PreferNoSchedule: kubernetes tries to avoid scheduling pods onto nodes with this taint, unless there is nowhere else to put them;
NoSchedule: kubernetes will not schedule new pods onto nodes with this taint, but pods already running on the node are unaffected;
NoExecute: kubernetes will not schedule new pods onto nodes with this taint, and also evicts pods already running on them;

#Add a taint with key/value dedicated=special-user and effect NoSchedule
kubectl taint nodes node2 dedicated=special-user:NoSchedule
#Remove the taint with key dedicated and effect NoSchedule
kubectl taint nodes node2 dedicated:NoSchedule-
#Remove all taints whose key is dedicated
kubectl taint nodes node2 dedicated-
#Taint node2 with effect NoExecute
kubectl taint nodes node2 dedicated=special-user:NoExecute
#Note: do not confuse a tainted node with an unschedulable one. A tainted node can still be scheduled; any pod with a matching toleration can land on node2.
[root@master ~]# kubectl describe node node2 | grep -A1 -i Taints
Taints:        dedicated=special-user:NoExecute
Unschedulable: false   #the taint does not mark the node unschedulable
[root@master ~]# kubectl get node node2
NAME    STATUS   ROLES   AGE    VERSION
node2   Ready    node    353d   v1.22.15