Kubernetes Label & Selector

Kubernetes-BestPractices

Author: rab

Directory

    • Preface
    • 1. Labels
      • 1.1 Definition
      • 1.2 Examples
        • 1.2.1 Node labels
        • 1.2.2 Object labels
    • 2. Selector
      • 2.1 Node Selector
      • 2.2 Service Selector
      • 2.3 Deployment Selector
      • 2.4 StatefulSet Selector
      • 2.5 DaemonSet Selector
      • 2.6 HorizontalPodAutoscaler Selector
      • 2.7 NetworkPolicy Selector
      • 2.8 Pod Affinity and Anti-Affinity Rules
    • Summary

Preface

In Kubernetes, Label and Selector are two key concepts for identifying and selecting objects. They are very useful when defining and managing relationships and associations between resource objects.

1. Labels

1.1 Definition

A Label is a key-value pair that can be attached to Kubernetes objects such as Pods, Services, and Deployments, allowing you to attach custom properties to an object. For example, you can add labels to a Pod to indicate its purpose, environment, or owner (e.g. app=web, environment=production, owner=ops). Labels are used to identify and classify objects, and to filter and select them in different contexts, but they have no direct effect on an object's behavior.
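
For example, the labels just mentioned sit under metadata.labels in an object's manifest. A minimal Pod sketch (the Pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
    environment: production
    owner: ops
spec:
  containers:
  - name: web
    image: nginx:latest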

1.2 Examples

1.2.1 Node labels

By default, the Scheduler may place Pods on any available worker node, but in some cases we need to deploy Pods to specific worker nodes, for example scheduling Pods that generate heavy disk I/O onto worker nodes equipped with SSDs to guarantee their I/O performance.

kubectl label node k8s-work2 disktype=ssd
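
If a label is no longer needed, it can be removed by appending a dash to the label key:

kubectl label node k8s-work2 disktype-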

How do you view a node's labels?

# View all node labels
kubectl get node --show-labels

# View the specified node label
kubectl get node k8s-work2 --show-labels

And how do you query resources by their labels?

kubectl get node -l disktype=ssd

# Lists the nodes that carry the disktype=ssd label
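
kubectl also supports set-based selectors and multiple requirements in a single -l expression, for example:

# Nodes whose disktype is either ssd or nvme
kubectl get node -l 'disktype in (ssd,nvme)'

# Nodes that have a disktype label, regardless of its value
kubectl get node -l disktype

# Nodes whose disktype is anything but ssd
kubectl get node -l 'disktype!=ssd'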

1.2.2 Object labels

In addition to labeling nodes, we can also label objects such as Pods, Services, and Deployments. Usually we specify a resource's labels directly in its YAML file, but you can also add labels to objects from the command line.

1. Command-line method

kubectl label svc mynginx -n yournamespace env=canary version=v1

# Add two labels to the Service resource mynginx: env=canary and version=v1.
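
Note that kubectl label refuses to change a label that already exists unless you pass --overwrite:

kubectl label svc mynginx -n yournamespace version=v2 --overwrite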

2. YAML file method

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest

Labels are defined under labels:, and you can define several labels at once. The label in this example is app: my-app.

How do you view a Service's labels?

# View the labels of all Services
kubectl get svc -n yournamespace --show-labels

# View the labels of a specific Service
kubectl get svc mynginx -n yournamespace --show-labels

And how do you query resources by their labels?

# View resources with the version=v1 label in all namespaces
kubectl get svc --all-namespaces -l version=v1

# View resources with the version=v1 label in a specific namespace
kubectl get svc -n yournamespace -l version=v1
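
Multiple equality requirements can be combined with commas; a resource must match all of them (logical AND):

kubectl get svc -n yournamespace -l env=canary,version=v1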

2. Selector

In Kubernetes, Selectors are used to filter and select resource objects based on their labels. Selectors are a key concept for defining associations and dependencies and for establishing relationships between different objects.

2.1 Node Selector

This selector deploys Pods onto nodes that carry the labels you specify. The YAML configuration is as follows.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        ports:
        - containerPort: 80
      nodeSelector:
        disktype: ssd

In this case, the Nginx Pods will only be scheduled onto nodes that carry the label disktype: ssd.
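
You can verify the placement by listing the Pods together with the node each one landed on:

kubectl get pod -l app=demo -o wide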

2.2 Service Selector

In Kubernetes, Service resources usually use Selectors to select the Pods associated with them. Services allow you to route requests to Pods that match specific labels. For example, you can create a Service and use a Label Selector to associate it with a specific application or version, and then access those Pods through the Service.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
  namespace: myweb-2
  labels:
    app: stateful
spec:
  serviceName: myweb-2
  replicas: 5
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: demo-2
  template:
    metadata:
      labels:
        app: demo-2
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.4
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-srv
  namespace: myweb-2
spec:
  selector:
    app: demo-2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30334
  type: NodePort

Look at spec.selector in the Service at the end of the YAML file: app: demo-2 determines which Pods the Service is associated with. Any Pod carrying the app: demo-2 label is "managed" by this Service, i.e. receives traffic through it.
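
You can confirm which Pod IPs the Service has actually selected by inspecting its Endpoints object:

kubectl get endpoints nginx-srv -n myweb-2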

2.3 Deployment Selector

In ReplicaSet and Deployment configurations, Selectors are used to determine which Pods will be managed by these controllers. When you create a ReplicaSet or Deployment, you can specify their Selector to ensure that they manage Pods with specific labels.

apiVersion: v1
kind: Namespace
metadata:
  name: myweb
  labels:
    name: ops
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: myweb
  labels:
    app: webdemo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.4
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-srv
  namespace: myweb
spec:
  selector:
    app: demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30333
  type: NodePort

Look at spec.selector.matchLabels in the Deployment in the middle of the YAML file: app: demo determines which Pods the Deployment is associated with. All Pods carrying that label are "managed" by this Deployment.

2.4 StatefulSet Selector

The StatefulSet Selector is similar to the Deployment Selector (no demonstration here); it also uses a Selector to select the Pods it manages. The difference is that StatefulSets manage stateful applications, where each Pod typically gets a stable, ordered identity (an ordinal index appended to its name).

2.5 DaemonSet Selector

Also similar to the Deployment Selector, in a DaemonSet the Selector is used to select the Pods it manages on every node, and a node selector can additionally restrict which nodes a DaemonSet's Pods run on, as sketched below.
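
A minimal sketch (the name log-agent and the image are illustrative): a DaemonSet that runs a logging agent only on nodes labeled disktype: ssd.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      # Restrict the DaemonSet to nodes carrying this label
      nodeSelector:
        disktype: ssd
      containers:
      - name: log-agent
        image: fluentd:latest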

2.6 HorizontalPodAutoscaler Selector

HorizontalPodAutoscaler (HPA) is used to automatically scale the number of Pod replicas. It finds the Pods to scale through the target workload's Label Selector and adjusts the replica count based on CPU usage or other metrics to meet the application's performance needs.

For example, suppose there is a Deployment running an application and you want to scale the number of Pods automatically based on CPU usage. In the Deployment's configuration, a Selector identifies the Pods it manages.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: my-app:latest

Next, you create a HorizontalPodAutoscaler resource to configure automatic scaling.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

Field description:

  • scaleTargetRef determines the target workload (here a Deployment) to autoscale.
  • minReplicas and maxReplicas bound the number of Pod replicas.
  • metrics configures the metrics to monitor; here we track CPU utilization with a target average of 50%.

In this example, the set of Pods the HPA scales is determined by the target Deployment referenced in scaleTargetRef, since the HPA reuses the target's Selector to find the replicas. To autoscale other kinds of workloads (such as a StatefulSet), adjust scaleTargetRef as needed.
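
For example, targeting a StatefulSet only requires changing the reference (a fragment sketch; the name my-app-statefulset is illustrative):

  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: my-app-statefulset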

2.7 NetworkPolicy Selector

In a NetworkPolicy, Selectors are used to define network policies that allow or deny communication between Pods with specific labels, which helps enforce network security policies. Here is a simple NetworkPolicy example that demonstrates how to use a Selector to control traffic:

Environment: There are two applications, a frontend and a backend. You want frontend to be able to access backend, but you do not want frontend to be able to communicate with any other application.

  • backend program

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backend-app
    spec:
      selector:
        matchLabels:
          app: backend
      template:
        metadata:
          labels:
            app: backend
        spec:
          containers:
            - name: backend-container
              image: backend-image:latest
    
  • frontend program

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend-app
    spec:
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
            - name: frontend-container
              image: frontend-image:latest
    
  • Create NetworkPolicy (implement traffic control)

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
        - Ingress
      ingress:
        - from:
          - podSelector:
              matchLabels:
                app: frontend
    

    In the example above, we created a NetworkPolicy named allow-frontend-to-backend. It selects the backend Pods (Label app: backend) and allows ingress traffic to them only from Pods labeled app: frontend.

    Since Pods selected by a NetworkPolicy deny any ingress not explicitly allowed, this ensures that only frontend Pods can communicate with the backend Pods; other Pods will not be allowed to reach them.
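
    Covering the second half of the requirement (frontend must not talk to anything else) needs an additional Egress policy on the frontend Pods. A sketch under that assumption (note this also blocks DNS lookups from frontend; in practice you would usually add an extra egress rule for DNS):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-frontend-egress
    spec:
      podSelector:
        matchLabels:
          app: frontend
      policyTypes:
        - Egress
      egress:
        - to:
          - podSelector:
              matchLabels:
                app: backend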

2.8 Pod Affinity and Anti-Affinity Rules

In Kubernetes, Pod affinity (Affinity) and anti-affinity (Anti-Affinity) rules define how Pods are placed relative to other Pods during node scheduling. These rules can be used to optimize node selection to meet performance, availability, or other needs.

For example, suppose we have a distributed application with two types of services: frontend and backend. We want to deploy the frontend and backend Pods to different nodes to ensure availability. Here we can use Pod anti-affinity rules.

  • Create Pod resources and label them

    In the example below, the frontend Pod and the backend Pod are given different labels.

    apiVersion: v1
    kind: Pod
    metadata:
      name: frontend-pod
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend-container
          image: my-frontend-image:latest
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: backend-pod
      labels:
        app: backend
    spec:
      containers:
        - name: backend-container
          image: my-backend-image:latest
    
  • Create a Pod Anti-Affinity rule

    To ensure that frontend and backend Pods are not scheduled on the same node.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: app
                        operator: In
                        values:
                          - backend
                  topologyKey: kubernetes.io/hostname
          containers:
            - name: frontend-container
              image: my-frontend-image:latest
    

    Simple field explanation:

    affinity: Defines affinity rules that control how Pods are scheduled onto nodes.

    podAntiAffinity: Defines anti-affinity rules that keep a Pod off nodes already running Pods that match the selector.

    requiredDuringSchedulingIgnoredDuringExecution: A hard requirement enforced at scheduling time (and ignored for Pods that are already running).

    labelSelector: Defines the label selector to match.

    matchExpressions: Contains a set of label matching conditions.

    key: app: The label key to match is "app".

    operator: In: Use the "In" operator to match the label value against the given list of values.

    values: [backend]: The value to match is "backend", so these frontend Pods are kept off nodes that already run backend Pods.

    topologyKey: kubernetes.io/hostname: Specifies the topology domain for the rule; using the node hostname makes the rule apply per node.
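
    If strict separation is too restrictive (for example, when there are fewer nodes than replicas), the rule can be softened to a preference. A sketch of just the affinity stanza:

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          # Higher weight means a stronger (but still non-binding) preference
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - backend
              topologyKey: kubernetes.io/hostname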

Summary

1. Labels

  • Identify applications and components: Labels can be used to identify and distinguish different applications and components.

    By adding appropriate Labels to resource objects such as Pods, Services, Deployments, etc., you can clearly understand which resources belong to which application or service.

  • Environment Partitioning: Labels can be used to group resource objects into different environments, such as development, test, and production.

    This makes it possible to manage and deploy the same application in different environments while ensuring separation and isolation of resources.

  • Version management: Labels can be used to identify different versions of an application or service.

    This is useful for rolling upgrades or rollbacks between different versions.

  • Owner and application hierarchies: Labels can be used to track the owner of a resource object and create application hierarchies.

    For example, you can use Labels to identify which Deployment manages which ReplicaSet, and which ReplicaSet manages which Pods.

  • Querying and filtering: Labels can be used to perform querying and filtering operations to find resource objects that match specific criteria.

    This is useful for performing operations, monitoring, or debugging.

2. Selector

  • Service selector: When creating a Kubernetes Service, you can use a Selector to choose which Pods to route traffic to.

    Services can be associated to Pods with specific labels to provide load balancing and service discovery.

  • Deployment and ReplicaSet selectors: You can use Selectors to determine which Pods are to be managed by these controllers.

    Controllers can be easily associated with Pods with specific labels.

  • StatefulSet selector: StatefulSet is used to manage stateful applications, such as databases.

    You can use a Selector to select Pods managed by a StatefulSet to ensure that they meet specific label requirements.

  • NetworkPolicy selector: When creating a NetworkPolicy, you can use a Selector to define which network communications between Pods are allowed or denied.

    This helps implement network security policies.

  • Pod affinity and anti-affinity rules: Pod affinity and anti-affinity rules allow us to define how Pods are scheduled to nodes based on label selectors.

    Improve availability and reliability by defining rules to ensure that Pods with specific labels are not scheduled to the same node.

  • DaemonSet Selector: When creating a DaemonSet, the Selector chooses the Pods it manages, and a node selector can restrict which nodes those Pods run on.

    This is useful for running system-level tasks on specific nodes.

  • Horizontal Pod Autoscaler (HPA) Selector: HPA uses Selectors to select Pods in a target Deployment or ReplicaSet and automatically scale up or down the number of Pod replicas based on metrics such as CPU usage.

  • Custom controller selector: If you create a custom controller, you can use a Selector to select the resource objects your controller manages and how to associate them with your controller.

-END