[K8S series] In-depth analysis of stateless services

Contents

Preface

1. Introduction to stateless services

1.1 Advantages

1.2 Usage scenarios

1.3 Resource types

1.4 Summary

2 Introduction

2.1 Deployment

Usage scenarios:

2.2 ReplicaSet

Usage scenarios

2.3 Pod

Example Pod resource definition

2.4 Service

Create a Deployment:

Create a Service:

Summary:

2.5 Ingress

Install the Ingress controller

Create Ingress rules

Apply Ingress rules

Access services

Summary:

2.6 ConfigMap

3 Summary

Summary of usage scenarios

Summary of the differences between ReplicaSet and Deployment:


Preface

Nothing is pure luck or coincidence: either it was meant to be, or someone has been quietly working hard for it.


Kubernetes (k8s) is a container orchestration platform that runs applications and services in containers. This article takes a close look at stateless services.

1. Introduction to stateless services

A stateless service is a service that does not need to save any data state, nor does it need to maintain any session information.

These services are usually designed to be scalable and highly available, since instances can be added or removed at any time without affecting the availability of the application.

1.1 Advantages

The advantages of stateless services include:

  1. Scalability: Since stateless services do not need to maintain state information, they can easily scale horizontally to handle high traffic and load situations.

  2. High Availability: Stateless services can have multiple replicas to increase availability, and they can be deployed or removed at any point of time without affecting the availability of the application.

  3. Simplicity: Since stateless services do not need to maintain state information, they are often simpler and easier to maintain than stateful services.

1.2 Usage scenarios

The following is a detailed introduction to the usage scenarios of some stateless services:

  1. Web Applications: Stateless services are well suited for web applications such as web servers, API servers, load balancers, etc. These applications can have multiple replicas to increase availability, and they can scale horizontally to handle more requests.

  2. Batch jobs: Stateless services can also be used for batch jobs, such as data processing, log analysis, image processing, etc. These jobs can be started or stopped at any point in time and can run in parallel on multiple nodes for greater efficiency.

  3. Queue service: Stateless services can also be used for queue services, such as message queues, task queues, etc. These services can send messages or tasks to queues and have them consumed by multiple worker nodes to improve processing efficiency.

1.3 Resource types

In Kubernetes, the types of stateless service resources mainly include the following:

  1. Deployment: A Deployment is a resource type used to create scalable stateless services. It allows to define the number of replicas and use the Rolling Update strategy to upgrade the service, thus maintaining the availability of the service.

  2. ReplicaSet: ReplicaSet is the underlying implementation of Deployment, used to ensure that the specified number of Pod replicas is running at any time. When a Pod crashes or is deleted, the ReplicaSet automatically creates a replacement so that the desired number of Pod replicas is maintained.

  3. Pod: Pod is the smallest deployable unit in Kubernetes, used to run one or more containers. Usually, each Pod runs only one container, but in some cases, it may be necessary to run multiple containers in the same Pod.

  4. Service: Service is a resource type that groups multiple Pods together and provides a single entry point for accessing them. A Service can expose Pods through different types such as ClusterIP, NodePort, or LoadBalancer.

  5. Ingress: Ingress is a k8s resource type used to route HTTP and HTTPS traffic within the cluster to the Service. Ingress can forward traffic to different services based on domain names or URL paths, and supports functions such as SSL/TLS termination and load balancing.

  6. ConfigMap: ConfigMap is a k8s resource type used to inject configuration information into stateless services. ConfigMap can store information such as key-value pairs and configuration files, and mount them into Pods as environment variables, command-line parameters, or configuration files.

Ingress, like Service and ConfigMap, are resource objects in Kubernetes. They are abstract concepts and do not contain any specific state information. An Ingress simply defines a set of rules, while a Service defines how to expose one or more Pods so that they can be accessed from inside or outside the cluster. Therefore, both Ingress and Service can be regarded as stateless Kubernetes resource objects.

In summary, using the stateless service resource type in Kubernetes makes it easier to manage and scale stateless services and improve their availability and reliability.

1.4 Summary

Stateless services are not suitable for all applications. Some applications may need to maintain session information, state information or persistent data, which requires the use of stateful services.

Therefore, when designing an application, you need to consider the specific requirements of the application to determine whether you should use stateless or stateful services.

2 Introduction

2.1 Deployment

The Deployment resource controller converts the user-defined Deployment objects into actual running Pods, and ensures that the state of the Pods is consistent with the user-defined state. If a Pod is deleted for some reason, the Deployment controller will start a new Pod to ensure that the specified number of Pods are running at any time.

Deployment is a commonly used resource type in Kubernetes to define a scalable set of Pods and ensure that they run in the cluster according to the required number of replicas.

Deployments can:

  • Make sure the application is always running with the required number of replicas in the cluster.
  • Automatically upgrade applications so new versions can be deployed.
  • Automatically roll back to previous versions in case of failures.
  • Update deployments with rolling upgrades, restarts, or pauses.

Here is an example of using a Deployment to create a scalable nginx service:

apiVersion: apps/v1
kind: Deployment #resource type
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

This example uses the Deployment resource to create a deployment named nginx-deployment. The deployment will create 3 replicas and use a label selector to identify the pods associated with the deployment.

The Pod for this example uses the nginx:latest image and exposes container port 80.

To deploy the example, save the YAML file as nginx-deployment.yaml and run the following command:

kubectl apply -f nginx-deployment.yaml

This command will create a Deployment named nginx-deployment in the cluster and automatically create 3 Pods related to it.

To update the deployed image, you can easily do so by editing the YAML file and running:

kubectl apply -f nginx-deployment.yaml

Kubernetes will automatically deploy the update and gradually upgrade all pods to the new version.
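The pace of such a rolling upgrade can be tuned through the Deployment's update strategy. A minimal sketch of the relevant spec fragment (the maxSurge and maxUnavailable values are illustrative; Kubernetes defaults to 25% for both):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra Pod above the desired replica count during the update
      maxUnavailable: 1  # at most 1 Pod may be unavailable during the update
```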

To sum up, the Deployment resource is a very useful resource type in Kubernetes, which makes the deployment and management of containerized applications easier and more reliable.

Usage scenarios:

  1. Application Deployment: Use Kubernetes Deployment to easily deploy your applications and ensure they always perform the way you want. You can define the number of copies of your application, container images, storage and network settings, and more.

  2. Application upgrades and rollbacks: Use Kubernetes Deployments to easily upgrade and rollback your applications. You can define new versions of your application and control the rate and flow of upgrades. If a rollback is required, the application can be rolled back to an older version with a simple command.

  3. Horizontal Scaling: Use Kubernetes Deployments to automatically scale applications based on load. You can define horizontal scaling rules and let Kubernetes automatically scale your application.

  4. Service discovery and load balancing: Use Kubernetes Deployment to easily define service discovery and load balancing rules for your application. Kubernetes can automatically route traffic to the correct replica sets and keep them healthy at all times.

  5. Multi-environment deployment: Use Kubernetes Deployment to easily deploy applications across different environments such as development, test, and production. You can use different configuration files and policies to define different deployment rules, ensuring that your application always behaves the way you want in different environments.
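For the horizontal-scaling scenario above, scaling can also be automated with a HorizontalPodAutoscaler. A minimal sketch, assuming metrics-server is installed in the cluster (the name nginx-hpa and the 80% CPU target are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler #resource type
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80 # scale out when average CPU usage exceeds 80%
```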

2.2 ReplicaSet

A Kubernetes ReplicaSet is a controller that ensures the number of Pod replicas always stays at the specified count. If the number of replicas falls below the desired count, the ReplicaSet automatically creates new Pods; conversely, if there are more replicas than desired, the extra Pods are deleted.

Here is an example definition of a ReplicaSet:

File name: example-rs.yaml

apiVersion: apps/v1
kind: ReplicaSet #resource type
metadata:
  name: example-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: nginx:latest

To explain this example line by line:

  • apiVersion: apps/v1: uses the v1 version of the apps API group.
  • kind: ReplicaSet: Indicates that we define a ReplicaSet object.
  • metadata: Contains the metadata of the ReplicaSet, such as name, label and comment.
  • name: example-rs: Specifies the name of the ReplicaSet.
  • spec: defines the specification of ReplicaSet, including the number of replicas, selectors and Pod templates.
  • replicas: 3: Specifies that the number of Pod copies we want to run is 3.
  • selector: Specifies the Pod label to be selected.
  • matchLabels: Specifies the matching labels, which are the same as those in the template.
  • template: defines the Pod template to be created.
  • metadata: defines Pod metadata, such as labels.
  • labels: Specifies the Pod label, which is the same as the label in the selector.
  • spec: Specifies the specification of the Pod, such as containers and container images.
  • containers: defines the containers in the Pod.
  • name: example: defines the name of the container.
  • image: nginx:latest: defines the container image.

Create this ReplicaSet:

kubectl apply -f example-rs.yaml

This will create a ReplicaSet called example-rs and ensure that 3 Pods labeled app=example are running. If the number of Pods drops below 3, the ReplicaSet automatically creates new ones.

If you change the Pod template in the ReplicaSet definition, you can re-apply it with the following command:

kubectl apply -f example-rs.yaml

Note, however, that a ReplicaSet does not replace its existing Pods when the template changes; only Pods created afterwards use the new template. For genuine rolling updates, use a Deployment, which manages ReplicaSets and gradually replaces old Pods with new ones.

To delete a ReplicaSet and all its Pods, the following command can be used:

kubectl delete rs example-rs

This command deletes the ReplicaSet named example-rs and all its Pods.

Usage scenarios

The main usage scenarios of ReplicaSet include:

  1. Scale the application: By creating multiple Pod replicas and load balancing between them, the capabilities of the application can be easily scaled.

  2. Ensure high availability: When a Pod replica fails, ReplicaSet will automatically create a new Pod replica to replace it, thus ensuring high availability of the application.

  3. Manage rolling upgrades: rolling upgrades are typically achieved through a Deployment, which creates a new ReplicaSet for each version of the Pod template and gradually shifts replicas from the old set to the new one, minimizing application downtime.

  4. Ensure resource utilization: By controlling the number of pod replicas, you can ensure that resources are not wasted under light load conditions, while sufficient resources are available during peak load periods.

2.3 Pod

In K8s, there are two main types of Pod resources:

  1. Pod resource definition: This is a YAML or JSON file that defines the pod specification. It describes the pod’s container and its related configuration, network, storage and other information.

  2. Pod runtime instance: This is the actual runtime instance created by the Pod resource definition. At runtime, K8s is responsible for managing the life cycle of Pod instances, the running status of containers, and the allocation of network and storage resources.

Pod resource definition example

Here is an example Pod resource definition:

apiVersion: v1
kind: Pod #resource type
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      ports:
        - containerPort: 80
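As noted in 1.3, a Pod can also run multiple containers that share the same network namespace and volumes. A hedged sketch of a two-container Pod (the log-agent sidecar and its command are purely illustrative):

```yaml
apiVersion: v1
kind: Pod #resource type
metadata:
  name: my-pod-with-sidecar
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
    - name: log-agent # illustrative sidecar container; shares the Pod's network and volumes
      image: busybox
      command: ["sh", "-c", "sleep infinity"]
```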

2.4 Service

Kubernetes (K8s) Service is an abstraction mechanism for providing network access to expose a set of Pods as a single network service. The service provides a virtual IP address and a DNS name, allowing clients to access applications in the Kubernetes cluster through these identifiers. Services also allow for load balancing and automatic fault tolerance to ensure that even if one or more pods fail, the service remains available.

In Kubernetes, a Service object is a Kubernetes API object that can be created, managed, and deleted using the Kubernetes API or the kubectl command-line tool.

When creating a Service object, you need to specify a label selector to determine which Pods to use as the backend. Services can also route requests to backend Pods using different load balancing algorithms such as round robin or IP hashing.

The following is a simple Kubernetes Service example:

Create a Deployment:

apiVersion: apps/v1
kind: Deployment #resource type
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Create a Service:

apiVersion: v1
kind: Service #resource type
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80

This example creates a Deployment named nginx-deployment that runs 3 Nginx Pods.

Next, it creates a Service called nginx-service, whose selector app: nginx matches the Pods created by the Deployment.

The Service uses the default ClusterIP type and routes all incoming traffic to port 80.

In this example, targetPort is set to 80 because the backend Pods are also listening on port 80.

Summary:

At this point, the Nginx service can be accessed through the virtual IP address and DNS name inside the Kubernetes cluster.

For example, in a Pod in a Kubernetes cluster, the service can be accessed using the name nginx-service.

Note: The above examples are for reference only, the specific implementation may vary depending on the Kubernetes version and configuration. Please refer to the Kubernetes documentation for more details and best practice recommendations.
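To make the same Pods reachable from outside the cluster without an Ingress, the Service type can be changed. A sketch of a NodePort variant (the name and port 30080 are illustrative; nodePort must fall within the cluster's NodePort range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service #resource type
metadata:
  name: nginx-service-nodeport
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080 # exposed on this port of every node
```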

2.5 Ingress

Kubernetes Ingress is an API object that allows management of HTTP and HTTPS routing within the cluster. Ingress acts as a load balancer, routing traffic to services within the cluster. It can also provide TLS termination and name-based virtual hosting.

In Kubernetes, an Ingress controller is the component that actually implements the Ingress rules. Many different Ingress controllers are available, such as Nginx, Traefik, and Istio, each with different capabilities and trade-offs.

Here is an example using Kubernetes Ingress:

Install Ingress Controller

Different Ingress controllers may be installed in different ways. Here we take Nginx as an example:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml

Create Ingress Rule

Create an Ingress object to define routing rules. The following example will specify that all HTTP requests from the /example path will be routed to the service named example-service:

apiVersion: networking.k8s.io/v1
kind: Ingress #resource type
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

Apply Ingress Rules

Apply the Ingress rule to the Kubernetes cluster with the following command:

kubectl apply -f example-ingress.yaml

Access Services

Services can be accessed using routes defined by Ingress rules.

http://<ingress-controller-address>/example

Summary:

Using Kubernetes Ingress, you can easily set routing rules for services in the cluster and distribute traffic to corresponding services.

At the same time, the selection of the Ingress controller also has a certain impact on performance and functions, and it is necessary to select a suitable controller according to the specific situation.
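As mentioned above, Ingress also supports SSL/TLS termination. A hedged sketch, assuming a kubernetes.io/tls Secret named example-tls has already been created for the host example.com (both names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress #resource type
metadata:
  name: example-ingress-tls
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls # a pre-created kubernetes.io/tls Secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
```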

2.6 ConfigMap

A ConfigMap for Kubernetes (k8s) is a Kubernetes object for storing non-sensitive configuration data. ConfigMap can contain key-value pairs, files or plain text data, and can be used in the container through environment variables, command line parameters or mount paths.

Here is a sample YAML file for a k8s ConfigMap:

apiVersion: v1
kind: ConfigMap #resource type
metadata:
  name: my-configmap
data:
  CONFIG_VAR1: value1
  CONFIG_VAR2: value2
  CONFIG_FILE: |-
    This is the content of a file in the ConfigMap.
    It can be accessed in a container as a volume.

The above example defines a ConfigMap object named my-configmap, which contains three key-value pairs: CONFIG_VAR1, CONFIG_VAR2 and CONFIG_FILE. CONFIG_FILE contains some plain text data, which can be accessed by mounting in the container.

Here is an example YAML file using ConfigMap:

apiVersion: v1
kind: Pod #resource type is pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    env:
    - name: VAR1
      valueFrom:
        configMapKeyRef:
          name: my-configmap
          key: CONFIG_VAR1
    - name: VAR2
      valueFrom:
        configMapKeyRef:
          name: my-configmap
          key: CONFIG_VAR2
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
      readOnly: true
  volumes:
  - name: config-volume
    configMap:
      name: my-configmap

The above example defines a Pod object named my-pod that contains a container named my-container .

The container reads the CONFIG_VAR1 and CONFIG_VAR2 values from my-configmap as environment variables, and the ConfigMap (including CONFIG_FILE) is mounted into the container's /etc/config directory as files.

In general, ConfigMap is a very useful Kubernetes object that can be used to store configuration data and pass it to the container for use at runtime.
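Instead of referencing keys one by one with configMapKeyRef, all keys of a ConfigMap can be imported at once with envFrom. A minimal sketch (the Pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod #resource type is pod
metadata:
  name: my-pod-envfrom
spec:
  containers:
  - name: my-container
    image: nginx
    envFrom:
    - configMapRef:
        name: my-configmap # every key in the ConfigMap becomes an environment variable
```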

3 Summary

Summary of usage scenarios

The following are some business scenarios suitable for using stateless services:

  1. Web Applications: Web applications are usually stateless and their behavior depends only on user requests and input. Stateless services can help you easily scale and improve the reliability of your web applications.

  2. Batch jobs: Batch jobs are usually independent tasks whose execution does not require storing state information. Stateless services can help you run large-scale batch jobs quickly.

  3. Computationally Intensive Applications: Computationally intensive applications typically require significant computing resources to process data. Stateless services can help you quickly scale your application to handle more computing tasks.

  4. Microservice Architecture: In a microservice architecture, services are usually designed to be stateless. This is because stateless services can be scaled more easily, and it is easier to achieve high availability and load balancing.

  5. Media streaming services: Media streaming services are usually stateless because they do not need to store state information. Stateless services can help you quickly scale media streaming services to handle more requests.

Summary of the difference between ReplicaSet and Deployment:

Kubernetes (k8s) ReplicaSet and Deployment are two commonly used Kubernetes resource objects, both of which are used to manage the creation, scaling, and updating of Pods.

Usage scenarios of Deployment and ReplicaSet:

  • Deployment: The Deployment resource object is used to manage application version updates. When you need to release a new version of your application, you can use the Deployment object to create a new ReplicaSet, and Kubernetes will gradually shift Pods from the old ReplicaSet to the new one. In the event of a failure, the Deployment can automatically roll back to the last available version.
  • ReplicaSet: The ReplicaSet resource object is used to manage the number of Pods. If you need to scale up or down the number of Pods, you can use a ReplicaSet object to define the desired number of replicas and let Kubernetes automatically create or delete Pods to reach the number of replicas you define.

The difference between Deployment and ReplicaSet:

  • Deployment is the upper controller of ReplicaSet. It adds more update, rollback and version management functions on the basis of ReplicaSet. Therefore, Deployment is more suitable for application version management.
  • ReplicaSet is responsible for ensuring that the number of Pods reaches the specified number of replicas, but it does not care about the version number of Pods. Therefore, ReplicaSet is more suitable for controlling the number of Pods of the same version to ensure availability and fault tolerance.
