Kubernetes yaml file

Table of Contents

yaml file

Detailed explanation of Pod yaml file

Detailed explanation of deployment.yaml file

Detailed explanation of Service yaml file


Documents

Kubernetes supports both YAML and JSON formats for managing resource objects
JSON format: mainly used for message transmission between API interfaces
YAML format: used for configuration and management. YAML (a recursive acronym for "YAML Ain't Markup Language") is concise, human-friendly, and easy to read.
 
YAML syntax format:
●Case sensitive
●Indentation indicates hierarchical relationships
●Tabs are not allowed for indentation; only spaces may be used
●The exact number of indented spaces does not matter, as long as elements at the same level are left-aligned; two spaces per level is the usual convention
●Put a space after symbols such as the colon, comma, and dash (-)
●"---" marks the start of a YAML document and separates multiple documents within one file
●"#" starts a comment
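These rules can all be seen in one minimal fragment (the names are illustrative only):

```yaml
---               # document start; separates multiple documents in one file
apiVersion: v1    # key: value, with one space after the colon
kind: Pod         # YAML is case sensitive: "Kind" would be rejected
metadata:
  name: demo      # two-space indentation (spaces only, never tabs) marks the hierarchy
  labels:
    app: demo     # elements at the same level are left-aligned
spec:
  containers:
  - name: demo    # "-" introduces a list item, followed by one space
    image: nginx
```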
 
//View API resource version labels
kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1 #For business workloads, apps/v1 is generally preferred
apps/v1beta1 #"beta" marks a test version; do not use it in production
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
 
 
//Write a yaml file demo
mkdir /opt/demo
cd /opt/demo/
 
vim nginx-deployment.yaml
apiVersion: apps/v1 #Specify the api version label
kind: Deployment #Define the type/role of the resource. Deployment is a controller that manages replicated Pods. The resource type here can be Deployment, Job, Ingress, Service, etc.
metadata: #Define metadata information of resources, such as resource name, namespace, tags and other information
  name: nginx-deployment #Define the name of the resource, which must be unique in the same namespace
  labels: #Define Deployment resource labels
    app: nginx
spec: #Define the parameter attributes required by the deployment resource, such as whether to restart the container when the container fails.
  replicas: 3 #Define the number of replicas
  selector: #Define label selector
    matchLabels: #Define matching labels
      app: nginx #Need to be consistent with the labels defined by .spec.template.metadata.labels
  template: #Define the business template. If there are multiple replicas, all replicas share the attributes configured in this template.
    metadata:
      labels: #Define the labels used by the Pod replicas; must be consistent with the labels defined in .spec.selector.matchLabels
        app: nginx
    spec:
      containers: #Define container properties
      - name: nginx #Container name; each "- name:" entry defines one container
        image: nginx:1.15.4 #Define the image and version used by the container
        ports:
        - containerPort: 80 #Port the container exposes
 
//Create resource object
kubectl create -f nginx-deployment.yaml
 
//View the created pod resources
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-deployment-d55b94fd-29qk2 1/1 Running 0 7m9s 172.17.36.4 192.168.80.12 <none>
nginx-deployment-d55b94fd-9j42r 1/1 Running 0 7m9s 172.17.36.3 192.168.80.12 <none>
nginx-deployment-d55b94fd-ksl6l 1/1 Running 0 7m9s 172.17.26.3 192.168.80.11 <none>
 
 
//Create a Service to provide external access, then test it
vim nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
 
//Create resource object
kubectl create -f nginx-service.yaml
 
//View the created service
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 16d
nginx-service NodePort 10.0.0.119 <none> 80:35680/TCP 14s
 
//Enter nodeIP:nodePort in the browser to access
http://192.168.80.11:35680
http://192.168.80.12:35680
 
------------------------------------------------------------------------------------------
Detailed explanation of ports in k8s:
●port
port is the port for accessing the service from within the k8s cluster; that is, the service can be reached from nodes inside the cluster via clusterIP:port
 
●nodePort
nodePort is the port for accessing the service from outside the k8s cluster; the service can be reached externally via nodeIP:nodePort
 
●targetPort
targetPort is the port on the Pod. Traffic arriving at port or nodePort is load-balanced by the kube-proxy reverse proxy and forwarded to the targetPort of a backend Pod, finally entering the container.
 
●containerPort
containerPort is the port of the container inside the Pod, and targetPort is mapped to containerPort.
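As a hedged illustration, a NodePort Service ties the four port types together (the name and numbers below are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc        # hypothetical example name
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 8080         # in-cluster access: clusterIP:8080
    targetPort: 80     # kube-proxy forwards traffic to the Pod's port 80
    nodePort: 30080    # external access: nodeIP:30080 (default range 30000-32767)
```

Here the backend Pod's container would declare containerPort: 80, which targetPort maps onto.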
------------------------------------------------------------------------------------------
 
//kubectl run --dry-run=client prints the corresponding API object without creating it
kubectl run nginx-test --image=nginx --port=80 --dry-run=client
kubectl create deployment nginx-deploy --image=nginx --port=80 --replicas=3 --dry-run=client
 
//View the generated yaml format
kubectl run nginx-test --image=nginx --port=80 --dry-run=client -o yaml
kubectl create deployment nginx-deploy --image=nginx --port=80 --replicas=3 --dry-run=client -o yaml
 
//View the generated json format
kubectl run nginx-test --image=nginx --port=80 --dry-run=client -o json
kubectl create deployment nginx-deploy --image=nginx --port=80 --replicas=3 --dry-run=client -o json
 
//Export the generated template in yaml format, then edit it and delete unnecessary parameters
kubectl run nginx-test --image=nginx --port=80 --dry-run=client -o yaml > nginx-test.yaml
kubectl create deployment nginx-deploy --image=nginx --port=80 --replicas=3 --dry-run=client -o yaml > nginx-deploy.yaml
 
vim nginx-test.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null #Delete
  labels:
    run: nginx-test
  name: nginx-test
spec:
  containers:
  - image: nginx
    name: nginx-test
    ports:
    - containerPort: 80
    resources: {} #Delete
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {} #Delete
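After deleting the three fields marked #Delete, the trimmed template would look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx-test
  name: nginx-test
spec:
  containers:
  - name: nginx-test
    image: nginx
    ports:
    - containerPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
```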
 
 
//Export the existing resource generation template
kubectl get svc nginx-service -o yaml
 
//Save to file
kubectl get svc nginx-service -o yaml > my-svc.yaml
 
//View help information for fields; you can drill into related resource objects layer by layer
kubectl explain deployments.spec.template.spec.containers
or
kubectl explain pods.spec.containers
 
 
//What should I do if I am too tired to write yaml?
●Use the --dry-run command to generate
kubectl run my-deploy --image=nginx --dry-run=client -o yaml > my-deploy.yaml
 
●Export using get command
kubectl get svc nginx-service -o yaml > my-svc.yaml
or
kubectl edit svc nginx-service #Copy the configuration and paste it into a new file
 
//How to learn yaml files:
(1) Read plenty of yaml written by others (especially official examples) until you can understand it
(2) Be able to adapt existing examples by referring to documentation on hand
(3) When you hit something you don't understand, look it up with the kubectl explain ... command
 

Detailed explanation of Pod yaml file

 
apiVersion: v1 #Required, version number, such as v1
kind: Pod #Required, Pod
metadata: #Required, metadata
  name: string #Required, Pod name
  namespace: string #The namespace the Pod belongs to; defaults to "default" if omitted
  labels: #Custom labels
    key: string #label key-value pairs
  annotations: #Custom annotations
    key: string
spec: #Required, detailed definition of the container in the Pod
  containers: #Required, list of containers in the Pod
  - name: string #Required, container name
    image: string #Required, the image name of the container
    imagePullPolicy: [Always | Never | IfNotPresent] #Image pull policy: Always means always pull the image, IfNotPresent means use a local image first and pull only if none exists, Never means use only the local image
    command: [string] #Container startup command list. If not specified, the image's default startup command is used.
    args: [string] # Container startup command parameter list
    workingDir: string #The working directory of the container
    volumeMounts: #Storage volume configuration mounted inside the container
    - name: string #Refer to the name of the shared storage volume defined by the pod. You need to use the volume name defined in the volumes[] part.
      mountPath: string #The absolute path of the storage volume to be mounted in the container, which should be less than 512 characters.
      readOnly: boolean #Whether it is read-only mode
    ports: #List of ports to expose
    - name: string #Port number name
      containerPort: int #The port number that the container needs to listen to
      hostPort: int #Port the container's host must listen on; defaults to the same as containerPort
      protocol: string #Port protocol, supports TCP and UDP, defaults to TCP
    env: #List of environment variables that need to be set before the container is run
    - name: string #Environment variable name
      value: string #The value of the environment variable
    resources: #Resource limits and request settings
      limits: #Settings of resource limits
        cpu: string #CPU limit, in cores; maps to the docker run --cpu-shares parameter
        memory: string #Memory limit; the unit can be MiB/GiB; maps to the docker run --memory parameter
      requests: #Resource request settings
        cpu: string #CPU request; the initial amount available at container startup
        memory: string #Memory request; the initial amount available at container startup
    livenessProbe: #Health check for a container in the Pod. When a probe fails several times in a row, the container is restarted automatically. Check methods include exec, httpGet, and tcpSocket; set only one of them per container.
      exec: #Set the inspection mode in the Pod container to exec mode
        command: [string] #Command or script to run in exec mode
      httpGet: #Set the health check method to httpGet; path and port must be specified
        path: string
        port: number
        host: string
        scheme: string
        HttpHeaders:
        - name: string
          value: string
      tcpSocket: #Set the health check method to tcpSocket
        port: number
      initialDelaySeconds: 0 #Delay before the first probe after the container starts, in seconds
      timeoutSeconds: 0 #Timeout waiting for a probe response, in seconds, default 1 second
      periodSeconds: 0 #Interval between periodic probes, in seconds, default once every 10 seconds
      successThreshold: 0
      failureThreshold: 0
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure] #Pod restart policy. Always: kubelet restarts the container whenever it terminates. OnFailure: restart only when the Pod exits with a non-zero exit code. Never: never restart the Pod.
  nodeSelector: object #Schedule the Pod onto nodes carrying this label, specified in key: value format
  imagePullSecrets: #Secret names used when pulling the image
  - name: string
  hostNetwork: false #Whether to use host network mode; defaults to false, true means use the host network
  volumes: #Define the list of shared storage volumes on this Pod
  - name: string #Shared storage volume name (there are many volume types)
    emptyDir: {} #emptyDir volume: a temporary directory sharing the Pod's lifecycle; the value is empty
    hostPath: #hostPath volume: mounts a directory from the host the Pod runs on
      path: string #Directory on the host where the Pod is located, used as the mount directory
    secret: #secret volume: mounts a predefined secret object into the container
      secretName: string
      items:
      - key: string
        path: string
    configMap: #configMap volume: mounts a predefined configMap object into the container
      name: string
      items:
      - key: string
        path: string
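As a sketch, here is a concrete minimal Pod assembled from a few of the fields above (all names and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-demo
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.15.4
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 250m        # a quarter core to start with
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
    livenessProbe:       # restart the container if it stops answering on port 80
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
  restartPolicy: Always
```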

Detailed explanation of deployment.yaml file

apiVersion: extensions/v1beta1 #Interface version (deprecated; prefer apps/v1 on current clusters)
kind: Deployment #Interface type
metadata:
  name: cango-demo #Deployment name
  namespace: cango-prd #namespace
  labels:
    app: cango-demo #tag
spec:
  replicas: 3
  strategy:
    rollingUpdate: ##With replicas: 3, the number of Pods stays between 2 and 4 during the whole upgrade
      maxSurge: 1 #At most 1 extra Pod is started during a rolling upgrade
      maxUnavailable: 1 #Maximum number of unavailable Pods allowed during a rolling upgrade
  template:
    metadata:
      labels:
        app: cango-demo #Template name required
    spec: #Define the container template; it can contain multiple containers
      containers:
        - name: cango-demo #Container name
          image: swr.cn-east-2.myhuaweicloud.com/cango-prd/cango-demo:0.0.1-SNAPSHOT #Image address
          command: [ "/bin/sh","-c","cat /etc/config/path/to/special-key" ] #Start command
          args: #Startup parameters
            - '-storage.local.retention=$(STORAGE_RETENTION)'
            - '-storage.local.memory-chunks=$(STORAGE_MEMORY_CHUNKS)'
            - '-config.file=/etc/prometheus/prometheus.yml'
            - '-alertmanager.url=http://alertmanager:9093/alertmanager'
            - '-web.external-url=$(EXTERNAL_URL)'
    #If neither command nor args is set, the image's default configuration (ENTRYPOINT and CMD) is used.
    #If command is set but args is not, the image defaults are ignored and only the command from the .yaml file (with no parameters) is executed.
    #If args is set but command is not, the image's default ENTRYPOINT is executed, with the args from the .yaml file as its parameters.
    #If both command and args are set, the image defaults are ignored and the .yaml configuration is used.
          imagePullPolicy: IfNotPresent #Pull if it does not exist
          livenessProbe: #Indicates whether the container is live. If the livenessProbe fails, kubelet is notified that the container is unhealthy, kills it, and takes further action according to restartPolicy. Before the first probe, livenessProbe defaults to Success; a container that defines no livenessProbe is also treated as Success.
            httpGet:
              path: /health #Use / if there is no dedicated health check endpoint
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60 ##How long to delay after startup to start running detection
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /health #Use / if there is no dedicated health check endpoint
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30 ##How long to delay after startup to start running detection
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          resources: ##CPU memory limit
            requests:
              cpu: 2
              memory: 2048Mi
            limits:
              cpu: 2
              memory: 2048Mi
          env: ## Pass environment variables directly into the container
            - name: LOCAL_KEY #Local Key
              value: value
            - name: CONFIG_MAP_KEY #Take the value from a key in a configMap
              valueFrom:
                configMapKeyRef:
                  name: special-config #Look up the configMap named special-config
                  key: special.type #Use the key special.type under data in special-config
          ports:
            - name: http
              containerPort: 8080 #Expose port to service
          volumeMounts: #Mount disks defined in volumes
          - name: log-cache
            mountPath: /tmp/log
          - name: sdb #Normal usage, the volume is destroyed following the container and a directory is mounted.
            mountPath: /data/media
          - name: nfs-client-root #How to directly mount the hard disk, such as mounting the following nfs directory to /mnt/nfs
            mountPath: /mnt/nfs
          - name: example-volume-config #Advanced usage 1: mount the ConfigMap's log-script and backup-script entries to relative paths path/to/... under /etc/config; existing files with the same name are overwritten
            mountPath: /etc/config
          - name: rbd-pvc #Advanced usage 2: mount a PVC (PersistentVolumeClaim)
 
#Use volume to mount ConfigMap directly as a file or directory. Each key-value pair will generate a file, where key is the file name and value is the content.
  volumes: # Define the disk to be mounted by volumeMounts above
  - name: log-cache
    emptyDir: {}
  - name: sdb #Mount the directory on the host
    hostPath:
      path: /any/path/it/will/be/replaced
  - name: example-volume-config # Used for ConfigMap file content to the specified path
    configMap:
      name: example-volume-config #ConfigMap name
      items:
      - key: log-script #Key in ConfigMap
        path: path/to/log-script #Specify a relative path path/to/log-script in the directory
      - key: backup-script #Key in ConfigMap
        path: path/to/backup-script #Specify a relative path path/to/backup-script in the directory
  - name: nfs-client-root #For mounting NFS storage type
    nfs:
      server: 10.42.0.55 #NFS server address
      path: /opt/public #Check the exported path with showmount -e
  - name: rbd-pvc #Mount PVC disk
    persistentVolumeClaim:
      claimName: rbd-pvc1 #Mount the applied pvc disk
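The four command/args rules commented in the template above can be sketched as follows; the image and its ENTRYPOINT/CMD are hypothetical:

```yaml
# Suppose demo-image was built with:
#   ENTRYPOINT ["echo", "hello"]
#   CMD ["world"]
containers:
- name: demo
  image: demo-image
  # neither set       -> container runs: echo hello world
  # only command set  -> runs the command alone; ENTRYPOINT and CMD are both ignored
  # only args set     -> runs: echo hello k8s (ENTRYPOINT kept, CMD replaced by args)
  command: ["printf", "%s"]   # both set: image defaults ignored
  args: ["k8s"]               # container runs: printf %s k8s
```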

Detailed explanation of Service yaml file

apiVersion: v1
kind: Service
metadata: #Metadata
  name: string #Service name
  namespace: string #Namespace
  labels: #Custom label attribute list
    key: string
  annotations: #Custom annotation attribute list
    key: string
spec: #Detailed description
  selector: {} #Label selector configuration; Pods carrying the matching labels are selected into this Service's management scope
  type: string #Service type, specifies how the Service is accessed; defaults to ClusterIP
  clusterIP: string #Virtual service address
  sessionAffinity: string #Session affinity setting; supports ClientIP and None
  ports: #service needs to expose the port list
  - name: string #Port name
    protocol: string #Port protocol, supports TCP and UDP, defaults to TCP
    port: int #The port number that the service listens to
    targetPort: int #The port number that needs to be forwarded to the backend Pod
    nodePort: int #When type = NodePort, specify the port number mapped to the physical machine
status: #When spec.type=LoadBalancer, sets the address of the external load balancer
  loadBalancer: #External load balancer
    ingress:
      ip: string #IP address of the external load balancer
      hostname: string #Hostname of the external load balancer
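Filling the template in, a possible NodePort Service with session affinity might look like this (all names and numbers are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
  namespace: default
  labels:
    app: web
spec:
  type: NodePort
  selector:
    app: web
  sessionAffinity: ClientIP   # keep requests from one client on the same Pod
  ports:
  - name: http
    protocol: TCP
    port: 80          # clusterIP:80 inside the cluster
    targetPort: 8080  # the backend Pod's port
    nodePort: 30080   # nodeIP:30080 from outside
```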
