Table of Contents
1. Introduction to YAML and JSON
   1. Introduction to the YAML language
   2. File formats supported by k8s
2. Declarative object management
   1. Detailed explanation of the deployment.yaml file
   2. Detailed explanation of the Pod YAML file
   3. Detailed explanation of the Service YAML file
3. Writing the resource configuration list
   1. Write the YAML file
   2. Create and view Pod resources
   3. Create a Service to provide external access and test it
   4. Create resources and view Services
   5. Browser access test
4. Trial run and format
   1. --dry-run: trial run
   2. View the generated YAML format
   3. View the generated JSON format
   4. Export the generated template in YAML format
   5. Export a template from an existing resource
1. Introduction to YAML and JSON
1. Introduction to the YAML language
YAML is a data format in the same family as XML and JSON. It is data-centric rather than markup-centric, and its syntax is deliberately simple; it is known as "a human-friendly data format language".
YAML syntax format:
- Case sensitive
- Indentation indicates hierarchical relationships
- Tabs are not allowed for indentation; only spaces may be used (a restriction of older versions)
- The number of spaces used for indentation does not matter, as long as elements at the same level are left-aligned
- "#" indicates a comment
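As a quick illustration of these rules, the hypothetical fragment below uses two-space indentation, left-aligned sibling keys, and a comment (all names and values here are made up for illustration):

```yaml
# Hypothetical example: "host" and "port" are siblings, so they are left-aligned
server:
  host: 127.0.0.1   # indentation (spaces only) marks "host" as a child of "server"
  port: 8080
  tags:             # a list; each item starts with "- "
    - web
    - demo
```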
2. File formats supported by k8s
Kubernetes supports managing resource objects in YAML and JSON formats.
- JSON format: mainly used for message transmission between API interfaces
- YAML format: used for configuration and management. YAML is a concise non-markup language with a user-friendly format that is easier to read.
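To see why YAML is preferred for hand-written configuration, compare the same (hypothetical) label object in both formats; the YAML form drops the braces and quotes:

```yaml
# YAML form
metadata:
  labels:
    app: nginx
```

```json
{ "metadata": { "labels": { "app": "nginx" } } }
```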
2. Declarative Object Management
```shell
# View the available API resource versions
kubectl api-versions
# For business workloads, apps/v1 is generally preferred.
# Versions containing "beta" are test versions and are not used in production environments.
```
1. Detailed explanation of deployment.yaml file
```yaml
apiVersion: extensions/v1beta1   # API version
kind: Deployment                 # Resource type
metadata:
  name: cango-demo               # Deployment name
  namespace: cango-prd           # Namespace
  labels:
    app: cango-demo              # Label
spec:
  replicas: 3
  strategy:
    rollingUpdate:               # With replicas: 3, the number of Pods stays between 2 and 4 during the upgrade
      maxSurge: 1                # At most 1 extra Pod is started first during a rolling upgrade
      maxUnavailable: 1          # Maximum number of unavailable Pods allowed during a rolling upgrade
  template:
    metadata:
      labels:
        app: cango-demo          # Template label, required
    spec:                        # Container template definition; may contain multiple containers
      containers:
        - name: cango-demo       # Container name
          image: swr.cn-east-2.myhuaweicloud.com/cango-prd/cango-demo:0.0.1-SNAPSHOT  # Image address
          command: [ "/bin/sh", "-c", "cat /etc/config/path/to/special-key" ]  # Startup command
          args:                  # Startup arguments
            - '-storage.local.retention=$(STORAGE_RETENTION)'
            - '-storage.local.memory-chunks=$(STORAGE_MEMORY_CHUNKS)'
            - '-config.file=/etc/prometheus/prometheus.yml'
            - '-alertmanager.url=http://alertmanager:9093/alertmanager'
            - '-web.external-url=$(EXTERNAL_URL)'
          # If neither command nor args is set, the image's default configuration is used.
          # If command is set but args is not, the image defaults are ignored and only the command
          # from the YAML file (without any arguments) is executed.
          # If args is set but command is not, the image's default ENTRYPOINT is executed, with the
          # args from the YAML as its arguments.
          # If both command and args are set, the image defaults are ignored and the YAML configuration is used.
          imagePullPolicy: IfNotPresent  # Pull the image only if it is not present locally
          livenessProbe:
            # Indicates whether the container is alive. If the probe fails, the kubelet is notified that
            # the container is unhealthy, kills it, and acts according to restartPolicy. Before the first
            # probe the state defaults to Success; a container without a livenessProbe is also considered Success.
            httpGet:
              path: /health      # Use / if there is no health-check endpoint
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60  # Delay after startup before the first probe
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /health      # Use / if there is no health-check endpoint
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30  # Delay after startup before the first probe
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          resources:             # CPU and memory limits
            requests:
              cpu: 2
              memory: 2048Mi
            limits:
              cpu: 2
              memory: 2048Mi
          env:                   # Pass custom environment variables into the container
            - name: LOCAL_KEY    # Local key
              value: value
            - name: CONFIG_MAP_KEY  # The value can come from a ConfigMap key
              valueFrom:
                configMapKeyRef:
                  name: special-config  # ConfigMap name
                  key: special.type     # Key under data in special-config
          ports:
            - name: http
              containerPort: 8080  # Port exposed to the Service
          volumeMounts:          # Mount the volumes defined in volumes below
            - name: log-cache
              mountPath: /tmp/log
            - name: sdb          # Typical usage: the volume follows the container's life cycle and mounts a directory
              mountPath: /data/media
            - name: nfs-client-root  # Mount a disk directly, e.g. mount the NFS directory below to /mnt/nfs
              mountPath: /mnt/nfs
            - name: example-volume-config
              # Advanced usage 1: mount the log-script and backup-script keys of a ConfigMap to relative
              # paths path/to/... under /etc/config; existing files with the same name are overwritten.
              mountPath: /etc/config
            - name: rbd-pvc      # Advanced usage 2: mount a PVC (PersistentVolumeClaim)
      # Mounting a ConfigMap as a volume exposes it as files or a directory: each key-value pair
      # generates a file, where the key is the file name and the value is the content.
      volumes:                   # Define the volumes referenced by volumeMounts above
        - name: log-cache
          emptyDir: {}
        - name: sdb              # Mount a directory on the host
          hostPath:
            path: /any/path/it/will/be/replaced
        - name: example-volume-config  # Used to place ConfigMap file content at the specified paths
          configMap:
            name: example-volume-config  # ConfigMap name
            items:
              - key: log-script          # Key in the ConfigMap
                path: path/to/log-script # Relative path path/to/log-script within the mount directory
              - key: backup-script       # Key in the ConfigMap
                path: path/to/backup-script  # Relative path path/to/backup-script within the mount directory
        - name: nfs-client-root  # For mounting NFS storage
          nfs:
            server: 10.42.0.55   # NFS server address
            path: /opt/public    # Check the exported path with showmount -e
        - name: rbd-pvc          # Mount a PVC disk
          persistentVolumeClaim:
            claimName: rbd-pvc1  # Name of the requested PVC
```
2. Detailed explanation of Pod yaml file
```yaml
apiVersion: v1            # Required; API version, e.g. v1
kind: Pod                 # Required; resource type, Pod
metadata:                 # Required; metadata
  name: string            # Required; Pod name
  namespace: string       # Required; the namespace the Pod belongs to
  labels:                 # Custom labels
    - name: string        # Custom label name
  annotations:            # Custom annotation list
    - name: string
spec:                     # Required; detailed definition of the containers in the Pod
  containers:             # Required; list of containers in the Pod
    - name: string        # Required; container name
      image: string       # Required; container image name
      imagePullPolicy: [Always | Never | IfNotPresent]
        # Image pull policy. Always: always download the image; IfNotPresent: prefer the local
        # image and download it if absent; Never: only use the local image
      command: [string]   # Container startup command list; if unset, the command baked into the image is used
      args: [string]      # Container startup command arguments
      workingDir: string  # The container's working directory
      volumeMounts:       # Volumes mounted inside the container
        - name: string    # Name of a shared volume defined in the Pod's volumes[] section
          mountPath: string  # Absolute mount path inside the container; should be fewer than 512 characters
          readOnly: boolean  # Whether the mount is read-only
      ports:              # List of ports to expose
        - name: string    # Port name
          containerPort: int  # Port the container listens on
          hostPort: int   # Port the container's host listens on; defaults to the same as containerPort
          protocol: string  # Port protocol, TCP or UDP; defaults to TCP
      env:                # Environment variables to set before the container runs
        - name: string    # Environment variable name
          value: string   # Environment variable value
      resources:          # Resource limits and requests
        limits:           # Resource limits
          cpu: string     # CPU limit in cores; maps to the docker run --cpu-shares parameter
          memory: string  # Memory limit in Mi/Gi; maps to the docker run --memory parameter
        requests:         # Resource requests
          cpu: string     # CPU request; the initial amount available at container startup
          memory: string  # Memory request; the initial amount available at container startup
      livenessProbe:
        # Health check for a container in the Pod. When the probe fails repeatedly, the container is
        # automatically restarted. The probe methods are exec, httpGet and tcpSocket; set only one per container.
        exec:             # exec probe
          command: [string]  # Command or script to run in exec mode
        httpGet:          # httpGet probe; path and port must be specified
          path: string
          port: number
          host: string
          scheme: string
          httpHeaders:
            - name: string
              value: string
        tcpSocket:        # tcpSocket probe
          port: number
        initialDelaySeconds: 0  # Delay in seconds after the container starts before the first probe
        timeoutSeconds: 0  # Probe timeout in seconds; defaults to 1 second
        periodSeconds: 0   # Probe interval in seconds; defaults to once every 10 seconds
        successThreshold: 0
        failureThreshold: 0
      securityContext:
        privileged: false
  restartPolicy: [Always | Never | OnFailure]
    # The Pod's restart policy. Always: the kubelet restarts the Pod however it terminates;
    # OnFailure: restart only if the Pod exits with a non-zero exit code; Never: never restart.
  nodeSelector: object    # Schedule the Pod onto nodes carrying these labels, in key: value form
  imagePullSecrets:       # Secret names used when pulling images, in name: string form
    - name: string
  hostNetwork: false      # Whether to use the host network; defaults to false
  volumes:                # List of shared volumes defined on this Pod
    - name: string        # Shared volume name (there are many volume types)
      emptyDir: {}        # emptyDir volume: a temporary directory sharing the Pod's life cycle; an empty value
      hostPath:           # hostPath volume: mounts a directory from the Pod's host
        path: string      # Directory on the host, used as the mount directory
      secret:             # secret volume: mounts a Secret object defined in the cluster into the container
        secretName: string
        items:
          - key: string
            path: string
      configMap:          # configMap volume: mounts a predefined ConfigMap object into the container
        name: string
        items:
          - key: string
```
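Filling in just the required fields of the template above yields a minimal working Pod manifest. The name, image tag, and command below are assumed values for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test          # assumed Pod name
spec:
  containers:
    - name: busybox
      image: busybox:1.36     # assumed image tag
      command: ["sleep", "3600"]  # keep the container running for an hour
  restartPolicy: Never
```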
3. Detailed explanation of Service yaml file
```yaml
apiVersion: v1
kind: Service
metadata:                 # Metadata
  name: string            # Service name
  namespace: string       # Namespace
  labels:                 # Custom label list
    - name: string
  annotations:            # Custom annotation list
    - name: string
spec:                     # Detailed description
  selector: []            # Label selector; Pods carrying these labels are selected as the management scope
  type: string            # Service type, which specifies how the Service is accessed; defaults to ClusterIP
  clusterIP: string       # Virtual service address
  sessionAffinity: string # Whether session affinity is supported
  ports:                  # List of ports the Service exposes
    - name: string        # Port name
      protocol: string    # Port protocol, TCP or UDP; defaults to TCP
      port: int           # Port the Service listens on
      targetPort: int     # Port on the backend Pod to forward to
      nodePort: int       # When type=NodePort, the port mapped on the physical machine
status:                   # When spec.type=LoadBalancer, the address of the external load balancer
  loadBalancer:           # External load balancer
    ingress:
      ip: string          # IP address of the external load balancer
      hostname: string    # Hostname of the external load balancer
```
3. Writing the resource configuration list
1. Write yaml file
vim nginx-deployment.yaml

```yaml
apiVersion: apps/v1        # API version label
kind: Deployment           # Resource type/role; can be Deployment, Job, Ingress, Service, etc. Deployment is the replica controller.
metadata:                  # Resource metadata: name, namespace, labels, and so on
  name: nginx-deployment   # Resource name; must be unique within the namespace
  labels:                  # Deployment labels
    app: nginx
spec:                      # Attributes required by the Deployment, e.g. whether to restart a failed container
  replicas: 3              # Number of replicas
  selector:                # Label selector
    matchLabels:           # Labels to match
      app: nginx           # Must be consistent with .spec.template.metadata.labels
  template:                # Pod template; with multiple replicas, every replica is created from this template
    metadata:
      labels:              # Labels the Pod replicas use; must be consistent with .spec.selector.matchLabels
        app: nginx
    spec:
      containers:          # Container definitions
        - name: nginx      # Container name; each "- name:" entry defines one container
          image: nginx:1.15.4   # Image and version used by the container
          ports:
            - containerPort: 80 # Port the container exposes
```

The same file without comments:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15.4
          ports:
            - containerPort: 80
```
2. Create and view pod resources
```shell
# Create the resource object
kubectl apply -f nginx-deployment.yaml

# View the created Pod resources
kubectl get pods -o wide
```
3. Create a Service to provide external access and test it
vim nginx-service.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
```
4. Create resources and view services
```shell
kubectl apply -f nginx-service.yaml

# View the created Service
kubectl get svc
```
5. Browser access test
Enter nodeIP:nodePort in the browser, e.g. http://192.168.247.10:31562
Detailed explanation of ports in k8s:
- port: the port used to access the Service from within the k8s cluster; the Service can be reached from the node where the Pod runs via clusterIP:port.
- nodePort: the port for accessing the Service from outside the k8s cluster; the Service can be reached externally via nodeIP:nodePort.
- targetPort: the Pod's port. Traffic arriving at port or nodePort is forwarded by kube-proxy's reverse-proxy load balancing to the backend Pod's targetPort and finally enters the container.
- containerPort: the port of the container inside the Pod; targetPort maps to containerPort.
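Putting the four ports together, a hypothetical NodePort Service spec might wire them up like this (all port numbers here are assumed values; nodePort must fall in the cluster's NodePort range, 30000-32767 by default):

```yaml
# Fragment of a Service spec, showing the traffic path:
# client -> nodeIP:30080 (or clusterIP:80) -> kube-proxy -> Pod port 8080
spec:
  type: NodePort
  ports:
    - port: 80          # clusterIP:80 inside the cluster
      targetPort: 8080  # forwarded to the backend Pod's port 8080
      nodePort: 30080   # nodeIP:30080 from outside the cluster
# The matching Pod spec declares containerPort: 8080, which targetPort maps to.
```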
4. Trial run and format
1. --dry-run: trial run
- --dry-run means trial run: the command is not actually executed (it is used to test whether a command is correct), so no Pod or Deployment instance is actually created. Remove this parameter to actually run the command.
```shell
kubectl run nginx-test --image=nginx --port=80 --dry-run=client
kubectl create deployment nginx-deploy --image=nginx --port=80 --replicas=3 --dry-run=client
```
2. View the generated yaml format
- Combining --dry-run with -o yaml test-runs the command without creating anything and prints the YAML resource configuration list it would generate.

```shell
# Test-run a Pod and display its configuration in YAML format
kubectl run nginx-test --image=nginx --port=80 --dry-run=client -o yaml
```
3. View the generated json format
- Use -o json to view the JSON configuration list the command would generate.

```shell
# Test-run a Pod and display its configuration in JSON format
kubectl run nginx-test --image=nginx --port=80 --dry-run=client -o json
```
4. Export the generated template using yaml format
```shell
# Test-run the Pod, output its configuration in YAML format, and redirect it into the specified file
kubectl run nginx-test --image=nginx --port=80 --dry-run=client -o yaml > nginx-test.yaml
```
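For reference, with a recent kubectl the exported nginx-test.yaml looks roughly like the sketch below; exact fields can vary by kubectl version:

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-test
  name: nginx-test
spec:
  containers:
  - image: nginx
    name: nginx-test
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```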
5. Export a template from an existing resource

```shell
# Generate a template
kubectl get svc nginx-service -o yaml

# Generate a template and export it
kubectl get svc nginx-service -o yaml > my-svc.yaml
```
To view help information for a field, you can drill down through the help for related resource objects layer by layer:

```shell
kubectl explain deployments.spec.template.spec.containers
# or
kubectl explain pods.spec.containers
```
Q: How do you write a YAML file?
1. Generate it with the --dry-run option.
2. Export it from an existing resource with the get command.