Kubernetes (k8s): allocating memory resources for containers and Pods

This guide shows how to assign a memory request and a memory limit to a container. Kubernetes guarantees that a container gets the amount of memory it requests, but does not allow it to use more than its limit.

Create a new namespace

kubectl create namespace mem-example

Specify memory requests and limits

Edit the yaml file

#Create a Pod with a container
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
#The container requests 100MiB of memory and is limited to 200MiB
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
#Use the stress test tool stress when the container starts
    command: ["stress"]
# "--vm","1" spawns one memory-allocating worker process (--vm 2 would spawn two)
# "--vm-bytes","150M" makes each worker allocate 150 MiB
# "--vm-hang","1" makes each worker sleep for 1 second after allocating its memory, then free it and repeat
    args: ["--vm","1","--vm-bytes","150M","--vm-hang","1"]

The args section of the configuration file provides the arguments that the container starts with. The "--vm-bytes", "150M" arguments tell the container to attempt to allocate 150 MiB of memory.
Create a pod

kubectl apply -f memory-request-limit.yaml

Check whether the Pod is running normally

kubectl get pod memory-demo -n mem-example


View details about Pods

kubectl get pod memory-demo -n mem-example -o yaml

The output shows that the containers in this Pod have a memory request of 100 MiB and a memory limit of 200 MiB.

Run the kubectl top command to get the metrics data of the Pod:

If the following error occurs:

W0323 15:03:25.034693 2441 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
error: Metrics API not available

Resolution

Download the deployment file

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml

Change the image registry to a mirror

sed -i 's/registry.k8s.io\/metrics-server/registry.cn-hangzhou.aliyuncs.com\/google_containers/g' metrics-server-components.yaml

If the substitution does not work, edit the file manually, or refer to the article "The Most Complete Operation Tutorial in History – Using Alibaba Cloud's FREE Mirror Warehouse to Build a Foreign Docker Image".

Add the line "- --kubelet-insecure-tls" to the args of the metrics-server container in metrics-server-components.yaml. With this flag, metrics-server does not verify the CA of the serving certificates presented by the kubelets.
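For reference, the args list of the metrics-server container then looks roughly like the sketch below. The exact flags and values vary by metrics-server release; only the last line is the addition.

```yaml
# Excerpt from the metrics-server Deployment in metrics-server-components.yaml
# (flags vary by release)
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls   # added: skip CA verification of kubelet serving certs
```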

Deploy metrics-server

kubectl apply -f metrics-server-components.yaml

View metrics data for this Pod

kubectl top pod memory-demo --namespace=mem-example


The output shows that the Pod is using about 150 MiB of memory. This is larger than the Pod’s requested 100 MiB, but within the Pod’s limit of 200 MiB.

Memory usage exceeding the container limit

When the node has enough available memory, a container can use the memory it requests, but it is never allowed to use more memory than its limit. If a container allocates memory beyond its limit, it becomes a candidate for termination. If it continues to consume memory beyond its limit, the container is terminated. If the terminated container can be restarted, the kubelet restarts it, just as with any other kind of runtime failure.

Next, create a Pod that attempts to allocate more memory than its limit. Here is the configuration file for a Pod with one container that has a memory request of 50 MiB and a memory limit of 100 MiB:

Write memory-request-limit2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress
    resources:
#Request 50MiB
      requests:
        memory: "50Mi"
#Limited to 100MiB
      limits:
        memory: "100Mi"
# stress will attempt to allocate 250MiB, exceeding the 100MiB limit
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]

Create a Pod that exceeds the limit and view information about the container

#Create Pod
kubectl apply -f memory-request-limit2.yaml

# View container related information
kubectl get pod memory-demo-2 --namespace=mem-example
NAME            READY   STATUS      RESTARTS   AGE
memory-demo-2   0/1     OOMKilled   2          26s
At this point the container may be running or already killed; repeat the command until you see that it has been killed.

# View more detailed status information
kubectl get pod memory-demo-2 --output=yaml --namespace=mem-example



The output shows that the container was killed because it ran out of memory (OOMKilled).

This container can be restarted, so the kubelet restarts it. Run the following command several times and you will see the container repeatedly killed and restarted: killed, restarted, killed again, restarted again…

kubectl get pod memory-demo-2 --namespace=mem-example

NAME            READY   STATUS             RESTARTS   AGE
memory-demo-2   0/1     CrashLoopBackOff   6          6m19s

View the Pod's event history

kubectl describe pod memory-demo-2 --namespace=mem-example

Normal   Pulled    8m16s                    kubelet   Successfully pulled image "polinux/stress" in 678.098976ms
Warning  BackOff   3m58s (x25 over 8m58s)   kubelet   Back-off restarting failed container

Memory requests exceeding node capacity

Memory requests and limits are associated with containers, but it is also useful to think of Pods as having memory requests and limits. A Pod’s memory request is the sum of the memory requests of all containers in the Pod. Similarly, a Pod’s memory limit is the sum of the memory limits of all containers in the Pod.

Pods are scheduled based on requests. A Pod is scheduled to run on a Node only if the Node has enough memory to satisfy the Pod’s memory request.
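As a hypothetical illustration (the Pod name and images below are made up for this example, not part of the exercise), the scheduler treats the following two-container Pod as requesting 150 MiB in total (100 Mi + 50 Mi), with a total limit of 300 MiB:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-sum-demo          # illustrative name only
  namespace: mem-example
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"
# Effective Pod request: 100Mi + 50Mi = 150Mi
# Effective Pod limit:   200Mi + 100Mi = 300Mi
```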

Next, create a Pod whose memory request exceeds the capacity of any node in your cluster. Here is the configuration file for a Pod with one container requesting 1000 GiB of memory, which is likely more than any node in your cluster can offer.

Write memory-request-limit3.yaml

apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-3
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-3-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "1000Gi"
      limits:
        memory: "1000Gi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

Create the Pod

kubectl apply -f memory-request-limit3.yaml

View container status

kubectl get pod memory-demo-3 -n mem-example

The output shows that the Pod is in the Pending state: it has not been scheduled onto any node, and it will remain unscheduled indefinitely because no node can satisfy its request.

kubectl get pod memory-demo-3 --namespace=mem-example
NAME            READY   STATUS    RESTARTS   AGE
memory-demo-3   0/1     Pending   0          25s

View detailed information about Pods, including events

kubectl describe pod memory-demo-3 --namespace=mem-example

  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m47s  default-scheduler  0/1 nodes are available: 1 Insufficient memory.
  Warning  FailedScheduling  3m46s  default-scheduler  0/1 nodes are available: 1 Insufficient memory.

If no memory limit is specified

If you do not specify a memory limit for a container, one of the following applies:

The container has no upper bound on memory usage. It can use all of the memory available on the node where it runs, which may trigger the node's OOM Killer. In addition, a container with no resource limits is more likely to be killed when an OOM kill occurs.

If the namespace of the running container has a default memory limit, then the container will be automatically assigned the default limit. Cluster administrators can use LimitRange to specify default memory limits.
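A minimal sketch of such a LimitRange (the object name is illustrative): every container created in the namespace without an explicit memory limit then inherits the default values below.

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range          # illustrative name
  namespace: mem-example
spec:
  limits:
  - type: Container
    default:                     # default memory limit, applied when none is specified
      memory: 512Mi
    defaultRequest:              # default memory request, applied when none is specified
      memory: 256Mi
```

After applying this with kubectl apply -f, a container created in mem-example without its own resources section effectively gets a 256 MiB request and a 512 MiB limit.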

Purpose of memory requests and limits

By configuring memory requests and limits for the containers running in your cluster, you make efficient use of the memory available on the cluster's nodes. Keeping a Pod's memory request low gives it a better chance of being scheduled. Setting the memory limit higher than the memory request accomplishes two things:

Pods can burst into activity to make better use of available memory.
Pods are limited to a reasonable amount of memory available to them during bursts of activity.