Docker container and virtualization technology: Docker resource control and data management

Table of Contents

1. Theory

1. Resource Control

2. Docker data management

2. Experiment

1. Docker resource control

2. Docker data management

3. Problems

1. A failing docker container generates a large volume of logs and fills up the disk

2. What to do when the log is full

4. Summary


1. Theory

1. Resource Control

(1) CPU resource control

cgroups (Control Groups) is a very powerful Linux kernel facility. It can not only limit the resources of processes isolated by namespaces, but also assign weights to resources, account for usage, and control process start and stop. In short, cgroups implement resource quotas and resource accounting.
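For reference, on a cgroup v1 host using Docker's default cgroupfs driver, each container gets its own control group under /sys/fs/cgroup; the listing below is a quick way to see them (a side note, not part of the original steps; the layout differs on cgroup v2 hosts).

ls /sys/fs/cgroup/cpu/docker/
# Each subdirectory is named after a full container ID and contains the files
# (cpu.cfs_period_us, cpu.cfs_quota_us, ...) used later in this article.
# On cgroup v2 hosts the per-container path is /sys/fs/cgroup/system.slice/docker-<ID>.scope/ instead.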

(2) Four major functions of cgroups

● Resource limiting: cap the total amount of resources a task (group of processes) may use

● Priority allocation: control the effective scheduling priority of tasks through the number of allocated CPU time slices and the amount of disk I/O bandwidth

● Resource accounting: report the resource usage of the system, such as CPU time and memory consumption

● Task control: suspend and resume tasks

(3) Set the upper limit of CPU usage

Linux uses CFS (Completely Fair Scheduler) to schedule the CPU usage of processes. The default CFS scheduling period is 100 ms (100,000 microseconds).

We can set the scheduling period for a container's processes and the maximum CPU time the container may use within each period.

Use --cpu-period to set the scheduling period and --cpu-quota to set the CPU time the container may use in each period. The two options are usually used together.

The valid range of the CFS period is 1 ms to 1 s, so the corresponding value range of --cpu-period is 1000 to 1000000 (microseconds).

The container's CPU quota must be at least 1 ms, that is, the value of --cpu-quota must be >= 1000.

Start a container:

docker run -itd --name test1 centos:7 /bin/bash

cpu.cfs_period_us: the CPU allocation period in microseconds (hence the "us" in the file name); the default is 100000.

cpu.cfs_quota_us: the CPU time (in microseconds) the control group may use per period; the default is -1, meaning no limit. Setting it to 50000 allows 50000/100000 = 50% of one CPU.
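As a quick sanity check (not part of the original steps), docker inspect can read back the values these flags set; the container name test1 comes from the step above.

docker inspect --format '{{.HostConfig.CpuPeriod}} {{.HostConfig.CpuQuota}}' test1
# "0 0" means no explicit --cpu-period/--cpu-quota was given, so the kernel defaults apply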

① Conduct a CPU stress test

Write a script:

docker exec -it 3ed82355f811 /bin/bash
vim /cpu.sh
#!/bin/bash
# Busy loop: keeps one CPU core at 100% to exercise the CPU limit
i=0
while true
do
let i++
done

run:

chmod +x /cpu.sh
./cpu.sh
exit

② Set the upper limit of CPU usage time to 50%

# Option 1: create a new container with the quota set at run time
docker run -itd --name test2 --cpu-quota 50000 centos:7 /bin/bash
# Option 2: edit the cgroup of the existing container directly
cd /sys/fs/cgroup/cpu/docker/3ed82355f81151c4568aaa6e7bc60ba6984201c119125360924bf7dfd6eaa42b/
echo 50000 > cpu.cfs_quota_us
docker exec -it 3ed82355f811 /bin/bash
./cpu.sh
exit
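To confirm the cap from the host while cpu.sh is running, one option (an extra check, not shown in the original steps) is docker stats:

docker stats --no-stream test2    # or test1, if the cgroup file of the existing container was edited instead
# The CPU % column should hover around 50% while the busy loop is running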

(4) Set the CPU resource usage ratio (only takes effect when multiple containers compete for CPU)

Docker sets CPU shares with --cpu-shares. The default value is 1024, and the value is a relative weight, usually set in multiples of 1024.

example:

Create two containers, c1 and c2. Assuming they are the only containers competing for the CPU, set their weights so that c1 and c2 get 1/3 and 2/3 of the CPU resources respectively.

docker run -itd --name c1 --cpu-shares 512 centos:7
docker run -itd --name c2 --cpu-shares 1024 centos:7

Enter the containers separately and conduct pressure tests.

yum install -y epel-release
yum install -y stress
stress -c 4    # spawn 4 CPU-bound worker processes
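A rough way to observe the 1/3 vs 2/3 split is to watch docker stats from the host while both stress runs are active (the exact numbers will fluctuate):

docker stats c1 c2
# With all 4 cores fully loaded, the CPU % of c1 and c2 should settle near a 1:2 ratio,
# e.g. roughly 133% vs 267%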

(5) Set the container to bind the specified CPU

First allocate 4 CPU cores to the virtual machine

Create container

docker run -itd --name cc1 --cpuset-cpus 1,3 centos:7 /bin/bash    # bind the container to CPUs 1 and 3 (the 2nd and 4th cores, counting from 0)

Enter the container and perform a pressure test

yum install -y epel-release
yum install stress -y
stress -c 4
exit

Exit the container, run top on the host, and press 1 to view per-CPU usage
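Besides top, the binding can also be read back from the container metadata; this is just a convenience check, not part of the original steps:

docker inspect --format '{{.HostConfig.CpusetCpus}}' cc1
# Expected output: 1,3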

(6) Restrictions on disk IO quota control (blkio)

--device-read-bps: limit the read throughput (bytes per second) of a device; the unit can be kb, mb, or gb.

example:

docker run -itd --name test4 --device-read-bps /dev/sda:1M centos:7 /bin/bash

--device-write-bps: limit the write throughput (bytes per second) of a device; the unit can be kb, mb, or gb.

example:

docker run -itd --name test5 --device-write-bps /dev/sda:1M centos:7 /bin/bash

--device-read-iops: limit the read IOPS (operations per second) of a device

--device-write-iops: limit the write IOPS (operations per second) of a device

Create a container and limit the write speed

docker run -it --name cc3 --device-write-bps /dev/sda:1mb centos:7 /bin/bash

Verify write speed through dd

dd if=/dev/zero of=test.out bs=1M count=10 oflag=direct    # oflag=direct bypasses the page cache so the 1 MB/s write limit is actually enforced; writing 10 MB should take about 10 seconds

2. Docker data management

(1) Data volume

A data volume is a special directory used by a container; it appears inside the container, and a host directory can be mounted onto it. Changes to the data volume are visible immediately, and updated data never touches the image, so data can be exchanged between the host and the container. Using a data volume is similar to mounting a directory under Linux.

example:

Pull an image first

docker pull centos:7

The host directory /var/www is mounted to /data1 in the container.

Note: The path to the host’s local directory must be an absolute path. If the path does not exist, Docker will automatically create the corresponding path.

docker run -v /var/www:/data1 --name web1 -it centos:7 /bin/bash
# the -v option creates the data volume /data1 inside the container and bind-mounts the host directory onto it
 
ls
echo "this is cc1" > /data1/abc.txt
exit
 
#Return to the host machine to view
cat /var/www/abc.txt
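Optionally, the bind mount can also be confirmed from the host with docker inspect (an extra check, not in the original steps):

docker inspect --format '{{json .Mounts}}' web1
# Shows a "bind" mount with Source /var/www and Destination /data1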

(2) Data volume container

If you need to share some data between containers, the easiest way is to use a data volume container. The data volume container is an ordinary container that specifically provides data volumes for other containers to mount and use.

example:

Create a container as a data volume container

docker run --name web2 -v /data1 -v /data2 -it centos:7 /bin/bash
echo "this is web2" > /data1/abc.txt
echo "THIS IS WEB2" > /data2/ABC.txt

Use --volumes-from to mount the data volumes of the web2 container into a new container

docker run -it --volumes-from web2 --name web3 centos:7 /bin/bash
cat /data1/abc.txt
cat /data2/ABC.txt
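The two anonymous volumes created by -v /data1 -v /data2 live on the host and are what --volumes-from shares; they can be listed with docker volume ls (a side note, not required for the experiment):

docker volume ls
# Each anonymous volume appears with a long hash as its name;
# docker inspect web2 shows which hash backs /data1 and which backs /data2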

(3) Port mapping

When a container is started without publishing its ports, the service inside it cannot be reached from outside the container network. Port mapping exposes the container's service to external access: it maps a host port to the container's port, so external clients reach the service through the host port.

docker run -d --name test1 -P nginx
#Randomly map ports (starting from 32768)
docker run -d --name test2 -p 43000:80 nginx
#Specify mapped port
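A simple verification, assuming curl is available on the host: docker port prints the mapping, and curl should reach the nginx default page.

docker port test1                 # e.g. 80/tcp -> 0.0.0.0:32768 (randomly assigned)
docker port test2                 # 80/tcp -> 0.0.0.0:43000
curl -I http://localhost:43000    # should return "HTTP/1.1 200 OK" from nginx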

(4) Container interconnection (using centos image)

Container interconnection establishes a dedicated network communication tunnel between containers, addressed by container name. Put simply, a tunnel is set up between the source container and the receiving container, and the receiving container can see the information exposed by the source container.

example:

docker run -itd -P --name xx1 centos:7 /bin/bash

Create and run the receiving container named xx2, using the --link option to specify the linked container and establish the interconnection

docker run -itd -P --name xx2 --link xx1:xx2 centos:7 /bin/bash

Enter the xx2 container and ping xx1

docker exec -it xx2 bash
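Inside xx2, --link adds an entry to /etc/hosts pointing at xx1's IP address, so the ping by name should succeed. A quick check (commands run inside the xx2 shell opened above):

cat /etc/hosts     # contains a line with xx1's IP, the link alias, and the source container name
ping -c 3 xx1      # should get replies from the source container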

2. Experiment

1. Docker resource control

(1) Set the upper limit of CPU usage

Perform a CPU stress test

Write a script:

View CPU resources occupied by scripts

Set a 50% ratio to allocate the upper limit of CPU usage time

The CPU usage is now close to 50%, showing that the cgroup CPU limit is taking effect

(2) Set the CPU resource usage ratio (only takes effect when multiple containers compete for CPU)

Create two containers as c1 and c2. If there are only these two containers, set the weight of the containers so that the CPU resources of c1 and c2 account for 1/3 and 2/3.

Enter the containers separately and conduct pressure tests.

Container c1

Container c2

View container running status (dynamic updates)

The total CPU usage is 400% because the virtual machine has 4 cores; the CPU usage of c1 and c2 settles at roughly a 1:2 ratio (about 133% vs 267%)

(3) Set the container to bind the specified CPU

4 CPU cores have been allocated to the virtual machine

Create container

Only allow the cc1 container to use the 2nd and 4th CPUs (cpuset IDs 1 and 3)

Enter the container

Install dependency packages

Install the stress testing tool

Perform stress testing

Execute top command

Press 1 again to check per-CPU usage; only the 2nd and 4th CPUs are busy

(4) Restrictions on disk IO quota control (blkio)

Create a container and limit the write speed

dd should report a write rate of roughly 1 MB/s, showing that the blkio write limit is in effect

Clean up the disk space occupied by docker:

docker system prune is used to reclaim disk space; it removes stopped containers, unused networks, and unused images (the exact command is shown below).
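The command behind this step is the one repeated in the Summary; shown here for completeness:

docker system prune -a
# Removes stopped containers, unused networks, all unused images and the build cache;
# add --volumes to also remove unused volumes (recent Docker versions do not prune volumes by default)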

2. Docker data management

(1) Data volume

Pull an image first

The host directory /var/www is mounted to /data1 in the container.

Note: The path to the host’s local directory must be an absolute path. If the path does not exist, Docker will automatically create the corresponding path.

The file content is the same on the host and in the container

(2) Data volume container

Create a container as a data volume container

Use --volumes-from to mount the data volumes of the web2 container into the new container

The files created in the previous container are visible, indicating that the mount succeeded

(3) Port mapping

When a container is started without publishing its ports, the service inside it cannot be reached from outside the container network. Port mapping maps a host port to the container's port, so external clients reach the service through the host port.

(4) Container interconnection (using centos image)

Container interconnection establishes a dedicated network communication tunnel between containers, addressed by container name: a tunnel is set up between the source container and the receiving container, and the receiving container can see the information exposed by the source container.

Create and run the source container named xx1

Create and run the receiving container named xx2, using the --link option to specify the linked container and establish the interconnection.

Enter the xx2 container and ping xx1

3. Problems

1. A failing docker container generates a large volume of logs and fills up the disk

(1) Solution: clear the logs

#!/bin/bash
# Truncate the JSON log files of every container on this host
logs=$(find /var/lib/docker/containers/ -name "*-json.log*")
for log in $logs
do
cat /dev/null > $log
done
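One way to use the script, with an illustrative path (the file name is an assumption, not from the original):

vim /opt/clean_container_logs.sh      # paste the script above
chmod +x /opt/clean_container_logs.sh
/opt/clean_container_logs.sh          # run as root, since the logs live under /var/lib/docker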

2. What to do when the log is full

### Set the number of docker log files and the size of each log file
vim /etc/docker/daemon.json
{
"registry-mirrors": ["http://f613ce8f.m.daocloud.io"],
"log-driver": "json-file",
"log-opts": { "max-size": "500m", "max-file": "3" }
}

log-driver selects the json-file log format; log-opts caps each container at 3 log files of at most 500 MB each. (JSON does not allow inline comments, so the explanations cannot stay inside the file.)

After modifying the file, reload systemd and restart docker: systemctl daemon-reload && systemctl restart docker
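To confirm the settings after the restart (a quick check; the container name below is a placeholder), the logging driver and per-container log options can be read back:

docker info --format '{{.LoggingDriver}}'                                  # should print json-file
docker inspect --format '{{json .HostConfig.LogConfig}}' <container-name>  # per-container log options
# Note: the new max-size/max-file values only apply to containers created after the restart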

4. Summary

Clean up the disk space occupied by docker:

docker system prune -a    # reclaims disk space: removes stopped containers, unused networks, all unused images and the build cache; add --volumes to also remove unused volumes
