[root@ip-172-31-32-147 centos]# docker run --rm --memory 50mb alpine free -m
total used free shared buff/cache available
Mem: 989 565 141 0 281 296
Swap: 0 0 0
Now if we run “free -m” on the host machine itself, we see essentially the same totals as above,
[root@ip-172-31-32-147 centos]# free -m
total used free shared buff/cache available
Mem: 989 499 175 19 314 329
Swap: 0 0 0
If we observe the output, even though the container memory is limited to 50mb, free still reports the host's full memory. This is a very important thing to understand: tools like free inside a container report the host's memory, not the container's limit. On Docker for Mac a container sees the VM's default of 2gb, while on Linux it sees all the available memory of the host. If we want to see the amount of memory actually allocated to the container, we need to check the allowed memory from the /sys/fs/cgroup/memory/memory.limit_in_bytes file inside the container, as below,
[root@ip-172-31-32-147 centos]# docker run --rm --memory 50mb alpine cat /sys/fs/cgroup/memory/memory.limit_in_bytes
52428800
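As a quick cross-check, the same limit can also be read through the Docker API with docker inspect, which reports it in bytes. This is just a sketch; the container name "memtest" is only an illustration,
docker run -d --name memtest --memory 50mb alpine sleep 300
docker inspect --format '{{.HostConfig.Memory}}' memtest
docker rm -f memtest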
The important thing we need to understand is that, by default, a container has no resource constraints and can use as much of a given resource as the host kernel scheduler allows. It is the responsibility of the developer to control how much memory or CPU a container can use, which is done with the runtime configuration flags of the docker run command. In this article, we will see how resources like memory and CPU are managed. Docker lets us constrain three resources of the host,
RAM
CPU
I/O Bandwidth
Docker Memory
Docker provides several flags for setting memory and swap limits on a container with the run command,
--memory : Hard limit of memory
--memory-reservation : soft limit of memory
--memory-swap : Swap setting
--oom-kill-disable : disable the OOM killer for the container
Soft and Hard limits - Docker memory can be set as a soft limit, a hard limit, or both, using the “--memory-reservation” and “--memory” flags of the run command. The soft limit is only a reservation or warning threshold: if we set the soft limit to 230mb and the hard limit to 250mb, the process running inside the container can exceed 230mb, but it can never exceed 250mb.
An example setting both a hard and a soft limit looks like this,
docker run -d -p 8081:80 --memory-reservation="230m" --memory="250m" nginx
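To confirm that both limits were applied (assuming you note the container ID printed by the command above), docker inspect reports them in bytes,
docker inspect --format 'hard={{.HostConfig.Memory}} soft={{.HostConfig.MemoryReservation}}' <container-id>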
Now let's verify the hard limit by running a stress test against a 50mb limit,
[root@ip-172-31-11-133]# docker run --memory 50m --rm -it progrium/stress --vm 1 --vm-bytes 62914560 --timeout 2s
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] setting timeout to 55s
stress: dbug: [1] --> hogvm worker 1 [6] forked
stress: dbug: [6] allocating 62914560 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 62914560 bytes
stress: dbug: [6] touching bytes in strides of 4096 bytes
stress: dbug: [1] (416) <-- worker 6 signalled normally
stress: info: [1] successfully run completed in 2s
We can see that though the container is limited to 50mb, the stress command was still able to allocate 60mb (62914560 bytes) of memory. This should not be possible: once the container exhausts its 50mb it should hit an OOM error. But it did not happen, and this is where swap comes into the picture.
If we run a container with a memory limit but no swap setting, Docker by default lets the container use an amount of swap equal to the memory limit: with a 50mb hard limit the container also gets 50mb of swap, i.e. 100mb in total. Swap is controlled with the “--memory-swap” flag of the run command, and it specifies the total of memory plus swap. When left unset it defaults to double the memory limit, so if we set the memory to 10mb, the memory-plus-swap total is automatically 20mb.
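On hosts running cgroup v1 with swap accounting enabled (an assumption; the file is absent otherwise), the combined memory-plus-swap limit can be read the same way as the memory limit earlier,
docker run --rm --memory 50mb alpine cat /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes
With no --memory-swap given, this should report roughly double the memory limit, i.e. 104857600 bytes.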
Now let's run the same stress test again, adding the swap flag as below,
[root@ip-172-31-11-133]# docker run --memory 50m --memory-swap 50m --rm -it progrium/stress --vm 1 --vm-bytes 62914560 --timeout 2s
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] setting timeout to 2s
stress: dbug: [1] --> hogvm worker 1 [6] forked
stress: dbug: [6] allocating 62914560 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: FAIL: [1] (416) <-- worker 6 got signal 9
stress: WARN: [1] (418) now reaping child worker processes
stress: FAIL: [1] (422) kill error: No such process
stress: FAIL: [1] (452) failed run completed in 0s
This time the worker is killed. One might expect the container to keep running since a swap value was passed, but --memory-swap is the total of memory plus swap, so setting it equal to --memory leaves no swap at all: the 60mb allocation exceeds the 50mb hard limit and the process gets killed. There are a few rules about how Docker allocates memory and swap to a container,
Note : If memory and memory-swap are set to the same value, the hard limit is the memory value and swap is never used. In the run above both were set to 50mb, so the container had a 50mb hard limit and no swap at all.
Note : If memory-swap is set higher than memory, the container can use swap equal to the difference between the two values.
For example,
memory=20mb & memory-swap=20mb : swap will never be used, since the total equals the hard limit of 20mb.
memory=20mb & memory-swap=30mb : the hard limit for memory is 20mb and the container can use up to 10mb of swap.
If --memory-swap is explicitly set to -1, the container is allowed to use unlimited swap, up to the amount available on the host system.
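To actually see swap being used, we can rerun the stress test with --memory-swap larger than --memory. This is a sketch that assumes the host has swap configured and swap accounting enabled; with 50mb of memory plus 50mb of swap available, the 60mb allocation should now complete instead of being killed,
docker run --memory 50m --memory-swap 100m --rm -it progrium/stress --vm 1 --vm-bytes 62914560 --timeout 2s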
OOM Errors - By default, when an out-of-memory (OOM) condition occurs, the kernel kills processes inside the container, as we saw above. We can use the “--oom-kill-disable” flag to change this behavior.
If we pass “--oom-kill-disable” and the hard limit is reached, the process inside the container is not killed; instead it blocks, which can hang the container and even the host. Be careful using --oom-kill-disable, as it can hang containers and also our host's shell.
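For reference, a hedged example of passing the flag (always pair it with --memory; Docker warns if --oom-kill-disable is used without a memory limit),
docker run -d --memory 100m --oom-kill-disable nginx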
CPU Shares
CPU is another important resource that needs to be managed for containers. By default each container has unlimited access to the host machine's CPU cycles, so we need to set constraints to limit a given container's access to them. CFS (the Completely Fair Scheduler), the default scheduler in Linux, is what enforces the CPU settings we assign to containers. The flags available for setting CPU on the docker run command are,
--cpus : how much of the available CPU resources a container can use. If the host machine has 2 CPUs and we set --cpus="1.5", the container can use at most one and a half of the CPUs available on the host.
--cpu-period and --cpu-quota : these two flags are used together and define the CFS scheduler period and the CPU time allowed within it. On Docker 1.13 and later, --cpus is the preferred shorthand (see the example after this list).
--cpuset-cpus : limits the CPU cores that a container can use. The first CPU is numbered 0, so a value of 0-3 means the container can use the first four cores, and 0,1 means it can use cores 0 and 1.
--cpu-shares : a relative weight for the container's access to the CPU. The default value is 1024; a greater value gives the container higher priority and a smaller value gives it lower priority.
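As mentioned for --cpu-period and --cpu-quota above, the quota is the CPU time (in microseconds) the container may use within each period. For example, a 50000 microsecond quota out of a 100000 microsecond period is effectively the same as --cpus=0.5; both of the following runs get half a CPU,
docker run -d --cpu-period=100000 --cpu-quota=50000 nginx
docker run -d --cpus=0.5 nginx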
CPU limits with shares are relative: the shares are weights that decide how much processing time one process gets compared to another. If the CPU is idle, a process can use all the available CPU time; if a second process needs the CPU, the available time is shared according to the weights.
Below is an example of starting two containers with different shares on the same core. The --cpu-shares values used here are 768 and 256 (relative weights, with 1024 being the default).
If one container defines a share of 768 while another defines 256, the first container gets 75% of the available CPU and the other 25%. These numbers follow from the weighting approach to CPU sharing rather than any fixed capacity.
docker run -d --name c768 --cpuset-cpus 0 --cpu-shares 768 benhall/stress
docker run -d --name c256 --cpuset-cpus 0 --cpu-shares 256 benhall/stress
sleep 5
docker stats --no-stream
docker rm -f c768 c256
It's important to note that a process can use 100% of the CPU, no matter what weight is defined, if no other processes are competing for it.
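Shares can also be changed on a running container with docker update, which is handy for experimenting with the demo above while the two containers are still running (a sketch using the container names from that example),
docker update --cpu-shares 512 c256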
Other runtime flags that constrain container resources include,
--cpu-shares
--cpuset-cpus
--memory-reservation
--kernel-memory
--blkio-weight (block IO)
--device-read-iops
--device-write-iops
Disk I/O
Disk read/write is another resource that can be limited for a container. By default a running container has no restrictions on how many disk reads or writes it can perform, so we need to set constraints to limit a given container's access to the host machine's disk.
Docker provides the following parameters that we can pass to the run command to control disk read/write rates and disk IOPS (an IOPS example follows the list),
--blkio-weight (block IO)
--device-read-iops
--device-write-iops
--device-write-bps
--device-read-bps
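For example, read and write IOPS can be capped per device. This is a sketch; the device path /dev/xvda matches the lsblk output shown below and should be adjusted for your host,
docker run -it --device-read-iops /dev/xvda:100 --device-write-iops /dev/xvda:100 centos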
Let's run a container with a disk write limitation set. First find the disks available on your machine using lsblk,
[root@ip-172-31-9-137 centos]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 30G 0 part /
Now run a container as below,
[root@ip-172-31-9-137 centos]# docker run -it --device-write-bps /dev/xvda:1mb centos
[root@c5a5a6651ca2 /]#
In the above run command we specified the --device-write-bps option to limit the write rate to 1mb per second on the /dev/xvda device. Now run a dd command inside the container to create a file, as below,
dd if=/dev/zero of=afile bs=1M count=100
[root@ip-172-31-9-137 centos]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f165ae292cfe centos "/bin/bash" 9 minutes ago Up 9 minutes charming_davinci
[root@ip-172-31-9-137 centos]# docker exec f165ae292cfe dd if=/afile of=/dev/null
20480+0 records in
20480+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0429057 s, 244 MB/s
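The read above hit the page cache and our limit applies only to writes, so the 244 MB/s figure is expected. To actually observe the 1mb per second throttle, we can write with direct I/O (bypassing the page cache) inside the same container; on cgroup v1 this 10mb transfer should take roughly ten seconds,
docker exec f165ae292cfe dd if=/dev/zero of=/afile2 bs=1M count=10 oflag=direct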
Limiting CPU
CPU is a very important resource alongside memory and disk. Allowing one container to consume too much CPU, or starving another of it, changes the performance behavior of the applications running in them. Let's see how we can manage the CPU our Docker containers use.
Limit cores : Docker provides a way to limit how much of the host's CPU capacity is available to a container using the --cpus flag.
Lock container to specific cores : by default a container may run on any of the host's available cores, but with --cpuset-cpus we can pin it to specific ones (see the example further below).
Limit CPU time : with --cpu-period and --cpu-quota we can limit how much CPU time a container gets within each scheduling period.
Shares and weights : rather than limiting CPUs or cores, we can assign shares to containers, which gives more critical containers priority over the CPU when needed.
For example, if our host has 2 CPUs and we want to give a container access to one of them, we can run the container with --cpus="1.0",
docker run -it --cpus="1.0" centos /bin/bash
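Similarly, to lock a container to a specific core as described above (a sketch; core numbering starts at 0),
docker run -it --cpuset-cpus="0" centos /bin/bash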
As already discussed, rather than giving a whole CPU to the container we can assign shares to increase or reduce its weight. Using the --cpu-shares flag we can assign a value greater or less than the default of 1024 to increase or decrease the container's weight, giving it access to a greater or lesser proportion of the host machine's CPU cycles.
docker run -d --cpu-shares=1024 centos
Similar to memory reservation, CPU shares only come into play when computing power is scarce and needs to be divided between competing processes. Hope this helps you in understanding the basic concepts of Docker resource utilization and management.