
Friday, June 5, 2020

Docker Performance with Native Tools


                             

Containers are changing the world and how organizations deploy and use software. We can run almost any piece of software with a single docker run command, and orchestration platforms like Kubernetes make deploying and managing containerized applications easy.

Since a container looks like a black box from the outside, it is very important to understand how it is performing. Container monitoring tools are increasingly used for this kind of analysis, but we don't actually need one to grab the performance details. In this article, we will see how to analyze the performance of a Docker container using basic Docker tooling and native Linux facilities.
 

Container performance metrics can be grabbed in three ways:
Docker stats command
Making a REST API call to the Docker socket
Using the cgroup file system


Let's start grabbing details.
Docker stats
The docker stats command is the best command to start with. It displays a live data stream with CPU, memory usage, memory limit, block I/O, and network I/O for all running containers.
 

Run a container as below,
[root@ip-172-31-16-91]# docker run --rm -d progrium/stress --cpu 2 --io 1 --vm 2 --vm-bytes 128M --timeout 100s
[root@ip-172-31-16-91]# docker stats

As we can see above, docker stats gives a live stream of all performance-related details for all running containers. In the above output, we can see:
 

Container ID : the ID of each running container.

Name : the container name.

CPU% : displays how much of the host's CPU capacity the container is using. The value depends on the number of containers running and the amount of CPU shares allocated to each container. Check the Docker resource constraints documentation for more details on how CPU shares are assigned.

MEM Usage/Limit and MEM% : displays the amount of memory used by the container, the container memory limit, and the utilization percentage. If the container has no hard memory limit set, the limit shown is the full memory available on the host machine.

NET I/O : displays the total bytes received and transmitted over the network by the container. In the above example, the container received 960 bytes of data and sent 0 bytes.

Block I/O : displays the total bytes written to and read from the container file system.

PIDS : displays the number of kernel process IDs (processes and threads) running inside the container.
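By default docker stats keeps streaming until interrupted. For scripting or a quick snapshot, it also supports --no-stream (print once and exit) and a Go-template --format flag to select columns. A minimal sketch using the documented column names:

# One-shot snapshot with only the columns we care about
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}\t{{.PIDs}}"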


REST API
REST calls are another way to grab the performance details of running containers. The Docker daemon listens on the Unix socket file unix:///var/run/docker.sock, which allows local connections by the root user.

Using REST API calls, we can get even more detail about container performance. Similar to the docker stats command, the stats API also returns a large live stream of JSON data with container metrics. The native tool curl will be used to access the performance metrics of running containers by making a REST API call.
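Before pulling stats, it is worth confirming that curl can reach the daemon over the socket. A minimal sketch, run as root (or as a user in the docker group), using the Engine API's /version and /containers/json endpoints:

# Verify the daemon answers over the Unix socket
curl -s --unix-socket /var/run/docker.sock http://localhost/version

# List running containers via the API (roughly equivalent to "docker ps")
curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json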

Let's start a container as below,
[root@ip-172-31-16-91 ec2-user]# docker run --rm -d progrium/stress --cpu 2 --io 1 --vm 2 --vm-bytes 128M --timeout 50s
fd645b22779cc4c7535e660cf94c7c5b6bdf3ba4705aac692dac0a502120f2f6d

Grab the Container ID,
[root@ip-172-31-16-91 ec2-user]# docker ps -q
fd645b22779c

Now make a REST call to the Unix socket file /var/run/docker.sock as below, passing the container ID:
[root@ip-172-31-16-91 ec2-user]# curl -v --unix-socket /var/run/docker.sock http://localhost/containers/fd645b22779c/stats
* Trying /var/run/docker.sock...
* Connected to localhost (/var/run/docker.sock) port 80 (#0)
> GET /containers/fd645b22779c/stats HTTP/1.1
> Host: localhost
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Api-Version: 1.40
< Docker-Experimental: false
< Ostype: linux
< Server: Docker/19.03.6-ce (linux)
< Date: Fri, 05 Jun 2020 02:51:06 GMT
< Transfer-Encoding: chunked
<
{"read":"2020-06-05T02:51:07.463570402Z","preread":"0001-01-01T00:00:00Z","pids_stats":{"current":6},"blkio_stats":{"io_service_bytes_recursive":[],"io_serviced_recursive":[],"io_queue_recursive":[],"io_service_time_recursive":[],"io_wait_time_recursive":[],"io_merged_recursive":[],"io_time_recursive":[],"sectors_recursive":[]},"num_procs":0,"storage_stats":{},"cpu_stats":{"cpu_usage":{"total_usage":13035056292,"percpu_usage":[13035056292,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"usage_in_kernelmode":6170000000,

"usage_in_usermode":6880000000},"system_cpu_usage":75640860000000,"online_cpus":1,"throttling_data":{"periods":0,"throttled_periods":0,"throttled_time":0}},"precpu_stats":{"cpu_usage":{"total_usage":0,"usage_in_kernelmode":0,"usage_in_usermode":0},"throttling_data":{"periods":0,"throttled_periods":0,"throttled_time":0}},"memory_stats":{"usage":58851328,"max_usage":270299136,"stats":{"active_anon":57794560,"active_file":0,"cache":0,"dirty":0,"hierarchical_memory_limit":9223372036854771712,
"hierarchical_memsw_limit":9223372036854771712,"inactive_anon":0,"inactive_file":0,
"mapped_file":0,"pgfault":2310551,"pgmajfault":0,"pgpgin":2310081,"pgpgout":2295969,"rss":57802752,"rss_huge":0,
"total_active_anon":57794560,"total_active_file":0,"total_cache":0,"total_dirty":0,"total_inactive_anon":0,"total_inactive_file":0,"total_mapped_file":0,
"total_pgfault":2310551,"total_pgmajfault":0,"total_pgpgin":2310081,"total_pgpgout":2295969,"total_rss":57802752,"total_rss_huge":0,"total_unevictable":0,
"total_writeback":0,"unevictable":0,"writeback":0},"limit":1031114752},"name":"/distracted_noether","id":"fd645b22779cc4c7535e660cf94c7c5b6bdf3ba4705aac692dac0a502120f2f6",
"networks":{"eth0":{"rx_bytes":766,"rx_packets":9,"rx_errors":0,"rx_dropped":0,"tx_bytes":0,
"tx_packets":0,"tx_errors":0,"tx_dropped":0}}}
Use any JSON pretty printer to display the above content in a more human-readable format. Making a REST call to the socket file gives a large live stream of data about the running containers. In the above output, the JSON content under “blkio_stats” covers the I/O operations, “cpu_stats” covers the CPU metrics, “memory_stats” covers the memory details, and finally “networks” covers the network statistics.
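To turn these raw counters into the percentages that docker stats shows, compare the current sample with the previous one: CPU% is the delta of the container's total CPU usage divided by the delta of the system CPU usage, multiplied by the number of online CPUs and by 100, while MEM% is usage divided by limit. A minimal sketch, assuming the jq utility is installed and using an illustrative container ID; passing stream=false asks the API for a single snapshot instead of a stream:

# Compute CPU% and MEM% from a single stats snapshot (container ID is illustrative)
CID=fd645b22779c
curl -s --unix-socket /var/run/docker.sock \
     "http://localhost/containers/$CID/stats?stream=false" | jq '
  ((.cpu_stats.cpu_usage.total_usage - .precpu_stats.cpu_usage.total_usage)
    / (.cpu_stats.system_cpu_usage - .precpu_stats.system_cpu_usage)
    * .cpu_stats.online_cpus * 100)                   as $cpu_pct
  | (.memory_stats.usage / .memory_stats.limit * 100) as $mem_pct
  | {cpu_pct: $cpu_pct, mem_pct: $mem_pct}'
# Note: on some daemon versions the first sample has empty precpu_stats,
# so a real script should guard against a zero denominator.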

Using the cgroups file system
The final way is the simplest way of grabbing running container details. Docker uses the cgroup file system mounted at /sys/fs/cgroup, which contains many useful details about the containers.

Let's start a container as below,
[root@ip-172-31-16-91 ec2-user]# docker run --rm -d progrium/stress --cpu 2 --io 1 --vm 2 --vm-bytes 128M --timeout 50s
fd645b22779cc4c7535e660cf94c7c5b6bdf3ba4705aac692dac0a502120f2f6d

Grab the Container ID,
[root@ip-172-31-16-91 ec2-user]# docker ps -q
1ec81a621fde

Now move to the cgroup directory /sys/fs/cgroup/memory/docker. Here we will see a directory named with the container ID, and inside that directory we can see many files as below,
[root@ip-172-31-16-91 1ec81a621fde57e78da026e0720ccc092372ae1b8e2c6b6f24e8149364a9d9d9]# ll
total 0
cgroup.clone_children
cgroup.event_control
cgroup.procs
memory.failcnt
memory.numa_stat

If we check the memory.stat file, we can see a detailed breakdown of the container's memory usage:
[root@ip-172-31-16-91 3a6d5934963b0e1ccb1dd7f15b42784ca47f4c4c98300091ddf607052d90251a]# cat memory.stat
cache 0
rss 239439872
rss_huge 0
shmem 0
mapped_file 0
dirty 0
writeback 0
swap 0
pgpgin 2878698
pgpgout 2820241
pgfault 2879162
pgmajfault 0
inactive_anon 0
active_anon 239439872
inactive_file 0
active_file 0
unevictable 0
hierarchical_memory_limit 9223372036854771712
hierarchical_memsw_limit 9223372036854771712
total_cache 0
total_rss 239439872
total_rss_huge 0
total_shmem 0
total_mapped_file 0
total_dirty 0
total_writeback 0
total_swap 0
total_pgpgin 2878698
total_pgpgout 2820241
total_pgfault 2879162
total_pgmajfault 0
total_inactive_anon 0
total_active_anon 239439872
total_inactive_file 0
total_active_file 0
total_unevictable 0

Whenever a container starts, a directory is created under /sys/fs/cgroup for each resource controller. Under memory/docker/ we can see all details of that container's memory usage, and under cpuacct/docker/ we can see all details of its CPU usage. This is the simplest way to grab performance metrics for all running containers. Hope this helps you get started with grabbing container performance details.
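Putting this together, once you have the full container ID you can read the per-container counters directly from the cgroup v1 hierarchy shown above. A minimal sketch; the container name stress1 is hypothetical:

# Resolve the full 64-character container ID from a name or short ID
CID=$(docker inspect --format '{{.Id}}' stress1)

# Current and peak memory usage, in bytes
cat /sys/fs/cgroup/memory/docker/$CID/memory.usage_in_bytes
cat /sys/fs/cgroup/memory/docker/$CID/memory.max_usage_in_bytes

# Cumulative CPU time consumed by the container, in nanoseconds
cat /sys/fs/cgroup/cpuacct/docker/$CID/cpuacct.usage

# Block I/O bytes, broken down by device and read/write
cat /sys/fs/cgroup/blkio/docker/$CID/blkio.throttle.io_service_bytes

Note that hosts running cgroup v2 lay these files out differently (a unified hierarchy under /sys/fs/cgroup with docker-<id>.scope directories), so the paths above apply to the cgroup v1 layout used in this walkthrough.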
