
Friday, January 7, 2022

Docker Networking - Macvlan

While working with Docker containers, we have seen how to use the network types provided by Docker, from none to bridge. But have we ever tried to run a container connected directly to the underlying physical network?


Why would we need to connect an application or container directly to the underlying network? Some legacy applications have to sit on the physical network for faster routing of requests. If we want to move such an application into a container, we need a way to connect the container to the physical network rather than to the virtual network provided by Docker's docker0 bridge.


Docker provides a macvlan network type that satisfies this need. When we use the macvlan network driver, Docker assigns a MAC address to each container's virtual network interface, making it appear to be a physical interface connected directly to the physical network. The advantage is low latency: packets are routed straight from the Docker host's network interface to the containers. In this article we will see how to create a macvlan network and attach a container to it.


First, check whether the macvlan kernel module is available on the host machine. An empty result simply means the module has not been loaded yet; Linux loads it on demand when the first macvlan interface is created.

[root@ip-172-31-35-163 foo]# lsmod | grep macv


Next, list the existing Docker networks:

[root@ip-172-31-35-163 foo]# docker network ls

NETWORK ID     NAME      DRIVER    SCOPE

48909e557f1d   bridge    bridge    local

fdf372ca609e   host      host      local

09e36b50cb20   none      null      local


There is no macvlan network in the list yet. Check the network interfaces available on the host using the ifconfig command:

[root@ip-172-31-35-163 foo]# ifconfig

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500

        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

        inet6 fe80::42:ccff:fe5d:434a  prefixlen 64  scopeid 0x20<link>

        ether 02:42:cc:5d:43:4a  txqueuelen 0  (Ethernet)

        RX packets 3727  bytes 201737 (197.0 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 4947  bytes 39337043 (37.5 MiB)

        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0


eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001

        inet 172.31.35.163  netmask 255.255.240.0  broadcast 172.31.47.255

        inet6 fe80::894:27ff:fe0e:7f73  prefixlen 64  scopeid 0x20<link>

        ether 0a:94:27:0e:7f:73  txqueuelen 1000  (Ethernet)

        RX packets 442113  bytes 446230278 (425.5 MiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 191497  bytes 25180642 (24.0 MiB)

        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0


Create a macvlan network using the docker network create command, as below. The -o parent option names the physical interface (eth0) the macvlan network sits on:

[root@ip-172-31-35-163 foo]# docker network create -d macvlan --subnet=172.31.35.0/24 --gateway=172.31.35.1 -o parent=eth0 mac_net


7baebe9803f4a8ccd87f1a5dfeb783305e5247eaa811865a245bb7f953eb93fc
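One detail worth noting: --subnet conventionally takes the network address (for example 172.31.35.0/24) rather than a host address. As a sanity check before running docker network create, the network address for any CIDR value can be derived with a small bash sketch (a hypothetical helper, not part of Docker):

```shell
#!/bin/bash
# Derive the network address for a CIDR value, e.g. 172.31.35.163/24 -> 172.31.35.0/24.
# Hypothetical sanity-check helper for the --subnet argument; plain bash arithmetic.
network_address() {
  local ip=${1%/*} prefix=${1#*/}
  local a b c d
  IFS=. read -r a b c d <<< "$ip"
  # Pack the four octets into one 32-bit integer, then mask off the host bits.
  local addr=$(( (a << 24) | (b << 16) | (c << 8) | d ))
  local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  local net=$(( addr & mask ))
  printf '%d.%d.%d.%d/%d\n' \
    $(( (net >> 24) & 255 )) $(( (net >> 16) & 255 )) \
    $(( (net >> 8) & 255 ))  $(( net & 255 )) "$prefix"
}

network_address 172.31.35.163/24   # -> 172.31.35.0/24
```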


List the Docker networks again with docker network ls; the new mac_net network should now appear with the macvlan driver.


One important thing about this network mode: the containers can reach other systems on the local network without any issues, but a container cannot reach the host, and vice versa. This is a limitation of the macvlan interface; without special support from the network switch, the host cannot send packets to its own macvlan interfaces. The workaround is to create another macvlan interface on the host and use that to communicate with the containers.


Run the commands below to apply that workaround: they create an extra macvlan interface (mac0) on the host, essentially a second interface bridged on top of eth0, through which the host can reach the containers.


[root@ip-172-31-35-163 foo]# ip link add mac0 link eth0 type macvlan mode bridge


[root@ip-172-31-35-163 foo]# ip addr add 172.31.35.163/24 dev mac0


[root@ip-172-31-35-163 foo]# ifconfig mac0 up


Now run ifconfig again to see the new macvlan interface, as below:

[root@ip-172-31-35-163 foo]# ifconfig

mac0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001

        inet 172.31.35.163  netmask 255.255.255.0  broadcast 0.0.0.0

        inet6 fe80::60a2:39ff:fe53:d37e  prefixlen 64  scopeid 0x20<link>

        ether 62:a2:39:53:d3:7e  txqueuelen 1000  (Ethernet)

        RX packets 0  bytes 0 (0.0 B)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 5  bytes 430 (430.0 B)

        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0


Note: To delete the new interface later, run: ip link delete mac0

Now let's run a container attached to the macvlan network created above:

[root@ip-172-31-35-163 foo]# docker run --net=mac_net -d --ip=172.31.35.49 -p 81:80 nginx


Unable to find image 'nginx:latest' locally

latest: Pulling from library/nginx

eff15d958d66: Pull complete

1e5351450a59: Pull complete

2df63e6ce2be: Pull complete

9171c7ae368c: Pull complete

020f975acd28: Pull complete

266f639b35ad: Pull complete

Digest: sha256:097c3a0913d7e3a5b01b6c685a60c03632fc7a2b50bc8e35bcaa3691d788226

Status: Downloaded newer image for nginx:latest

c75b50c4370c013be1ffa8ebc97175dee7c06fa0e49d499ad3e306e555e76b7c


We have created a container attached to the macvlan network created above (mac_net) with the IP address 172.31.35.49. Note that the -p 81:80 flag has no effect here: port publishing applies to bridge networks, and with macvlan the container is reached directly on its own IP. Once the container is up and running, we can run curl against that address to see the nginx page, as below:

[root@ip-172-31-35-163 foo]# curl 172.31.35.49

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

<style>

html { color-scheme: light dark; }

body { width: 35em; margin: 0 auto;

font-family: Tahoma, Verdana, Arial, sans-serif; }

</style>

</head>

<body>

<h1>Welcome to nginx!</h1>

<p>If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.</p>


<p>For online documentation and support please refer to

<a href="http://nginx.org/">nginx.org</a>.<br/>

Commercial support is available at

<a href="http://nginx.com/">nginx.com</a>.</p>


<p><em>Thank you for using nginx.</em></p>

</body>

</html>
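When pinning a static --ip as we did above, the address must fall inside the subnet given to docker network create, and it should not collide with an address already in use on the LAN, since every macvlan container appears as a separate host on the physical network. A minimal bash sketch of such a pre-flight check (a hypothetical helper, not a Docker feature):

```shell
#!/bin/bash
# Check whether a candidate --ip falls inside a macvlan network's subnet.
# Hypothetical pre-flight helper; Docker itself also rejects out-of-range IPs.
ip_to_int() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

ip_in_subnet() {
  local ip=$1 cidr=$2
  local net=${cidr%/*} prefix=${cidr#*/}
  local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  # The candidate IP is in the subnet if both share the same network bits.
  if [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]; then
    echo yes
  else
    echo no
  fi
}

ip_in_subnet 172.31.35.49 172.31.35.0/24   # -> yes
ip_in_subnet 172.31.99.49 172.31.35.0/24   # -> no
```

Running such a check before docker run avoids the silent address clashes that macvlan makes possible, because Docker's IPAM cannot see addresses held by other hosts on the physical network.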


Hope this helps in understanding the macvlan driver in Docker. More to come.
