Exploring Docker Networking – Host, None, and MACVLAN


As with our previous post on bridge, the intent here is to explore the remaining standard (mostly non-swarm) network configurations.  Those configurations are host, none, and the newer MACVLAN.

Unlike bridge, these are non-default configurations, which means you must manually specify the network somewhere.  In our case we are just going to explicitly specify the network when we run the container.  We will also explore the same configuration elements that we looked at earlier.

My host is configured identically to before: a very basic standard config, some extra inspection tools (publicly available via yum), and a symlink, created with the following command, that lets us easily see the network namespaces Docker establishes:

$ ln -s /var/run/docker/netns/ /var/run/netns

Again, I wouldn’t put this link into place in production, as it likely won’t help anything there, but it makes it easy to examine some internal items for the purposes of this blog post.
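
For example, with the symlink in place the standard iproute2 tooling can list whatever namespaces Docker has created.  The output below is only illustrative; the exact names depend on which containers are running (the hex-named entry stands in for a per-container sandbox):

$ ip netns list
default
f2a1b3c4d5e6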

My host has the standard networks available to it that it did before:

[root@dockernet2 ~]# docker network ls
NETWORK ID   NAME   DRIVER SCOPE
b31f10c1052e bridge bridge local
f2f1cf53c9f2 host   host   local
b0cf5ba8b44d none   null   local

Host

The first configuration we will explore is host.  Host networking is pretty straightforward in that your container literally shares the network configuration of the server it runs on.  Essentially there is no network namespace configured to segment your container out.  If this is confusing, check out the examples and you will see it is pretty straightforward.

Inspecting the host network gives us:

[root@dockernet2 ~]# docker network inspect host
[
 {
 "Name": "host",
 "Id": "f2f1cf53c9f2c27a835fe3ae0366ba3fff800699cf9838b74256fd9872dfec8a",
 "Created": "2017-08-01T16:20:18.753047939-04:00",
 "Scope": "local",
 "Driver": "host",
 "EnableIPv6": false,
 "IPAM": {
 "Driver": "default",
 "Options": null,
 "Config": []
 },
 "Internal": false,
 "Attachable": false,
 "Ingress": false,
 "ConfigFrom": {
 "Network": ""
 },
 "ConfigOnly": false,
 "Containers": {},
 "Options": {},
 "Labels": {}
 }
]

The main difference from bridge is that there is no address information here: no gateway and no subnet under the IPAM settings.  This is because the container will not be running on its own internal network, so there is nothing for Docker to manage.
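
You can see that difference directly by pulling just the IPAM config out of each network with a Go template.  This is only a quick sketch; the subnet and gateway on your host may differ:

[root@dockernet2 ~]# docker network inspect --format '{{json .IPAM.Config}}' bridge
[{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]
[root@dockernet2 ~]# docker network inspect --format '{{json .IPAM.Config}}' host
[]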

Let’s spin up a CentOS container using the host driver and then check our networking out.

[root@dockernet2 ~]# docker run -dit --network=host centos
b4f3467df1e2253be7a279345566722243ba437fb01d5b4e929a6e686bc44648
[root@dockernet2 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 00:50:56:82:e0:b4 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.206/24 brd 10.0.0.255 scope global ens192
 valid_lft forever preferred_lft forever
 inet6 fe80::250:56ff:fe82:e0b4/64 scope link
 valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
 link/ether 02:42:c1:57:d2:6d brd ff:ff:ff:ff:ff:ff
 inet 172.17.0.1/16 scope global docker0
 valid_lft forever preferred_lft forever

Notice that we have created the container but the bridge docker0 is down.  This is because nothing got attached to it.

[root@dockernet2 ~]# brctl show
bridge name bridge id         STP enabled    interfaces
docker0     8000.0242c157d26d no

See, no interfaces in the last column? Nothing is attached to the bridge, which we expect.

If I attach to the container, install iproute, and run ip addr show, we’ll see something interesting.  But first, check out the attach:

[root@dockernet2 ~]# docker container ls
CONTAINER ID IMAGE   COMMAND    CREATED       STATUS       PORTS NAMES
b4f3467df1e2 centos "/bin/bash" 3 minutes ago Up 3 minutes       vigorous_feynman
[root@dockernet2 ~]# docker attach vigorous_feynman
[root@dockernet2 /]#

Huh?  If you’ve attached to a container before this might look weird, because the prompt didn’t change, or at least didn’t change in the same way.  Notice the difference: before we were in root’s home directory (~) and now we are at the root of a file system (/).  Usually we get a “root@982klkjlsjd”-style prompt, right?  This is because, as I explained, the host driver shares the complete network stack of the host.  There is no difference in interfaces, IP addresses, or even the hostname, which is what shows in the prompt.  Here are two containers which should show the difference:

#container running on bridge driver
[root@99b390685537 /]# hostname
99b390685537

#container running on host driver
[root@dockernet2 /]# hostname
dockernet2

Now let’s look at the interfaces inside the container and you’ll see something familiar.

[root@dockernet2 /]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 00:50:56:82:e0:b4 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.206/24 brd 10.0.0.255 scope global ens192
 valid_lft forever preferred_lft forever
 inet6 fe80::250:56ff:fe82:e0b4/64 scope link
 valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
 link/ether 02:42:c1:57:d2:6d brd ff:ff:ff:ff:ff:ff
 inet 172.17.0.1/16 scope global docker0
 valid_lft forever preferred_lft forever

Yep, if you compare this to the output from the host, it looks exactly the same.  Let’s inspect the container and look at the relevant part of its network settings:

 "NetworkSettings": {
 "Bridge": "",
 "SandboxID": "78859d49ce6fc8b11dc45067720dfa5bf4ca2d9cd0e2da4330fbffeee85c9ec5",
 "HairpinMode": false,
 "LinkLocalIPv6Address": "",
 "LinkLocalIPv6PrefixLen": 0,
 "Ports": {},
 "SandboxKey": "/var/run/docker/netns/default",

This uses a ‘default’ network namespace.  If we exec against it:

[root@dockernet2 ~]# ip netns exec default ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 00:50:56:82:e0:b4 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.206/24 brd 10.0.0.255 scope global ens192
 valid_lft forever preferred_lft forever
 inet6 fe80::250:56ff:fe82:e0b4/64 scope link
 valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
 link/ether 02:42:c1:57:d2:6d brd ff:ff:ff:ff:ff:ff
 inet 172.17.0.1/16 scope global docker0
 valid_lft forever preferred_lft forever

Again we see the same output as on the host.  In fact, if you continue to spin up containers on the host network, you’ll see they are all part of the default network namespace.
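
If you want to check that for yourself, start a second host-mode container and pull its SandboxKey out of docker inspect; it points at the same default namespace.  The container name here is just an example:

[root@dockernet2 ~]# docker run -dit --network=host --name hosttest2 centos
[root@dockernet2 ~]# docker inspect --format '{{.NetworkSettings.SandboxKey}}' hosttest2
/var/run/docker/netns/default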

One more time, let’s look at a poorly drawn diagram:

[Diagram: host networking]

Hopefully this makes it a little clearer.  Every new container using the host driver will share the exact same network config as the server it is running on, instead of having its own configuration in a separate network namespace.

Now, what are the meaningful differences here when comparing to bridge?

Well, for sure it is simpler.  There are fewer parts and pieces in general.  My terrible drawing here is smaller and has fewer objects than my terrible drawing from the bridge post.  However, this is definitely a case where simpler is not always better.

In host mode, every single interface (including the docker0 bridge used for bridge mode) is exposed to every container, whether the container needs it to operate or not.  From a security standpoint, that will likely land somewhere between “who cares” and “no freaking way.”

Also, in host mode you are leveraging the server’s interfaces directly, so there will likely be some performance improvement for network traffic (i.e., closer to bare metal).  Whether this is meaningful is questionable, but there are probably corner cases where you are looking for speed at all costs.

We haven’t addressed NAT yet (again, too many topics, not enough writing time), but bridging leverages NAT for container-to-outside-world traffic.  Because the host driver leverages the server’s own network, there is no NAT required beyond any NAT the server would normally be exposed to.  To be clear about bridge NAT: it applies when a container communicates with something outside the host it is running on, not when it talks to another container on the same host.
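
If you’re curious, you can see the bridge-side NAT that host mode avoids by looking at the nat table on the Docker host.  On a default install there is typically a MASQUERADE rule for the bridge subnet, roughly like this:

[root@dockernet2 ~]# iptables -t nat -S POSTROUTING
-P POSTROUTING ACCEPT
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

Host-mode containers never match that rule because their traffic leaves directly on ens192 with the host’s own address.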

Finally, and likely most important, is port allocation.  Take web servers as an example.  We all know we can’t run multiple web servers on the same host on port 80.  Just the same, we can’t run multiple web server containers in host mode on port 80 either, because they all use the host’s networking stack; we end up with the same port conflict either way.  With bridge mode we can, so long as we aren’t exposing the same host port from multiple containers.  This might seem confusing if you are new to Docker, and I promise to write more on the subject, but in the meantime let’s condense it down to this general premise: because containers let you achieve massive consolidation by running many containers on one OS, host network mode will probably cause you far more pain than any performance gain is worth.
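
As a concrete sketch (using the public nginx image purely as an illustration), two host-mode web containers fight over port 80, while two bridge-mode containers can each map container port 80 to a different host port:

#host mode: both containers share the host's port 80, so the second nginx
#cannot bind and exits almost immediately
[root@dockernet2 ~]# docker run -d --network=host nginx
[root@dockernet2 ~]# docker run -d --network=host nginx

#bridge mode: both run happily on different host ports
[root@dockernet2 ~]# docker run -d -p 8080:80 nginx
[root@dockernet2 ~]# docker run -d -p 8081:80 nginx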

From a network perspective, host mode is a little more like running multiple applications on a host than running multiple containers.  Not entirely (the containers are still there), but that is how the networking behaves.

Good?  Good.  On to a simpler topic, the “none” driver.
