
So what does the networking look like from inside the container? Well, unfortunately I can’t run an ip addr show because the ip command doesn’t exist in this image, but I know it ships in the iproute package. So from the container (notice the prompt change), let’s try…
[root@98c3dbff6afc /]# yum install iproute
<...snip...>
Installed:
  iproute.x86_64 0:3.10.0-74.el7

Dependency Installed:
  iptables.x86_64 0:1.4.21-17.el7
  libmnl.x86_64 0:1.0.3-7.el7
  libnetfilter_conntrack.x86_64 0:1.0.6-1.el7_3
  libnfnetlink.x86_64 0:1.0.1-4.el7

Complete!
[root@98c3dbff6afc /]#
This is already interesting, yes? I did absolutely nothing configuration-wise on either the host or the container, yet I can install a package from the internet via yum inside a container that sits behind my host’s networking. This is exactly why we’re taking a deeper look.
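We’ll dig into the host side of this shortly, but if you want a quick preview of how that outbound traffic gets out, a hedged sketch (assuming the default bridge network, and run on the host rather than in the container) is to list the NAT rules:

iptables -t nat -S POSTROUTING    # list the POSTROUTING NAT rules on the host

You should see a MASQUERADE rule covering the 172.17.0.0/16 container subnet, which is what lets the container’s traffic leave through the host’s interface.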
If I now use ip addr show, you can see the configuration on the container:
[root@98c3dbff6afc /]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
Again, some interesting data points here. We seem to have located our interface #4, right? Notice that it also appears to be one end of a veth pair, paired with interface index 5 (that’s what the @if5 suffix means). After a yum install ethtool we can verify that:
[root@98c3dbff6afc /]# ethtool -S eth0
NIC statistics:
peer_ifindex: 5
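To close the loop from the other direction, a quick check (sketch only; the veth name on the host is auto-generated, so yours will differ) is to look for interface index 5 on the host:

ip -o link show | grep '^5: '    # run on the host; expect a vethXXXX@if4 entry, where @if4 points back at the container's eth0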
The IP on this interface is 172.17.0.2/16, which is the next available IP in that container subnet. It was assigned automatically by Docker’s built-in IPAM driver that we saw configured for this network.
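If you want to see where that address came from, a quick check from the host (output layout can vary by Docker version) is:

docker network inspect bridge    # look at the IPAM "Config" section for the 172.17.0.0/16 subnet, and the "Containers" section for each container's assigned IPv4Address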
And I can do an ip route show here as well:
[root@98c3dbff6afc /]# ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
Nothing too surprising here, except that the default gateway is actually the docker0 bridge IP on the host. So we can surmise that the traffic for our yum installs goes from the container to the bridge on the host, and then out.
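As a quick sanity check of both halves of that statement (again a hedged sketch, assuming the default setup), you can ask the kernel in the container which route an external address would take, and look at the address held by docker0 on the host:

ip route get 8.8.8.8       # in the container: should report the route via 172.17.0.1 dev eth0
ip addr show docker0       # on the host: docker0 should hold 172.17.0.1/16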
Next we’ll head back to the host and look at some other information on the network config.