Behind the scenes with Linux containers
by Seth Kenlon
Can you have Linux containers without Docker? Without OpenShift? Without Kubernetes?
Yes, you can. Years before Docker made containers a household term (if you live in a data center, that is), the lxc project was developing the concept of running a kind of virtual operating system, sharing the same kernel, but *contained* within defined groups of processes.
Docker was originally built on top of lxc, and today there are plenty of platforms that leverage the work of lxc both directly and indirectly. Most of these platforms make creating and maintaining containers sublimely simple, and for large deployments it makes sense to use such specialized services. However, not everyone manages a large deployment or has access to big services in order to learn about containerization. The good news is that you can create, use, and learn containers with nothing more than a PC running Linux and this article.
Sidestepping the simplicity
If you're looking for a quick start guide to lxc, stop reading this article and go to linuxcontainers.org. This article gives you a scenic tour of lxc, and its goal is to help you better understand containers so that when you use them in real life, you know how to troubleshoot.
Installing lxc
If it's not already installed, you can install lxc with your package manager.
On Debian, Ubuntu, and similar:
$ sudo apt install lxc
On Fedora or similar:
$ sudo dnf install lxc lxc-templates lxc-doc
Creating a network bridge
Most containers assume that a network is going to be available, and most of the tools around containers expect the user to have the ability to create some number of virtual network devices. The most basic unit required for containers is the network bridge, which is, more or less, the software equivalent of a network switch. A network switch is a little like a smart Y-adapter that you use to split a headphone jack so two people can hear the same thing with two separate headsets, except instead of an audio signal, a switch bridges network data.
You can create your own software network bridge such that your host computer as well as your container OS can both send and receive different network data over what is actually one network device (either your ethernet port or your wireless card). This is an important concept that often gets lost once you graduate from generating containers manually, because no matter the size of your deployment, it's highly unlikely that you have a dedicated physical network card for each container you'll be running. It's vital to understand that containers talk to virtual network devices, so that should a container lose its network connection, you know where to start troubleshooting.
To create a network bridge on your machine, you must have the appropriate permissions. For this article, use the `sudo` command to operate with root privileges (lxc docs provide a configuration to grant users permission to do this without the use of sudo).
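One piece of the sudo-free configuration the lxc docs describe is the `/etc/lxc/lxc-usernet` file, which grants an unprivileged user a quota of virtual network devices on a given bridge (see `man lxc-usernet`). A minimal sketch, where "seth" is a placeholder username:

```
# /etc/lxc/lxc-usernet
# Format: user  type  bridge  count
# Allows the user "seth" to attach up to 10 veth devices to br0
# without sudo.
seth veth br0 10
```

For this article, though, plain `sudo` is all you need.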
$ sudo ip link add br0 type bridge
Verify that the imaginary network interface has been created:
$ sudo ip addr show br0
7: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc
noop state DOWN group default qlen 1000
link/ether 26:fa:21:5f:cf:99 brd ff:ff:ff:ff:ff:ff
Since br0 is seen as a network interface, it requires its own IP address. Choose a valid local IP address that does not conflict with any existing IP address on your network, and assign it to the br0 device. Include a prefix length; without one, `ip` assumes a host-only /32 address:
$ sudo ip addr add 192.168.168.168/24 dev br0
And finally, ensure that br0 is up and running:
$ sudo ip link set br0 up
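Because a forgotten prefix length fails silently rather than loudly, it can help to validate the address string before handing it to `ip addr add`. The following is a hypothetical helper (not an lxc or iproute2 tool), sketched as a POSIX shell function:

```shell
# Hypothetical helper: sanity-check an IPv4/CIDR string before
# handing it to `ip addr add`. Without an explicit prefix length,
# `ip` assumes a host-only /32, which would leave the bridge unable
# to reach the container's /24 network.
valid_cidr() {
    case $1 in
        */*) ;;              # a prefix length must be present
        *) return 1 ;;
    esac
    addr=${1%/*}
    prefix=${1#*/}
    [ "$prefix" -ge 0 ] 2>/dev/null && [ "$prefix" -le 32 ] || return 1
    oldifs=$IFS; IFS=.
    set -- $addr             # split the address into its octets
    IFS=$oldifs
    [ $# -eq 4 ] || return 1
    for octet in "$@"; do
        [ "$octet" -ge 0 ] 2>/dev/null && [ "$octet" -le 255 ] || return 1
    done
}

valid_cidr 192.168.168.168/24 && echo valid          # prints "valid"
valid_cidr 192.168.168.168 || echo "missing prefix"  # prints "missing prefix"
```

Run it against whatever address you picked before assigning it to br0.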
Container config
The config file for an lxc container can be as complex as it needs to be for you to define a given container's place in both your network and the host system, but for this example the config is simple. Create a file in your favourite text editor and define a name for the container and the required network settings:
lxc.utsname = opensourcedotcom
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bd
lxc.network.ipv4 = 192.168.168.1/24
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596
Save this file in your home directory as `mycontainer.conf`.
The `lxc.utsname` is arbitrary. You can call your container whatever you like; it's the name you'll use when starting and stopping the container.
The network type is set to `veth`, which is a kind of virtual ethernet patch cable. The idea is that the veth connection goes from the container to the bridge device, which is defined by the `lxc.network.link` property, set to `br0`. The IP address for the container is in the same network as the bridge device, but unique to avoid any collisions.
With the exception of the `veth` network type and the `up` network flag, all values in the config file are invented by you. The list of properties is available from `man lxc.container.conf` (if this is missing on your system, check your package manager for separate lxc documentation packages). There are several example config files located in `/usr/share/doc/lxc/examples`, which you should review later.
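Since a typo in a key name tends to surface only when the container fails to get a network, you can grep the config for the keys used in this article before launching anything. This is a hypothetical pre-flight check, not an lxc command; note too that these are the legacy key names, which lxc 3.0 and later renamed (`lxc.uts.name`, `lxc.net.0.type`, and so on):

```shell
# Hypothetical pre-flight check: confirm the config file defines the
# keys this article relies on, so a typo fails loudly here rather
# than at lxc-execute time.
check_conf() {
    conf=$1
    for key in lxc.utsname lxc.network.type lxc.network.link; do
        grep -q "^$key[[:space:]]*=" "$conf" 2>/dev/null ||
            echo "missing: $key"
    done
}

check_conf ~/mycontainer.conf
```

No output means all three keys were found.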
Container shell
At this point, you're two-thirds of the way to an operable container: you have the network infrastructure, and you've installed the imaginary network cards in an imaginary PC. All you need now is to install an OS.
However, even at this stage you can see lxc at work by launching just a shell within a container space:
$ sudo lxc-execute --name basic \
--rcfile ~/mycontainer.conf \
--logfile mycontainer.log /bin/bash
#
In this very bare container, have a look at your current network configuration. It should look familiar, yet unique, to you:
# /usr/sbin/ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state [...]
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
[...]
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> [...] qlen 1000
link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2003:db8:1:0:214:1234:fe0b:3596/64 scope global
valid_lft forever preferred_lft forever
[...]
Your container is aware of its fake network infrastructure, and also of a familiar-yet-unique kernel:
# uname -av
Linux opensourcedotcom 4.18.13-100.fc27.x86_64 #1 SMP Wed Oct 10 18:34:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Use the `exit` command to leave the container:
# exit
Container OS
Building out a full containerized environment is a lot more complex than the networking and config steps, so you can borrow a container template from lxc. If you don't have any templates, look for a separate lxc template package in your software repository.
The default lxc templates are available in `/usr/share/lxc/templates`:
$ ls -m /usr/share/lxc/templates/
lxc-alpine, lxc-altlinux, lxc-archlinux, lxc-busybox, lxc-centos, lxc-cirros, lxc-debian, lxc-download, lxc-fedora, lxc-gentoo, lxc-openmandriva, lxc-opensuse, lxc-oracle, lxc-plamo, lxc-slackware, lxc-sparclinux, lxc-sshd, lxc-ubuntu, lxc-ubuntu-cloud
Pick your favourite, and then create the container. This example uses Slackware:
$ sudo lxc-create --name slackware --template slackware
Watching a template being executed is almost as educational as building one from scratch; it's very verbose, and you can see that lxc-create has set the "root" of the container to `/var/lib/lxc/slackware/rootfs`, and that several packages are being downloaded and installed to that directory.
Reading through the template files gives you an even better idea of what's involved: lxc sets up a minimal device tree, common spool files, an fstab, init files, and so on. It also prevents some services that make no sense in a container, like udev for hardware detection, from starting. Since the templates cover a wide spectrum of typical Linux configurations, if you do intend to design your own, it's wise to base your work on the existing template closest to what you're setting up; otherwise, you're sure to make errors of omission, if nothing else, that the lxc project has already stumbled over and accounted for.
Once the minimal OS environment has been installed, you can start your container:
$ sudo lxc-start --name slackware \
--rcfile ~/mycontainer.conf
You have started the container, but you have not attached to it (unlike the previous basic example, you're not just running a shell this time, but a containerized operating system). Attach to it by name:
$ sudo lxc-attach --name slackware
#
Check that the IP address of your environment matches the one in your config file:
# /usr/sbin/ip addr show | grep eth
34: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 [...] 1000
link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0
Exit the container, and shut it down:
# exit
$ sudo lxc-stop --name slackware
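If you find yourself repeating this start/attach/stop cycle while experimenting, it can be wrapped in a small function. This is a hypothetical convenience helper, not part of lxc; the `DRYRUN` mode prints the commands instead of executing them, so you can preview a run without sudo:

```shell
# Hypothetical helper: bundle the lifecycle commands used above.
# DRYRUN=1 prints each command rather than running it via sudo.
lxc_cycle() {
    name=$1
    conf=$2
    run() {
        if [ "${DRYRUN:-0}" = "1" ]; then
            echo "$@"
        else
            sudo "$@"
        fi
    }
    run lxc-start --name "$name" --rcfile "$conf"
    run lxc-attach --name "$name"
    run lxc-stop --name "$name"
}

DRYRUN=1 lxc_cycle slackware ~/mycontainer.conf
```

The dry run prints the three lxc commands, in order, with the name and config file filled in.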
Real-world containers with lxc
Of course, in real life, lxc makes it easy to create and run safe and secure containers. Containers have come a long way since the introduction of lxc in 2008, so use the expertise of the lxc developers to your advantage.
The process is simple, so follow the instructions on linuxcontainers.org/lxc/getting-started, but hopefully this tour of the manual side of things has helped explain what's going on behind the scenes.