Experimenting with LXC containers
At Our World In Data, we're a small team akin to a research group. Our infrastructure started out pretty simple: just a few hand-managed Linux machines on DigitalOcean. We could get away with it since our publication is statically rendered on deploy, with Netlify solving any scalability needs for us.
Over time, though, our machines have grown more and more complex, with multiple users installing things and setting up cron jobs that eventually became mission-critical. Now we're in the process of teasing these things apart and making it all more manageable again. This led me to reflect on our hosting options.
Would containerising our work simplify things and bring us benefits?
Docker: mixed experiences
Honestly, I have a love-hate relationship with Docker. Years ago, when we first introduced it at 99designs, it was totally apparent that, at the time, it introduced as much complexity as it solved, and swapped one set of problems we were having for another. At the same time, we had great trouble with local dev environments, and even putting one dev on stabilising and automating them for months did not yield good results.
Years later at Lifesum, the dev ops team built really nice autoscaling backend infrastructure, meaning that as a backend engineer you could throw together a new service and ship it easily. It was a kind of walled garden, but a really nice garden.
Today at Our World In Data we use Docker and docker-compose to make our dev environment repeatable. Within engineering, we are semi-comfortable with Docker. But it's not just engineers writing production code; it's also our team of data managers. Would it really be feasible to up-skill everyone and bring them along on a new Dockerisation of all our work?
The investment seemed massive, and one that might throw away the flexibility, comfort and transparency that working on a live server can bring. What other options are there?
LXC: multi-process containers
Although Docker has moved on now, it was originally just some magic sauce on top of an existing containerisation framework in Linux: LXC containers.
Like FreeBSD jails, an LXC container is a controlled subset of the host system that appears like its own whole system. You run LXD (the container manager) on top of Ubuntu, spin up a container, and it appears to be its own fresh Ubuntu system.
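As a rough sketch of what that looks like in practice (the container name and Ubuntu release below are just examples, not our actual setup):

    # one-time setup of LXD on the host; the defaults are fine to start with
    lxd init

    # launch a fresh Ubuntu container
    lxc launch ubuntu:22.04 my-test-server

    # open a shell inside it -- from here it feels like its own machine
    lxc exec my-test-server -- bash

From inside that shell you can apt install things, add users and set up cron jobs just as you would on a standalone server.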
Critics of this call it the "sysadmin" version of containers. You get a container, but it's just another Linux system to administer. A Docker container is more portable: you can ship it to registries and run it on autoscaling cloud providers. That makes Docker containers the modern "dev ops" solution, better suited to mass automation.
But, let's look at LXC for what it offers. You had one Linux machine. Now it can appear to be many, each managed independently, but sharing resources. How does that help?
Teasing data services apart
In our situation, we're trying to make our Linux servers more manageable. To do this, we want to separate concerns, and to automate where possible to make our servers more disposable.
LXC containers definitely let us separate concerns. If three people want to run different low-utilisation things on a machine, they can now appear to have three different machines.
But if we're running on a cloud provider like DigitalOcean, we already had this, right? We could already spin up a new machine for every use case.
The reason we weren't doing this is that we have a lot of rarely run high-memory use cases. These are common with data work, but poorly matched to the pricing of cloud servers. Do you want to pay $100/month just to run something at midnight each day? This logic is how we ended up with far too many things crammed on one cloud machine. Using containers we can get better utilisation of fewer machines, and thus pay less.
The ultimate version of this, which we're experimenting with, is to run dedicated hardware at Hetzner that is way overspecced for our needs, but still way cheaper than cloud servers. Then it's very easy to run many LXC containers, each with abundant resources, but with good separation between them.
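LXD also lets you cap what any one container can use if it starts hogging the box; a hedged example (the numbers are illustrative, not limits we actually apply):

    # optional per-container resource caps -- the values here are made up
    lxc config set my-test-server limits.cpu 8
    lxc config set my-test-server limits.memory 32GiB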
Usability: Tailscale + automation
So now you can spin up your own Ubuntu machine, on some server, as an LXC container. But it doesn't really feel like a normal machine, since you have to access it in an awkward way. We use Tailscale to solve this.
Tailscale runs as a service on your local machine and on all your remote machines, and creates a shared network interface like a VPN between them. With Tailscale installed on an LXC container I called my-test-server, I can now log into it with ssh my-test-server, and Tailscale's DNS handling on my laptop will magically translate this into the right IP on the Tailscale network interface.
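Getting a container onto the tailnet is only a couple of commands (a sketch, assuming MagicDNS is enabled for the tailnet so that names resolve):

    # inside the container: install Tailscale and join the tailnet
    # (tailscale up prints a login URL to authorise the new node)
    curl -fsSL https://tailscale.com/install.sh | sh
    tailscale up

    # from my laptop, the container is now reachable by name
    ssh my-test-server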
There is a definite performance penalty to connecting over Tailscale compared to a direct connection; a 50% reduction in bandwidth is not abnormal. That would be a killer for some use cases, but for us it's far outweighed by the convenience of having our servers stitched together well.
This raises the question of how you ensure that Tailscale gets installed on new containers, or that your SSH keys are even there in the first place. You do this through automation.
A common setup script for every server, plus some server-management helper scripts, ensures that any new server has all our SSH keys on it, has common firewall settings, and has Tailscale installed and enabled.
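A minimal sketch of what such a bootstrap script could look like (the keys, firewall rules and package choices are placeholders, not our actual script):

    #!/usr/bin/env bash
    # illustrative bootstrap for a fresh container -- run as root
    set -euo pipefail

    # add the team's SSH public keys (placeholder keys shown)
    mkdir -p /root/.ssh
    cat >> /root/.ssh/authorized_keys <<'EOF'
    ssh-ed25519 AAAA... alice@example.org
    ssh-ed25519 AAAA... bob@example.org
    EOF

    # basic firewall: allow SSH in, deny other inbound traffic
    apt-get update && apt-get install -y ufw
    ufw allow 22/tcp
    ufw --force enable

    # install Tailscale, enable it at boot, and join the tailnet
    curl -fsSL https://tailscale.com/install.sh | sh
    systemctl enable --now tailscaled
    tailscale up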
The final picture
In short, we're migrating from an all-hand-rolled server setup to a semi-automated one, where crucial services have fully-automated recipes to spin them up, but where it's also very easy for staff members to spin up a playground and do what they want there without incurring any extra cost to us.
The hosting for the new setup is on a beefy dedicated Hetzner server, which doesn't do anything except serve as an LXC host. Tailscale serves as the bridge between all our development machines, servers and LXC containers.
We think this will be a really good environment for doing scrappy, productive data work as well as building systems we can rely on: the right amount of complexity for us, for now.