HACKER Q&A
📣 andrewstuart

I need to learn to love containers – HN please show me the good side?


I played with containers when Docker first came out and I hated them with a passion. It seemed to me that they were horribly complex, replicated a whole bunch of operating system functionality, and just made things hard, hard, hard - not to mention needing clunky, fat layered file systems and constant fiddling with ports and configuration. Ugh, the whole thing seemed a truly ugly mess that sent complexity through the roof.

But I need to distribute some server software and my understanding is these days one of the expected ways to do that is to deliver some sort of containerised package.

The whole thing will be easier for me to swallow if I can come to love containers.

Can someone who loves containers share the love please? Help me understand why I should love containers instead of seeing them as the ugly pile of unnecessary mess that I currently see.


  👤 rektide Accepted Answer ✓
don't try to love em. mostly just try not to get so wound up. resist the urge to form strong opinions or to crave what you know. "scout mindset". just observe, try, learn. but here's some things about containers:

it's better than vm's. easier to manage, better extends the host rather than replacing it with a bunch of very non-uniform vm's.

it's nice having an abstraction to run & manage stuff, without having to learn to manage a bunch of particular languages or runtimes specifically. each container brings its own tools, true, but there's still some general operations in common.

containers go nicely with "12 Factor" development (http://12factor.net): a shared, standard baseline of assumptions that should fit most jobs.
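the core of the 12-factor idea is that config comes in through the environment rather than being baked into the image, so `docker run -e` is all it takes to re-point a container. a minimal sketch (the variable names here are made up for illustration):

```shell
# 12-factor style config: the app reads everything from environment
# variables, so the same container image runs unchanged in dev and prod.
# PORT and DATABASE_URL are hypothetical names for illustration.
PORT="${PORT:-8080}"
DATABASE_URL="${DATABASE_URL:-postgres://localhost/dev}"
MSG="listening on :$PORT, db: $DATABASE_URL"
echo "$MSG"
```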

being able to snapshot & re-spawn new instances of containers is cool but almost never used.

I'm not particularly excited by containers. but we were doing a pretty weird hodgepodge of things before & a lot of it was kind of bogus; more than that, there were just so many competing styles. it was amateurish. containers aren't super awesome but they're ok. they work pretty decently, and they make good use of a lot of really solid, powerful kernel features that you should want to be using (cgroups).


👤 oftenwrong
You're right: containers and the container ecosystem are complicated and a bit messy.

However, containers solve real problems; real problems that you are going to want to solve one way or another, whether you choose to use containers or not.

For example:

1. making sure your application is run with the correct dependencies

2. keeping your application (somewhat) isolated from other applications on the same host

3. packaging your application in a way that is widely compatible

You don't have to use containers, but you will gain a lot of insight from understanding the problems they solve well, and where their rough edges are.

Additionally, some of the things that seem ugly are, in some cases, necessary, or design decisions that do make sense in context.

For example, the use of a layered filesystem. If I am building my application on a given base image, then when I deploy a new version of my application, the host only needs to pull the top layers that contain my application. If I have many applications on the same base layers, that amplifies the savings on pulls and storage. Furthermore, a running container can perform all of its writes on a layer, which can just be discarded when the container exits. There is no need to make a copy of the image FS for the container, which saves on startup time. The layered filesystem approach has a lot of benefits. The cost (downside) is having to understand the additional complexity of layers, and having to structure the layers in such a way as to avoid things like excessively large images.
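As a sketch of how that plays out in practice (the base image and paths here are placeholders, not a prescription): order the Dockerfile so the slowest-changing layers come first, and only the top layer has to be rebuilt and re-pulled when the application changes:

```dockerfile
# Illustrative only -- image name and paths are placeholders.
# Layers are cached top-to-bottom, so put what changes least first.
FROM debian:bookworm-slim

# Dependency layer: rebuilt only when this RUN instruction changes.
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*

# Application layer: the only layer hosts must re-pull on a new release.
COPY ./app /opt/app
CMD ["/opt/app/server"]
```

Deploying a new version of the app invalidates only the final `COPY` layer; the base and dependency layers stay cached on every host that already has them.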

As for love, don't place too much importance on it... in the context of technology, that is. To quote the ever-relevant Choose Boring Technology:

"...you should probably be using the tool that you hate the most. You hate it because you know the most about it."

http://boringtechnology.club/


👤 freehrtradical
I share some of your trepidation but I've learned to like containers for some use cases. In particular -- and I think this might help you have fun with learning -- I like one-liners. For example, the following downloads a "virtual machine" with a Java development kit in it, compiles a program and runs it:

  docker run --rm adoptopenjdk/openjdk11-openj9:ubi sh -c "printf 'public class main { public static void main(String... args) throws Throwable { System.out.println(\"Hello World\"); } }' > main.java && javac main.java && java main"

This is a trivial example, but I think it gets at the value of containers, which is to make things self-contained.

From here on, it does get increasingly complicated, and sometimes questionably so. You'll normally create a Dockerfile, add all your commands there, and add separate files which you'll copy into the container. Then you might use Compose, Kubernetes, Nomad, or a million other technologies to make something more realistic, and each of those has its own annoyances.

But I think the base value for me is the idea of scripting the creation and deployment of self-contained "virtual machines". Arguably, this was done before with Ansible, Chef, etc., but there is some value to a popular, standardized approach and a huge library of images out on Docker Hub that you can simply run/pull. This is particularly useful for reproducing problems for other people to investigate, sharing toy projects, letting people explore your application, etc.


👤 vbsteven
1) I love containers when I’m researching/prototyping potential solutions for a problem. Using containers I can quickly spin up an isolated version of the software without having to mess with build systems, package managers, dependencies etc. Usually a “docker run” with some flags to set environment variables and expose ports is enough. And I can easily remove it from my system after the experiment is done.
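For instance, here's the kind of thing I mean (assuming the Docker daemon is installed; the image and names are just a typical sketch). A throwaway database for an experiment, with nothing left on the host afterwards:

```shell
# Spin up a disposable Postgres to prototype against -- no local install,
# no package manager. POSTGRES_PASSWORD is the image's standard env var.
docker run --rm -d \
  --name scratch-db \
  -e POSTGRES_PASSWORD=devonly \
  -p 5432:5432 \
  postgres:16

# ...run the experiment against localhost:5432...

# --rm means stopping it also deletes the container and its writable layer.
docker stop scratch-db
```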

2) I love docker-compose for describing dependencies during development. Things like databases, caches, message queues. Just one docker-compose.yml file in the project root describing the containers and a “docker-compose up” and I have a full environment for that project up and running.
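A minimal sketch of such a file (service names and image versions here are illustrative):

```yaml
# docker-compose.yml -- illustrative services for a project's dev dependencies
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devonly
    ports:
      - "5432:5432"
  cache:
    image: redis:7
    ports:
      - "6379:6379"
```

`docker-compose up` brings both up together; `docker-compose down` tears the whole environment back down.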

3) Containers can help reproduce complex build environments. Let’s say an Android app with specific versions of Android Sdk, Ndk, cmake, swig. Package the necessary toolchain versions in a container and any team member can reproduce the same build regardless of what OS they are running natively. And if you have the build env in a container then CI tools can use it as well.


👤 yuppie_scum
No more “it worked on my machine”

👤 2rsf
Easy, stable, predictable and maintainable distribution and installation that leads to more stable execution (in theory).


👤 __d
I'm not a passionate container person, but let me try: the basic upside is isolation/containment.

For a moderately complicated application, there'll generally be several backend processes: a database, perhaps a message queue, several application processes. To install and run the application, you would need to, e.g., install one or more package repositories, install a bunch of packages, edit various configuration files, create various users, and then arrange for everything to start (and stop) in the right order. That's kinda painful to set up, and even worse to tear down.

There are/were two popular solutions: automation, and containers.

Automation addresses the grunt work: you script the process of installing and configuring (and later removing) all the bits of this application. Where automation falls short is that it doesn't have the isolation/containment: applications still exist in the same namespaces, and can collide.

A container can package all of this stuff into a single blob, and you can run it by giving it a simple port mapping, and a simple storage mount, and ... it just works. It doesn't matter if you have a different application with a different version of the database server running on the same host. Or a different version of, e.g., Python. You don't have to mess with unique users or groups, and it's trivial to map the externally visible ports where it suits you.
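Concretely, that "port mapping plus storage mount" amounts to something like this (the image name and paths are hypothetical, purely to show the shape of the command):

```shell
# Illustrative: map host port 8080 to the container's port 80, and mount
# a host directory so the app's data survives container restarts.
# myorg/myapp and the paths are placeholders, not a real image.
docker run -d \
  -p 8080:80 \
  -v /srv/myapp-data:/var/lib/myapp \
  myorg/myapp:1.0
```

Everything else -- the app's runtime, its libraries, its users -- lives inside the blob and never collides with the host.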

That was the first-phase pitch. And it made sense to people whose life was spent dealing with the complexities of making things work together.

Things then got less clear cut. So far, this was just a per-machine thing. The obvious next step was to scale "containers" out to get their benefits, but using a whole fleet of hosts, not just one. Welcome to orchestration.

The benefits are still the same: you spin up a bunch of containers, and they are all self-contained, isolated, it-just-works bundles. The orchestration framework deals with how many of each of them you need, putting them in the right places, and hooking things up together. It's *way* less work than doing it by hand, and especially when you do dynamic scaling or failover or config changes.

But ... then containers and orchestration frameworks became kinda the expected way to do things, even when it wasn't really necessary. A simple, self-contained app doesn't need a container, and many things don't need full-blown Kubernetes.

Containers are good, but only if they address a problem you have.

So far as software distribution goes: if your installation instructions are more than a couple of lines, then you might as well put together a Dockerfile, and make your customers' lives easy. And be quietly thankful that we're not distributing VMware images any more :-)