HACKER Q&A
📣 spencerchubb

What Is the Hype with Docker?


I have used Docker more than almost any other tool, and I have genuinely given it a fair shot. After all of that, I still don't understand the appeal.

Docker seems to be nearly an industry standard by now. Some people treat it like an obvious choice, but it's not so obvious to me.

People say that Docker can run anywhere. It solves the infamous problem of "works on my machine". Despite what people claim, I have not found this to be the case. I developed containers on Windows and then I still had to debug the containers when deploying on Linux. There were formatting issues, file-not-found issues, and chmod issues. I have spent so much time configuring Docker, yet I've been able to complete the same task in a VM in a fraction of the time.

Am I alone here? Am I doing it wrong? Is it the case that I am not the intended audience, and it's meant for larger teams?


  👤 hnaccountme Accepted Answer ✓
The real problem is low-skilled developers. Most people working in tech these days have no idea how anything works. Everyone just hacks at the problem until they get an output. There is no real engineering anymore. Tools like Docker just try to mimic what the big tech companies are doing and hope for similar outcomes.

Most people who use Docker don't know how much computing power and network usage they are wasting. Companies don't care as long as they can show some revenue.

I can guarantee that 99% of projects using Docker or any sort of containerization are wasting 99% of their compute resources.

Just waiting for those low-skilled people to reply about how wonderful Docker is and to say "if it's not working, you are not doing it right".


👤 dgrin91
Docker has a bunch of pros and cons, but to me the most important thing is that it standardizes a 'good enough' way to deploy servers. That means I can have one dev write the Dockerfiles used to deploy a server, then later switch to a different dev, and the other dev should be able to easily figure it out. Contrast that to before, when people had custom scripts and VMs and you would always need to spend time digging in to figure out what is actually happening and what's breaking.

It's not perfect, but it's good enough and there is a lot of support for it, so it's easy to plug into my team.
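To make "standardized and good enough" concrete, a Dockerfile for a typical server might look roughly like this (the Node.js app and file names are just an illustration, not from a real project):

    # Illustrative Dockerfile for a small Node.js server
    FROM node:18-slim

    WORKDIR /app

    # Install dependencies first so this layer is cached between code changes
    COPY package*.json ./
    RUN npm ci --omit=dev

    # Copy the application code and declare how it runs
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]

Any dev who has seen a Dockerfile before can read that and know how the server is built and started, which is the whole point.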


👤 dimgl
> I developed containers on Windows and then I still had to debug the containers when deploying on Linux.

I basically avoid Windows altogether when doing development now. The exception is video game development, and even then I wish I didn't have to use Windows.

Docker containers are standard for a reason. In fact, I can't imagine not using Docker as a daily driver nowadays. Within seconds I can spin up a sterile container for my development environment. Or an application. Or a data store. Or a queue. And it's all isolated and communicates seamlessly. I'm now even compiling Unreal Engine servers on Windows and deploying them on Linux machines using Docker containers.
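With Compose, a setup like that might look roughly like this (service names and images are illustrative, not from a real project):

    # docker-compose.yml: an app, a data store, and a queue, each in its own
    # isolated container, reachable from the others by service name.
    services:
      app:
        build: .
        ports:
          - "8080:8080"
        environment:
          DATABASE_URL: postgres://postgres:secret@db:5432/app
          QUEUE_URL: amqp://queue:5672
        depends_on:
          - db
          - queue
      db:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: secret
      queue:
        image: rabbitmq:3

One "docker compose up" and the whole sandbox exists; "docker compose down" and it's gone.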

Are you using Docker Compose by the way?


👤 aristofun
I guess it wasn't meant to run on Windows. And Linux people meant "runs anywhere as long as it's some Linux".

Which is good enough for me, considering how easy it is to install, distribute, run, and manage apps in Docker versus setting things up on bare metal and fighting issues in complex setups until your eyes are red.


👤 foverzar
Hype? Maybe 5 years ago, but nowadays?

> I developed containers on Windows and then I still had to debug the containers when deploying on Linux. There were formatting issues, file-not-found issues, and chmod issues

You probably mean "building", not "deploying". If you've built it on Windows, the container image will be the same on Linux.

> I have spent so much time configuring Docker, yet I've been able to complete the same task in a VM in a fraction of the time.

Now imagine spending this time over and over and over again.

> Am I doing it wrong? Is it the case that I am not the intended audience, and it's meant for larger teams?

Have you been using tools like Vagrant or Ansible?

I think the most common mistake is using Docker as "essentially a VM", with typical VM workflows, instead of as a build automation and distribution tool.
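Used that way, the workflow is roughly this (registry and image names are placeholders):

    # Build an image from the Dockerfile in the current directory and tag it
    docker build -t registry.example.com/myapp:1.2.3 .

    # Distribute it by pushing to a registry
    docker push registry.example.com/myapp:1.2.3

    # On the target host: pull and run, no VM-style "log in and configure" step
    docker pull registry.example.com/myapp:1.2.3
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:1.2.3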


👤 jesterson
It creates the illusion of an environment where instances are something you can just spawn without thinking, so you can "focus on the main task".

This abstraction was quite appealing to developers and others who think we don't need to understand or pay attention to infrastructure anymore.

This caused the hype, which led many organisations into chaos and a subsequent move away from Docker. The bonuses have been paid, however, and the promotions for "innovation" handed out.

Docker is a fine thing to play with, but it should be kept a few thousand miles away from production.

Personally I don't see where it can fit, however large or small the organisation is.


👤 resonious
All I can do is provide another anecdote, but I'm not sure how I'd provision servers without something like Docker. I guess there's Ansible, but Docker is great since I can just put shell commands in the Dockerfile.

At work, we recently started using GitHub Codespaces for development. It uses Docker to set up the dev environment. It's been fantastic and legit has solved 99% of "works on my machine" problems.

What's causing your file-not-found issues? Surely it's not actually Docker, but rather some poorly written Dockerfiles.


👤 root_axis
All bets are off when you're dealing with Windows in the open source world, especially when it comes to something like Docker, which was designed around the Linux kernel and a Unix-style perspective on systems.

Docker's core strength is that it provides your applications with process, network, and filesystem isolation on an individual host. Applications are very sensitive to their host environment and before the popularity of containerization a lot of effort was spent ensuring that host machines were provisioned in a manner that made them capable of supporting application needs. Things could often get hairy when deploying multiple applications to the same host due to conflicting lib versions, network ports, environment variables, file permissions etc. Things could also become precarious when there was a need to manage rollbacks across dependency upgrades on the host.

With Docker (or a similar container solution), all of that pain is abstracted away: you don't have to think about the host machine at all, and the container coddles your application so that it has a pristine working environment, conveniently configurable with environment variables.
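As a sketch, two apps with conflicting needs can share one host, each with its own filesystem, ports, and environment (the image names and variables here are made up):

    # Each container gets its own filesystem, network ports, and env vars
    docker run -d --name billing \
      -e DB_HOST=10.0.0.5 -e LOG_LEVEL=info \
      -p 8081:8080 \
      mycompany/billing:2.4

    docker run -d --name reports \
      -e DB_HOST=10.0.0.6 -e LOG_LEVEL=debug \
      -p 8082:8080 \
      mycompany/reports:1.1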


👤 geoffeg
It seems like most of the value of Docker comes from the Dockerfile more than the containerization. The Dockerfile defines how to build, package, and run an app, including the operating system dependencies. (That's not necessarily Docker-specific, but I got the feeling you weren't asking only about Docker.)
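For instance, a hypothetical Dockerfile that captures the OS-level dependencies along with the package and run steps:

    # Hypothetical Dockerfile for a Python app with an OS-level dependency
    FROM python:3.11-slim

    # OS dependencies are declared here instead of on a wiki page
    RUN apt-get update \
        && apt-get install -y --no-install-recommends libpq5 \
        && rm -rf /var/lib/apt/lists/*

    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    COPY . .
    CMD ["python", "main.py"]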

👤 kodah
> People say that Docker can run anywhere. It solves the infamous problem of "works on my machine".

Linux containers do that, not Docker. However, that statement isn't as universal as some readers might think. Linux containers are just that: for Linux. In reality, it also depends on how the host kernel has been configured if you want two containers to run the same way. Windows has Windows Containers, and the BSDs have jails. They're all quite different because their host operating systems are all quite different. Windows and macOS usually run Linux containers in a hypervisor.

Docker, compared with manually building container images and standing up containers the way you would with plain LXC, is extremely useful. It also inspired the growth of the Docker runtime.


👤 bin_bash
IMO it's a poor tool for local dev. It's just too much of a pain to work inside a VM, or at least what's essentially a VM. The real value is for production deployments, since the way you set up dependencies is consistent across different app types.

In theory it's also better because you can manage a fleet of containers just by looking at their memory and CPU usage without really caring about what's inside, but I think the reality is that k8s is so complex that it's not clear the tradeoff of finer-grained management is worth the difficulty of setting it up. Maybe; it depends on your use case.

People are saying the issue is that you're running it on Windows but I would counter that running on macOS can be just as bad if you're trying to run x86 containers on an ARM box.


👤 bogota
Docker is pretty awesome for deploying your services and having an isolated environment that is the same everywhere. Although with the new M1 ARM chips this is slightly breaking down, since ARM and x86 have inherent differences that unfortunately get introduced into the build environment.

Docker is abused as a development environment, though, and I hate seeing teams try to force everyone to develop inside a container that eventually gets merged with the service itself and turns into a 2GB image nightmare.


👤 tbrownaw
It can take an unknown amount of persistent state (what's installed? how's it configured?) and replace that with a small file that's checked in to source control.

It can also help avoid issues when one person is working on multiple things that have conflicting dependencies.


👤 hsn915
I'm in a similar boat. I hate Docker. I haven't used it myself, but I've seen it used at various companies, and to me as a programmer, it's a nightmare.

Docker "solves" the problem of Linux system configuration, but the solution it provides is literally the equivalent of "shove it under the rug". It doesn't solve the problem; it just attemps to hide it.

The way I solve the problem is by eliminating it; by making programs that don't require any system configuration.

Most people like to program in an interpreted language and use a separate engine to store and manage the data. The language runtime only has a "debug" server, so you need a separate HTTP server, and you need to configure it so that it can execute scripts written in your favorite language.

So this means the system/environment needs to be set up so that 1) an HTTP server is installed and configured, 2) the specific version of the interpreter is installed on the system, and 3) the database engine is set up and configured. I wish it stopped there, but it gets much worse.

So what I do instead is use libraries for all of the above. I use a compiled language. The HTTP server is a library. The database engine is a library. Therefore, nothing needs to be configured in the environment. Just upload the binary and run it.


👤 Izkata
Our issues are even worse than that: not only are we all using Linux, we're all on the same version of Ubuntu and still have constant "works on my machine" problems. Half the team can't even get some projects to run that were developed by another team, and the rest of us usually have to fiddle with them for a while before they'll run.

👤 Donckele
I'm quite good at resisting the latest hype, and in the early years of Docker it was exactly that; more importantly, I had no reason to use it.

Some years ago I had some epic battles with deploying some fancy software in several environments including online and internally on-premise servers.

I do like challenges, but some experiences get old fast: understanding why your software is failing when deployed, fixing deployment configuration files, and so on. The cognitive load of having to remember, check, and fix becomes heavy. On top of that, deployment environments always differ enough to somehow require individual attention.

So, that's when the lightbulb switched on in my head. Using (Docker) containers has completely removed these adventures in deployment. And I'm very happy about this; I don't have to deal with that stress anymore.

Now, with more experience and a black belt in container kung fu, I'm starting to understand what the hype around Kubernetes may mean, but I'm also sensing that a David is around the corner to knock down this Goliath.


👤 CSMastermind
Docker allows you to run multiple workloads on the same machine and have them all share a kernel while each having their own segmented user space so it's much less likely that processes will be able to interfere with each other. This is important for both the security and stability of the system and is similar to BSD Jails with the advantage of not being restricted to that operating system.

Running multiple heavy VMs which segment system resources gives you even better and more complete separation at the cost of being very performance intensive. The industry by and large has opted for containers as a compromise between reasonable separation and performance.

Also, as Docker grew in popularity, an ecosystem of tooling was built around it for container orchestration, image repositories, and development, which makes it worth using just to get access to that surrounding tooling.


👤 ArchD
Docker lets you document/codify the steps used to set up a machine or service.

The Dockerfile contains the steps for creating the image. With docker-compose, you can codify how to set up services.

This is better than the alternative of doing everything by hand, with manual commands and ad hoc edits to random system config files. The manual way is error-prone and not easily repeatable.

I use docker-compose for our internal gerrit, wiki and XMPP server setup.

If I were to do this setup by hand, e.g. on a real machine or VM with manual edits to random config files all over the place, it would not be clear exactly what I need to back up or how I would rebuild the server if it were destroyed somehow.

With docker-compose, the data I need to back up is all in one directory on the host machine, and the services/servers themselves can be recreated from the Dockerfile and docker-compose.
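Roughly what that looks like (the images, ports, and paths here are illustrative, not my actual setup):

    # docker-compose.yml: each service keeps its state under ./data,
    # so "what do I back up?" has a one-directory answer.
    services:
      gerrit:
        image: gerritcodereview/gerrit
        ports:
          - "8080:8080"
          - "29418:29418"
        volumes:
          - ./data/gerrit:/var/gerrit
      xmpp:
        image: prosody/prosody
        ports:
          - "5222:5222"
        volumes:
          - ./data/prosody:/var/lib/prosody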


👤 tluyben2
Like others said, it is less than great under Windows, but from Linux to Linux or Mac to Linux it's mostly painless. Most problems arise if you try to do anything outside containers; for instance, having some stuff inside containers and then developing other things outside, under macOS, can lead to (known) horrible network performance issues. However, because it's known, you just adjust your flow a bit on autopilot (experience) and everything will indeed work automatically on Linux.

I resisted it at first but now I cannot do without, neither in team projects nor personal ones. It can be improved but it is better than the alternatives that I know of.


👤 danwee
For teams where there are people in charge of:

- installing Docker on the servers

- keeping Docker up-to-date

- keeping a private image registry (or renting one)

- regularly scanning Docker images for vulnerabilities

then Docker is fine. But if you are a solo developer, Docker is probably overkill. I mean, working with Docker doesn't just mean "write your Dockerfile"; it means all the bullet points I wrote above. Writing the Dockerfile is the easy part. If I rent a machine on DigitalOcean for $5/month, then I have to put on the "infra engineer" hat, install Docker on that machine, and keep it up to date, and I also need a way to keep my Docker images and scan them...


👤 aorth
I agree with you. I use containers for local development stuff (need a different version of PostgreSQL? Need some obscure thing and don't want to build it?) and for CI obviously, but not for production. The paradigms are just too different from what I'm used to; I prefer standard Linux deployments managed by Ansible, services deployed via systemd, etc. Podman is getting closer to what I'd one day be able to use for containers in production because it doesn't need root, doesn't mess with my iptables/nftables, is fully open source, etc.

👤 croo
I am (un)lucky enough to have had to manage several test, QA, and production server farms back in the more manual, bash-scripty days: logging in to 12 different VMs to set up proxies, NTP, release versions, whatnot.

I once spent 3 weeks with several colleagues working through multiple hundred pages of documentation to set things up, and the error was always a bad IP address, a wrong configuration, or missing values in arcane places.

For me the value of Docker was clear from the beginning; it was love at first sight.


👤 lallysingh
Yes you're doing it wrong. It's effectively Linux tech, and attempts to shoehorn it into Windows can only cause pain. Run it on Linux and it's very smooth.

I just had this discussion with a colleague. It does, semi-half-assedly but quite effectively, what Nix promises. You can make hermetic images not just of binaries (that'd only be as good as static linking) but of entire machines (sans the Linux kernel, because you're running what the host runs, but that's always Linux so it's fine; when it's not Linux, you're not having a good day).

Wrap that up in a stackable system (build your own images atop others) and a way to quickly run processes out of images, and you've got yourself a highly portable and reusable system for apps. Want to build an app to run on a random k8s node in AWS? Shove it in a Docker image and now you've just got to write some YAML.

Got a few Python libraries that you want packaged atop of all of pandas+jupyter+numpy+whatever else? Reference a stock prebuilt Jupyter image (there are many official ones you can start with) and add the few packages you want in a Dockerfile, and bam, you've got a redistributable data science appliance as a Docker image.
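Something along these lines, starting from one of the stock Jupyter images (the added packages are just examples):

    # Start from a prebuilt Jupyter image and layer a few extra packages on top
    FROM jupyter/scipy-notebook:latest

    # The handful of project-specific libraries this appliance needs
    RUN pip install --no-cache-dir plotly statsmodels

Build it, tag it, and anyone can docker run the same data science environment.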

I can build Docker images on my laptop and then run them on my crappy Celeron NAS for months without thinking twice about it.


👤 esalman
Our research lab has a team who develop all their pipelines in Docker. A few months ago the IT department migrated all HPC nodes from CentOS 6 to Ubuntu. The rest of the lab has moved on, but the Docker team still hasn't, because all their workflows broke, and they keep insisting that some of the remaining nodes not be migrated.

👤 awill88
Personally, Docker is becoming boring and irritating. I was a huge fan and an early adopter; let me ride in on this crazy train of software engineering. But after they inevitably went enterprise I died inside; end of an era.

Now that Docker is the mainstay of beginner/intermediate engineering efforts, it's just kinda meh. It's not containers, it's the way communities prop weird stuff up, as if Docker were some prerequisite for moving fast. And hey, options like Bazel and Nix exist (and hopefully more to come) that build reproducible systems without always needing a hypervisor.

I also find it super irritating that you don't get any simplicity out of it unless you use a docker-compose.yaml, which makes Docker feel like a tool without batteries included.

Once people start using it as a dev container too, it's just gross. That's only my opinion though; it's also a fine choice for balling out, so I respect any and all hustle at the end of the day. To each their own.


👤 Kaotique
The containerisation part is where it shines. Collecting and running any dependency for your image with the Dockerfile makes it very flexible and independent of the host platform.

I can agree that the problems start at the edge between the host platform and the containers: networking, mounted volumes, conflicting user and group IDs.


👤 topspin
So many different versions of Java, Python, node, PHP, databases, etc. Docker makes it feasible to cope with it all.

I've taken to using s6 init to create self-contained application stacks inside containers, with log rotation, cron jobs, start/stop ordering, etc. Months later I can return to prior work and spin the whole thing up on the first try.

Do whatever, tag, push, deploy.

Yes, there are big missing pieces. The whole one "process" per container mantra is tragic. Those that spout it don't even have a correct understanding of the term "process." Aligning UIDs between hosts and containers is awful. Many other hang-ups and glitches.

Not perfect. VMs and package managers streamlined to the ease of use that Docker delivers would be better. Much dysfunction has its roots in trying to containerize things that should be VMs.


👤 ldjkfkdsjnv
I can wrap up a complex Python deployment: dependencies, models, random stuff, all the ML libraries that are tough to install. Then I can push it into the cloud and run it. If it gets deleted, I can create the exact same image, or revert to an old one.

👤 alphabettsy
Can't speak to issues with Windows and Docker, but teams I've worked on have effectively used Docker to eliminate the "runs on my machine" problems. I've only used macOS and Linux for the last ten or so years, and for me it works really well on both.

I’ve had one or two coworkers on Windows and they used WSL I think. Can’t comment on their experience.

The Dockerfile prescribes exactly what operating system and packages the container has. Docker Compose sets up dependencies in other containers, and in most cases this was all that was required to run and test. It does require you to make some platform-agnostic decisions about how you structure both, but I find it easier than scripting.

Outside of that YMMV.


👤 cmckn
> I developed containers on Windows

Yeah, don’t do that.


👤 deeteecee
I think with different OSes you're still going to have some challenges, but the fact is you still get better consistency, especially if you're on a Linux machine. I use a Mac and have minor issues with the network, performance, and having to manually clean up my Docker storage from time to time, but I'll take that any time over wondering why I can't install something or reproduce an issue in another environment.

I don't see it as hype. You might just be frustrated by some of the minor issues.


👤 gitgud
> People say that Docker can run anywhere. It solves the infamous problem of "works on my machine". Despite what people claim, I have not found this to be the case. I developed containers on Windows and then I still had to debug the containers when deploying on Linux.

It's not actually cross-platform as the containers run on the host kernel, meaning images are CPU-architecture-specific.

NPM also has this issue, where the OS and architecture matter for some packages containing native binaries.


👤 bdcravens
I've seen the Windows issues you're describing in most iterations of development tooling over the years (Vagrant, developing directly on your machine, etc.). If you have a VM with a shared volume and you don't take care with file names and permissions, going Docker-less won't solve the problem for you.

👤 incomingpain
The 'hype' around Docker was greatly reduced about a year ago; I need not go into why, as it's off topic.

The major appeal of Docker to me is that it's a new/extra layer of security. You can run known-vulnerable software inside, and when it gets compromised, so what? The attacker lacks the tooling to pivot or escape the container.


👤 flakron
Depends on the use case I would say.

Anecdote: at one of the companies I worked for (smallish team), different tools written in different languages needed all kinds of dependencies, and Docker was invaluable in making the infrastructure easy to run without having those dependencies in the infra itself.


👤 nurettin
It is either docker, or you will have to write an install and run/service script for every OS you are going to deploy to.

I prefer to keep target operating systems uniform, so a Docker layer isn't needed: just an install script that downloads packages and copies the needed systemd service files.


👤 Saphyel
A lot of people in this thread have already mentioned the benefits, so the big question is: why the hate? I kind of understand people who don't want to learn new things being against it... but everyone else? Do you dislike making your life easier?

👤 ximm
My biggest concern is that the Docker daemon runs as root, which allows for some easy privilege escalations. Having Docker installed basically gives everyone root access. I am not sure if this is still the case, though.

👤 a_t48
I've never run it under Windows - was this regular Windows or Windows+WSL?

👤 jelled
Before we used Docker, I'd spend hours troubleshooting team members' local development environments. Once we switched to a common, source-controlled docker-compose file, all those problems simply went away.

👤 fulafel
The problem is believing the branding. The Docker company gave the emulated version and the native version the same name, but they're very much not the same thing. It's a Linux-native system in the end.

👤 forty
If you have issues with file ownership, rootless podman with --userns keep-id is a pleasure to work with. And all that without that messy daemon that forces you to be root and messes with your firewall.
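For reference, roughly what that looks like (the image and mount paths are just an example):

    # Rootless Podman; --userns=keep-id maps your UID/GID into the container,
    # so files created in the mounted directory stay owned by you on the host.
    podman run --rm -it \
      --userns=keep-id \
      -v "$PWD":/work -w /work \
      docker.io/library/alpine:3.19 sh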

👤 laserbeam
You are not alone in the tyrannical fight against "management says we have to use docker". Worry not. Join the resistance. (There should be a docker resistance manifesto somewhere...)

👤 simonw
Before Docker, especially when I was working with PHP and Python, my single biggest concern was always that I'd need to use some dependency that was difficult to install on my server.

Docker fixed that.


👤 revskill
A dev who uses Docker is taking care of the other team members; selfishness in dev environment setup only hurts everyone else.

It's not about Docker itself; it's just a tool. But with Docker, you can see how teamwork is done.


👤 eb0la
I miss containers on OpenBSD. It can probably be done, but I've never had the time to research it.

👤 dboreham
It's kind of janky but it's the best thing we have right now.

👤 wallfacer120
Memory is cheap, developer time is expensive. You're welcome.

👤 block_dagger
A post like it's 2016… Docker is an invaluable tool for pretty much any kind of development. It's the missing layer, and it's glorious.

👤 jiggawatts
> Am I alone here? Am I doing it wrong? Is it the case that I am not the intended audience, and it's meant for larger teams?

No. Probably not. Maybe, but also probably not; you may just not have had the "Ah-hah!" moment. A lot of technologies are like that.

Take any random Windows or Linux server someone else has built. Log in to it. Now tell me how it was built, such that the build can be reproduced on a slightly different platform, e.g. a different processor or base image.

Good luck with that.

Traditional operating systems are write only. Trying to get the configuration back out after the fact is effectively impossible in the general case.

Even if you have a "scripted process", "policy", "documentation", or "declarative state configuration", drift occurs anyway because of things such as disk corruption, updates applying out-of-order, different levels of updates, quick fixes/tests, etc, etc...

It is basically impossible to keep a large fleet of supposedly identical machines identical.

As a random example from the Windows world: I've seen Group Policy applied to user accounts such that when an admin user logs on to a server, that triggers an on-login install of some software. Could be anti-virus, a print utility or whatever. Suddenly pristine servers become... not so pristine.

People have tried to work around this kind of thing... clumsily.

For example, in the Windows world you can build standard operating environment (SOE) disk images for servers using a tool like System Center Configuration Manager (SCCM) task sequences. This can deploy Java runtimes, ODBC drivers, or whatever your business app needs. (Linux has things like Packer.)

The principle is fine, the implementations however are terrible.

A complex SCCM Task Sequence might run for hours and then fail at step number 57 out of 83. Fix it, start from the beginning.

Oops, failed again after several more hours. Fix it, try again.

Success! Now step #62 is failing. Fix it, try again.

This can mean days down the drain. Wouldn't it be just awesome if the build system could take snapshots of the VM at each step, so that it could restart at the last known good step, making the edit-build cycle super fast?

Docker does that.

Build systems like SCCM task sequences or Packer aren't typically parameterised, or the parameters are "baked in" at build time, and/or get passed at annoying points such as a very early "network boot" step, which has technical and security limitations.

Docker does the right thing, simply passing in parameters as environment variables at runtime. This allows one image to be used in many environments without changing it. If you test something in UAT, then production will be byte-for-byte identical (SHA-256 hashed!), except for the minimum required config changes.
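A sketch of that pattern (the image name, env files, and digest are hypothetical placeholders):

    # The same image, addressed by digest, runs in UAT and production;
    # only the runtime configuration differs.
    docker run -d --env-file uat.env  registry.example.com/orders@sha256:<digest>
    docker run -d --env-file prod.env registry.example.com/orders@sha256:<digest>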

Typical build systems produce a single opaque binary image. This could contain anything, and the "link" to the source script that produced that build is immediately lost.

Docker maintains the step-by-step build script that led to the final Docker image, in the image itself.
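You can see this for any image with docker history, which lists the instruction behind each layer (image name is a placeholder):

    # Show the step-by-step instructions (and layer sizes) baked into an image
    docker history --no-trunc registry.example.com/orders:1.2.3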

Typical build/deployment systems require "full VMs" and associated images, which start from merely big at ~32 GB and go up to a way-too-big ~127 GB as is the typical case for servers in the cloud. These are mostly blank space, and need the wiggle-room for things such as after-the-fact updates.

Docker maintains an efficient content-addressed store of image layers. A small change on top of an existing chain of image layers might need just tens of megabytes. Deploying this to the production environment similarly requires only tens of megabytes, allowing very efficient blue-green deployments rolled out automatically via DevOps. Existing deployments are maintained as-is, and there are no side effects from a rollback. Compared to this, updating any shared framework or system-wide installed component is very risky on any traditional operating system.

Traditional server management ends up growing "paperwork" and "processes" like crazy. This is required to work around the risks listed above. So for example, changes have to be submitted to a change control system, but this is not enforced in any mechanical way. The paperwork can say "X", but the physical change can be "Y". Or details may not be present. Or whatever.

Docker lets you check the "dockerfile" into Git. That's authoritative, versioned, and can be made subject to mandatory branch security policy such as forced code review. It can be set up such that no paperwork is needed. The code is the code, and if it's in the "main" branch it has been reviewed. The image it builds is the image that is in production, end of story.

DISCLAIMER: Having said all of the above, I have literally never deployed containers in production, despite working in the cloud for years. My issue is similar to yours: I'd have to convince a lot of people to change their fundamental workflow and embrace a "different way of doing things". This can be challenging, especially in a large enterprise with unmotivated staff, unions, third-party support contracts, etc...


👤 KronisLV
I actually recently wrote a piece of software that can handle millions of documents, where I packaged it up in a Docker container and ran it first in my homelab, then moved over to the cloud, after which I scaled it to multiple parallel instances, here's the relevant blog post in the series: https://blog.kronis.dev/tutorials/3-4-pidgeot-a-system-for-m...

Nowadays almost all of the software that I write is in OCI-compatible containers (typically Docker), because it lets you not care as much about what is inside of your container: if you follow the 12 Factor App principles, deploying Java, .NET, Python, Ruby, Node, etc. apps all looks the same from the outside.

Same for healthchecks, same for scaling, same for restarts, same for exposing ports, same for limiting CPU/RAM resources. You could do a lot of it with either systemd or bunches of other configuration on a typical *nix distro (hopefully automated through Ansible or something like that), but it's just more convenient, if you buy into that workflow.
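As a rough illustration of "all looks the same from outside", in Compose terms (the service and endpoint are hypothetical, and the healthcheck assumes curl exists in the image):

    services:
      api:
        image: registry.example.com/api:1.0.0   # could be Java, .NET, Node, ...
        ports:
          - "8080:8080"
        restart: unless-stopped
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
          interval: 30s
          timeout: 5s
          retries: 3
        deploy:
          resources:
            limits:
              cpus: "1.0"
              memory: 512M

The knobs are identical no matter what runtime is inside the container.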

That said, for some tech stacks Docker is hot garbage. For example, I've never worked with anything as annoying as PHP can be: line endings, file permissions, and god knows what else messing everything up, especially if you're unlucky enough to have your dev box on NTFS (e.g. Windows). I wrote a little rant about that called "Containers are broken": https://blog.kronis.dev/everything%20is%20broken/containers-...

Of course, I will still keep using containers, because that almost wholly decouples what OS my servers are running from what the applications need to run (I just need Docker or a similar runtime available for that OS) and I can have as many parallel instances of whatever I need (e.g. 5 MySQL instances, 4 PostgreSQL instances, 3 Node instances, 2 Redis instances etc.) all on the same node, each with their own configuration and limits. Both on the server and locally. The same goes for the apps that I might build and want to upgrade.

Seriously, you've no idea how liberating it is to be able to take bad or possibly insecure software and put it in its own little box where it can't mess too much up (especially relevant for something like Python, where the story around packaging software and environments can be a dumpster fire, but also even simpler cases like Node/Ruby/JVM versions and PATH variables, as well as system packages) and to do so more easily than raw cgroups or systemd slices would let me.

I'd still like to ditch Windows for *nix on my dev box as well, but sadly some great software (MobaXTerm for example) and most games still don't work. Of course, one could just dual boot and thus also decrease the risks of games and such not being trustworthy, but for my personal stuff I'm kind of lazy and the *nix distro that sits on the other disk partition gets underused somewhat.


👤 rzimmerman
Doing Docker development on Windows eats away a lot of the benefit, since you need to run Linux environments in a VM anyway. And yeah, you're going to have goofy stuff with line endings and file permissions. macOS dev is better, partly because it's Unix-like but mostly because a lot of devs love their Macs enough to make Docker for Mac good enough.

There were supposed to be a lot of benefits to using Docker:

1. Build your environment anywhere and it will work anywhere. This is mostly true. Basic isolation of dependencies is a huge headache-solver for C/C++ development, Python apps, and anything with a complicated build chain. Go is a notable exception - the Go build process really did solve this without containers.

2. Process/resource/cgroup isolation. You can run multiple containers (even untrusted ones) on the same metal and they can't interact. It's like a VM but instead of committing fixed amounts of RAM and disk space (which mostly sits idle), you let the Linux kernel handle it dynamically with limits. At least that was the dream. But without carefully managing properties/permissions this is a security nightmare. When people say "why Kubernetes" this is one real answer. Almost no one trusts container isolation for actual security in practice.

3. Layered builds. This was a killer idea - you build your container in steps. The first few change infrequently and the last one or two is your actual app. You can ship the big OS-heavy layers once and push frequent small updates (a few MB) and still keep your dev and runtime environments identical. This breaks down a bit in the real world because a lot of people don't bother to layer things well - they wind up with 100MB-1GB monsters being pushed around endlessly. But also it kind of doesn't matter - you can push 1GB monsters on your own network/VPC all you want and it's not really a big deal.

4. Declarative builds - theoretically you can recreate a bit-identical build based on just the Dockerfile. This is great for auditing third-party images and reproducibility. Theoretically. But people forget that apt-get install on Monday might be different than apt-get install on Wednesday. Debian does security updates that way and an image built 6 months ago won't match one you build today. It still works, though, so we pretend not to notice. And sharing image layers with another host is actually kind of hard. I imagine Nix solves a lot of this but I haven't had the pleasure of learning it yet.
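The usual partial mitigation is pinning, which gets you closer but is still not a guarantee. A sketch (the digest and package version are placeholders, not real values):

    # Pin the base image by digest rather than a moving tag...
    FROM debian:bookworm-slim@sha256:<digest>

    # ...and pin package versions instead of taking whatever is current today
    RUN apt-get update \
        && apt-get install -y --no-install-recommends curl=<pinned-version> \
        && rm -rf /var/lib/apt/lists/*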

So Docker has fallen short on some promises and met others. But the truth is I can't think of a better way to build some fussy C project (the first thing I look for is a Dockerfile, and often it's easier to make one than pollute my system with dependencies). I also can't think of a better way to distribute a Python app. And being able to do "docker run debian:bullseye" is pretty slick. Golang is another story - I think it's very reasonable to say "what's the point?" with a go binary.


👤 dev_0
Docker is like the Java of Infra