HACKER Q&A
📣 sdevonoes

Do you run apps bare metal?


So, I am working on a side project and the way I deploy my (golang) application is basically:

- build binary

- copy binary, config files and static assets to the production server

- do blue green deployment (with nginx) to get zero-downtime deployment

- profit

(This is automated of course! I use Ansible, and I can easily rollback if needed. I can also deploy the same app to multiple machines if needed).

On my local machine I use Docker to test the Go code, but I don't really see the benefit of deploying my Go app in a container. My colleague told me "it's easier to deploy Docker containers. You just pull the image and voila!". I don't see how my approach could be "more complicated". Also, isn't my approach better in terms of performance? If my golang app runs "bare metal" instead of via a container, then surely the performance should be better, right?


  👤 whalesalad Accepted Answer ✓
It would help to clarify some words:

"Bare metal" means that your application is running on an OS that runs directly on the raw hardware (metal), aka no virtualization. Containers are not (generally) the same as virtualization.

The counterpart to bare metal is virtualized, where the hardware your program sees is not necessarily the hardware actually present in the host machine.

A docker container could arguably be considered to run on bare metal. A container is really just isolation; the parent OS/kernel is still in command. Here is a graphic that illustrates the differences: https://www.sdxcentral.com/wp-content/uploads/2019/05/Contai...

What you are really asking is whether you need an abstraction layer or orchestration tool to manage this.

The short answer is no, you do not need it at all. If you can DIY this and are happy with it, that is sufficient. For example, a current deployment process for one of my clients (EC2 environment) involves stopping a custom systemd service, pulling the new binary/deps and then starting the systemd service. Really simple, with a brief instant of downtime, but within this environment that is not a problem.


👤 andrewl-hn
We do it right now, though we run Node instead of Go. For us it's not 'copy a binary', but 'copy and untar', but it's still very simple.

One downside that we find surprising is that way too many Ops people are more familiar and comfortable with cloud or Kubernetes environments these days. We find it hard to find local talent willing to deal with bare metal hardware.

Because of that we plan to migrate to k8s at some point later this year. I neither support nor oppose it, to be honest. Introducing an extra Docker build step is annoying, but the idea of adding a few hundred lines of YAML to get monitoring, log aggregation, tracing, etc. sounds really nice, too.


👤 SPascareli13
By "bare metal" you mean as a regular process in the OS instead of as a docker container?

Then it should really make no difference in terms of performance. A process in a docker container is also a process in the host OS, but with some network and user permission stuff on top.

When I read the title I thought you meant bare metal as with no OS layer, just a go app running in the processor, which I think would be pretty cool to see.


👤 KingOfCoders
Isn't bare metal without an OS? At least that's what I assumed when I read the question.

[Edit] Thanks for the comments, seems I'm just plain old.


👤 twic
Yes.

Our contract is that a unit of deployment is a .tar.gz file, and inside that there is an executable file bin/run which starts the app. We build apps in a few languages, but they all have a start script like that.

You can use static or dynamic linking, compiled or interpreted languages, anything you like, as long as the start script sets it up correctly. For Go apps the script can be trivial.

Tarballs must be self-contained - binaries, libraries, config files, static assets. We make an exception for interpreters, because those are huge and change slowly. Those are manually installed in /opt/something. I'd like it if we did something more disciplined and reproducible, but it's good enough for now.

We don't have many machines. Maybe 20? You can get a lot done with 20 full-size physical machines.

I don't see how something like Docker would help us here. It would just be the same, but less hackable.


👤 _wldu
Yes. Go static binaries are super useful and simple to deploy. IMO, this is one of the reasons Go has become so popular.

Docker became popular because it fixed a lot of deployment issues for dynamic scripting languages. And really, Go static binaries and/or Java jars solve that same problem. So there is no need to complicate these deployments with Docker.

I do dockerize some Go binaries to run in ECS/Fargate, but otherwise I see no real need or reason to do so.


👤 analognoise
Yes, I frequently don't use any operating system and write bare metal C code, usually for Zynq (normally RFSoC or MPSoC class) FPGAs, although I also frequently use a Microblaze soft processor with its entire memory in BRAM when doing initial system bring-up (say, for a new board).

Lots of people in here don't know what "Bare metal" means. There's no operating system - at all. Anything else is not bare metal.

To go further: sometimes (if you're bringing up a new processor design) you don't even get a C standard library - you have to write it yourself (or the processor's architecture won't even support C!). "Bare metal" means exactly that - it isn't the same as "no virtualization"... at all.


👤 codegeek
I think you have it right. Don't over engineer it especially early on. If you need to scale/automate with containers later, do it then. For now, just get that damn thing up and running.

I built a Go web app side project which runs smoothly on a VPS with nginx as reverse proxy, letsencrypt SSL and dead simple supervisor config to run. Boom. Forget containers. Forget docker.


👤 jasonpeacock
Ignoring the proper use of "bare metal", you should review the pros/cons of deploying with Docker vs SCP.

Docker is helpful when you have a lot of dependencies, or otherwise need to create a reproducible image (configuration, special directory structure, etc.), or when you want to control the process's resources (sandboxing).

Otherwise, because Go compiles into single binaries, if there are no other external dependencies/configuration to manage and you don't need process isolation/sandboxing, then Docker is just another moving part that adds complexity and could go wrong.


👤 andrewstuart2
Your performance should not noticeably differ (on linux) between docker and just running as an uncontainerized process on the host. There's almost no difference as far as the kernel is concerned; it just tags your process slightly differently (different kernel namespaces and cgroups, but these apply to every process).

Bare metal usually refers to running without hardware or software virtualization (kernels within kernels), and if that were the case (docker on linux does not virtualize) it would be meaningfully faster, but that's not the case here.

The primary benefits you get from Docker may not be immediately apparent, so go for whatever works. Eventually, you'll probably find that it's substantially easier to scale out appropriately and automatically with some sort of orchestration tech, which is nearly all built on containers these days. Don't sweat it til you need it but at the same time, it's easier to learn sooner than when you're knee deep in tech debt for a dozen services.
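(A quick way to see the "just a tagged process" point is to read the process's own cgroup file. This is a Linux-specific sketch; the fallback message for other platforms is illustrative.)

```go
package main

import (
	"fmt"
	"os"
)

// cgroupSummary returns this process's cgroup membership as reported by
// the kernel. Inside a Docker container the cgroup path differs, but it
// is the same kernel mechanism that applies to every process.
func cgroupSummary() string {
	data, err := os.ReadFile("/proc/self/cgroup")
	if err != nil {
		return "no /proc/self/cgroup (not Linux?)"
	}
	return string(data)
}

func main() {
	fmt.Println(cgroupSummary())
}
```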


👤 sys_64738
Bare metal generally means running on the host OS without any virtualization layer. Or used to mean that.

👤 gnfargbl
There's nothing really wrong with what you're doing. Apps were deployed like that for decades before docker came along.

To my mind the main benefit of containerization would be that the

> binary, config files and static assets

would be bundled together inside the container, so there is less risk of you running a half-baked version where (e.g.) the binary gets updated but the other files don't. However since you're using Ansible to deploy, that risk already feels small. You could also consider using the embed feature in go1.16 to bundle all the assets inside the binary.
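(A hedged sketch of that go1.16 embed idea. A real project would typically write `//go:embed static` for a specific assets directory; the `*` pattern here just embeds whatever files sit next to the source file so the sketch compiles anywhere.)

```go
package main

import (
	"embed"
	"fmt"
)

// In a real app you'd usually embed a specific directory, e.g.
// //go:embed static, so binary and assets travel as one artifact.
//
//go:embed *
var assets embed.FS

func main() {
	entries, err := assets.ReadDir(".")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println("embedded:", e.Name())
	}
	// Serving the embedded files over HTTP is then one line:
	//   http.Handle("/", http.FileServer(http.FS(assets)))
}
```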

Arguably there might also be some security benefit to running in a container, but I wouldn't want to try to make that argument without knowing a lot more about the specific details of your binary's behaviour.


👤 verbury
The argument for using containers is primarily about your first two steps, far less so for deployment and profit(!). Having all of your app's code and dependencies in a self-contained, er... container, one that is portable between different systems and, perhaps more importantly, different developers, is arguably the main reason for using Docker.

If for now it's just you and you're not planning on spinning up in AWS ECS, CloudRun, any flavour of Kubernetes etc. then go with what works for you. All good.


👤 its_nikita
Not sure if this is applicable to your case, but if the application is/can be used by an end user, then having it available as a Docker image can be really useful to the user, and reduces friction to try out your program.

Many times while looking through a project I find on GitHub, if they have a docker image available, I can try it out by running a single command. But if they have more than 3 installation steps, I usually pass.


👤 siscia
Just to expand.

Container technology runs your software just as if it were a standard process.

There are no performance implications in running code inside or outside the container.

Be careful, though: on macOS, Docker is implemented on top of a virtual machine. In that case there can be fairly serious performance implications, especially around IO.


👤 cridenour
You have it right. Now of course if your project expands and has a front-end build step or some other process, etc. then maybe it's helpful to add a container to not have to get that working in two environments.

But with just a Go binary, I think you're doing it right.


👤 blcknight
I don't think bare metal means what you think it means.

👤 IceWreck
For Go applications, docker is unnecessary. It's just a single binary, and systemd is enough. You don't need containers for it.

👤 0xdba
I think most containers also run "bare metal"; they're just more isolated in terms of process/memory/fs, etc.

Having worked with and without Docker for various web apps, it removes some dependency management and server setup at the cost of another layer (or two...) of complexity. It's not always worth it to go to containers.

Docker seems to make the most sense in cloud environments that scale horizontally.

What benefits would you get exactly?


👤 PaulHoule
Yes.

Try to deploy code without Docker and you have one problem.

Try to deploy code with Docker and you have two problems.


👤 raverbashing
If it works for you don't sweat it.

Docker is fine, but if you don't need it, that's fine too (and managing it yourself is usually a pain). But do make sure your systems are up to date (basically your linux distro).


👤 jeffreyrogers
Yes, I do this too (although you're probably running on a hypervisor unless it's your own hardware, so not completely bare metal). This is one of Go's biggest advantages. Deployment is way simpler since you don't have to configure a bunch of dev ops technologies.