- build binary
- copy binary, config files and static assets to the production server
- do a blue-green deployment (with nginx) to get zero downtime
- profit
(This is all automated, of course! I use Ansible, I can easily roll back, and I can deploy the same app to multiple machines if needed.)
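Roughly, the nginx side of the blue-green switch looks like this (ports and paths here are simplified for illustration, not my exact config):

    # /etc/nginx/conf.d/app.conf
    # "blue" listens on 8081, "green" on 8082; only one is referenced here at a time
    upstream app_active {
        server 127.0.0.1:8081;   # currently pointing at blue
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_active;
        }
    }

The deploy starts the new binary on the idle port, health-checks it, rewrites the upstream to point at it, and runs nginx -s reload (the reload is graceful, so in-flight requests finish on the old instance before it is stopped).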
On my local machine I use Docker to test the Go code, but I don't really see the benefit of deploying my Go app in a container. My colleague told me "it's easier to deploy Docker containers. You just pull the image and voila!". I don't see how my approach could be "more complicated". Also, isn't my approach better in terms of performance? If my Go app runs "bare metal" instead of in a container, then surely the performance should be better, right?
"Bare metal" means that your application is running on the same OS that is also running the raw hardware (metal), aka no virtualization. Containers are not (generally) the same as virtualization.
The analog to bare metal is virtualized, where the hardware your program is seeing is not necessarily the hardware that is running on the actual host machine.
A docker container could ostensibly be considered running on bare metal. A container is really just isolation but the parent OS/kernel is in command. Here is a graphic that illustrates the differences: https://www.sdxcentral.com/wp-content/uploads/2019/05/Contai...
What you are really asking is whether you need an abstraction layer or orchestration tool to manage this.
The short answer is no, you do not need it at all. If you can DIY this and are happy with it, that is sufficient. For example, a current deployment process for one of my clients (EC2 environment) involves stopping a custom systemd service, pulling the new binary/deps and then starting the systemd service. Really simple, with a small window of downtime, but within this environment that is not a problem.
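Concretely it boils down to something like this (the service name, artifact URL and paths are placeholders, not the client's actual setup):

    # sketch only - unit name, artifact URL and paths are illustrative
    sudo systemctl stop myapp.service
    curl -fsSL https://artifacts.example.com/myapp-latest -o /opt/myapp/myapp
    chmod +x /opt/myapp/myapp
    sudo systemctl start myapp.service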
One downside that we find surprising is that way too many ops people are more familiar or comfortable with cloud or Kubernetes environments these days. We find it hard to find local talent who are willing to deal with bare metal hardware.
Because of that we plan to migrate to k8s at some point later this year. I neither support nor oppose it, to be honest. Introducing an extra Docker build step is annoying, but the idea of adding a few hundred lines of YAML to get monitoring, log aggregation, tracing, etc. sounds really nice, too.
Then it should really make no difference in terms of performance. A process in a Docker container is also a process in the host OS, just with some networking and user-permission namespacing on top.
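This is easy to see for yourself on a Linux host (using nginx purely as an arbitrary example image):

    # start a container, then look for its process from the host
    docker run -d --name demo nginx
    ps -ef | grep 'nginx: master'   # the container's nginx shows up as an ordinary host process
    docker rm -f demo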
When I read the title I thought you meant bare metal as in no OS layer, just a Go app running on the processor, which I think would be pretty cool to see.
[Edit] Thanks for the comments, seems I'm just plain old.
Our contract is that a unit of deployment is a .tar.gz file, and inside that there is an executable file bin/run which starts the app. We build apps in a few languages, but they all have a start script like that.
You can use static or dynamic linking, compiled or interpreted languages, anything you like, as long as the start script sets it up correctly. For Go apps the script can be trivial.
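Something like this, say (the binary name and flag are placeholders):

    #!/bin/sh
    # bin/run - everything the app needs lives inside the unpacked tarball
    cd "$(dirname "$0")/.."
    exec ./bin/myapp -config ./config/app.conf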
Tarballs must be self-contained - binaries, libraries, config files, static assets. We make an exception for interpreters, because those are huge and change slowly. Those are manually installed in /opt/something. I'd like it if we did something more disciplined and reproducible, but it's good enough for now.
We don't have many machines. Maybe 20? You can get a lot done with 20 full-size physical machines.
I don't see how something like Docker would help us here. It would just be the same, but less hackable.
Docker became popular because it fixed a lot of deployment issues for dynamic scripting languages. And really, Go static binaries and/or Java jars solve that same problem. So there is no need to complicate these deployments with Docker.
I do dockerize some Go binaries to run in ECS/Fargate, but otherwise I see no real need or reason to do so.
Lots of people in here don't know what "Bare metal" means. There's no operating system - at all. Anything else is not bare metal.
To go further: sometimes (if you're bringing up a new processor design) you don't even get a C standard library - you have to write it yourself (or the processor's architecture won't even support C!). "Bare metal" means exactly that - it isn't the same as "no virtualization"... at all.
I built a Go web app side project which runs smoothly on a VPS with nginx as a reverse proxy, Let's Encrypt SSL, and a dead-simple supervisor config to run it. Boom. Forget containers. Forget Docker.
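The supervisor part is basically the whole deployment story (names and paths below are just illustrative):

    ; /etc/supervisor/conf.d/myapp.conf - illustrative names/paths
    [program:myapp]
    command=/home/deploy/myapp/myapp
    directory=/home/deploy/myapp
    autostart=true
    autorestart=true
    stdout_logfile=/var/log/myapp.out.log
    stderr_logfile=/var/log/myapp.err.log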
Docker is helpful when you have a lot of dependencies, or otherwise need to create a reproducible image (configuration, special directory structure, etc.), or when you want to control the process's resources (sandboxing).
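The resource-control case, for example, is just standard docker run flags (the image name here is a placeholder):

    # cap memory and CPU, and make the root filesystem read-only (all standard docker run flags)
    docker run -d --memory=256m --cpus=1.0 --read-only mycompany/myapp:latest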
Otherwise, because Go compiles to a single binary, if there are no other external dependencies or configuration to manage and you don't need process isolation/sandboxing, then Docker is just another moving part that adds complexity and could go wrong.
Bare metal usually refers to running without hardware or software virtualization (kernels within kernels). If that were what was going on, bare metal would be meaningfully faster, but Docker on Linux does not virtualize, so that's not the case here.
The primary benefits you get from Docker may not be immediately apparent, so go for whatever works. Eventually, you'll probably find that it's substantially easier to scale out appropriately and automatically with some sort of orchestration tech, which is nearly all built on containers these days. Don't sweat it until you need it, but at the same time, it's easier to learn now than when you're knee-deep in tech debt for a dozen services.
To my mind the main benefit of containerization would be that the
> binary, config files and static assets
would be bundled together inside the container, so there is less risk of you running a half-baked version where (e.g.) the binary gets updated but the other files don't. However since you're using Ansible to deploy, that risk already feels small. You could also consider using the embed feature in go1.16 to bundle all the assets inside the binary.
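Rough sketch of what that looks like (directory names here are just for illustration):

    package main

    import (
        "embed"
        "log"
        "net/http"
    )

    // bundle static assets and config into the binary at build time
    //go:embed static config
    var assets embed.FS

    func main() {
        // serve the embedded files straight out of the binary
        http.Handle("/static/", http.FileServer(http.FS(assets)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }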
Arguably there might also be some security benefit to running in a container, but I wouldn't want to try to make that argument without knowing a lot more about the specific details of your binary's behaviour.
If for now it's just you and you're not planning on spinning up in AWS ECS, CloudRun, any flavour of Kubernetes etc. then go with what works for you. All good.
Many times when looking through a project I find on GitHub, if they have a Docker image available, I can try it out by running a single command. But if they have more than 3 installation steps, I usually pass.
Container technology runs your software just as if it were a standard process.
There are no performance implications to running code inside versus outside a container.
Be careful, though: on macOS, Docker is implemented as a virtual machine. In that case there can be fairly serious performance implications, especially around I/O.
But with just a Go binary, I think you're doing it right.
Having worked with and without Docker for various web apps, it removes some dependency management and server setup at the cost of another layer (or two...) of complexity. It's not always worth it to go to containers.
Docker seems to make the most sense in cloud environments that scale horizontally.
What benefits would you get exactly?
Try to deploy code without Docker and you have one problem.
Try to deploy code with Docker and you have two problems.
Docker is fine, but if you don't need it, that's fine too (and managing it yourself is usually a pain). But do make sure your systems are kept up to date (basically, your Linux distro).