IMO, using containers on a server is a bit like using Flatpak for your desktop apps. You can have a stable, well-configured base and still run the cutting edge stuff pre-packaged in a convenient way.
But like, if you don't need it, you don't need it. It depends on what you're doing. If you have multiple runtime dependencies, it helps a lot. If your target is a single static executable (e.g. Go, or Java with an all-in-one jar) and you have no runtime dependencies beyond maybe a single SQL database connection or a single JVM, where your only hassle is documenting the correct JVM version to use, then it may not add that much value, especially early on.
Never found it too hard to add, either. A Dockerfile is a glorified shell script, so if you have a README for installing the whole mess on a normal machine, following that README again (and, as always, finding out how inaccurate it is lol) is all you really need to do.
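To make that concrete, here's a rough sketch, assuming a typical Python app whose README says "apt install a couple of libs, pip install the requirements, run the server" (package and file names are made up):

    FROM python:3.11-slim
    # the "apt install ..." step from the README
    RUN apt-get update && apt-get install -y --no-install-recommends libxml2 \
        && rm -rf /var/lib/apt/lists/*
    WORKDIR /app
    # the "pip install -r requirements.txt" step
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    # the "run the server" step
    COPY . .
    CMD ["python", "server.py"]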
For the typical enterprisey legacy project with over a dozen runtime deps and micro and macro (lol) services all over... yeah, it helps a ton in avoiding version-hell, works-on-my-machine deployment drama. It was so nice when it started coming out back in the early teens -- so much so that it was worth using for all non-prod environments even when it was still super flaky and not suitable for production yet (e.g. remember how badly it used to leak disk space, and how you had to script the cleanup yourself?)
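(The disk space part is mostly solved now; a cron entry along these lines is usually enough, though note that -a also removes unused images you might have wanted cached:)

    # nightly: remove stopped containers, unused images/networks, build cache
    0 3 * * * docker system prune -af > /var/log/docker-prune.log 2>&1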
It was hard to pick up at first, but now it's undeniably an improvement to my workflow.
If you work with other people, then yes, definitely use Docker. It adds a layer of foolproofing that's much needed in larger teams, and it's basically a lingua franca for deploying things these days.
K8s etc. - no, unless your product is really, really big _and_ complex. The rest are using it because a) there are no better alternatives or b) they're mindlessly following the hype.
I thought: why can't I just run shell scripts that pull my code from git, install dependencies from requirements.txt or package.json, and use pm2 as the process manager to reload all the apps? So much easier to deploy, since setting up pm2 and a few shell scripts with env variables for deployment takes about five minutes, right? No need to securely transfer or store these weird images! Thing is, I was ignoring the pertinent use cases...
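For reference, the whole "deployment" I had in mind was roughly this (paths and names are hypothetical):

    #!/usr/bin/env bash
    # the naive pull-install-reload deploy described above
    set -euo pipefail
    cd /srv/myapp
    git pull origin main
    npm ci                        # or: pip install -r requirements.txt
    pm2 reload ecosystem.config.js --update-env

Which works fine, right up until server nr. 2 somehow has a different node version than server nr. 1.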
You have to understand how it's typically used and which of those ways actually fit your scenario.
Use case 1. Prototype demo using funky services like file conversion - running "docker run fileconversionapi" is about as fast as grabbing an API key, with none of the usage billing worries. And it's super easy to network, locally or eventually in production, either with the host network or docker swarm. Value: self-reliance.
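Something like this, give or take (the image name and the curl endpoint are made up for illustration; the docker flags are the standard ones):

    # run the service locally: no API key, no metered billing
    docker run -d --name fileconv -p 3000:3000 example/fileconversionapi

    # hypothetical endpoint, whatever the image documents
    curl -F "file=@slides.pptx" "http://localhost:3000/convert?to=pdf"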
Use case 2. Actually self-hosting apps. Might include a long-running database container etc., using a locally mapped volume to avoid data loss. Super quick, easy defaults. Value: privacy.
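The canonical example, assuming the official postgres image (the env var and the container-side data path are the ones that image documents; the host path is whatever you pick):

    # data lives on the host volume, so the container itself stays disposable
    docker run -d --name db \
      -e POSTGRES_PASSWORD=changeme \
      -v /srv/pgdata:/var/lib/postgresql/data \
      -p 5432:5432 \
      postgres:16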
Use case 3. Live production app - building from working code
This is where people go wrong. They have a cool python server script with loads of dependencies and even some data/model files, and they want to put it in production. Rather than run "pip install -r requirements.txt" on every new server or on their first cloud server, they turn to docker.
Common mistakes: they try to put environment variables in the wrong place (hardcoded in the code itself, or baked in with ENV), or they try to put the code into the docker image together with the requirements/keys, e.g. via a git clone inside the Dockerfile. See the sketch below for the saner split.
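A rough sketch (image name, port, and env-file path are hypothetical): the image gets the code and its dependencies, and secrets only arrive when the container starts.

    # build from the working tree you already have: no git clone in the
    # Dockerfile, no keys or .env files baked into a layer
    docker build -t myapp:1.0 .

    # config and secrets are injected at run time, not build time
    docker run -d --name myapp --env-file /srv/myapp/prod.env -p 8000:8000 myapp:1.0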
Use case 4. Live production app - rolling updates / regular code / sharing the image so developers nr. 2 and 3 can onboard instantly.

You can easily switch to AMIs whose startup commands just exec the docker containers. You can stick with older AMIs or move up to kubernetes. Networking is generally simple: a docker container doesn't know about the VPC, so it can't bake in expectations that cause network hell. Value: high-level control over the complexity, across a range of deployment options from a single instance to many. Massive value: you can move docker images between cloud providers as .tar files and instantly set up your infrastructure, with valid letsencrypt SSL certs and everything from the last host (see the commands at the end of this comment).

Use case 5. You want to clearly establish a separate microservice, maybe because your new code requires python 3.10 rather than 3.7 or something. You're not even sure you'll deploy it, you're just testing, but if it works you need it in production tomorrow. Anybody reading the python:3.10 base image in a short Dockerfile can tell why you did that far more easily than from a random VM once named "python 3.10 code" that now does random machine learning things. Value: throwawayability.
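On that .tar point: moving an image between hosts or providers is genuinely just a save, a copy, and a load (names and paths here are placeholders):

    # on the old host
    docker save -o myapp.tar myapp:1.0
    scp myapp.tar newhost:/tmp/

    # on the new host
    docker load -i /tmp/myapp.tar
    docker run -d --name myapp -p 8000:8000 myapp:1.0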