HACKER Q&A
📣 koliber

Is anyone not using containers in production nowadays?


Docker and Kubernetes are everywhere.

Not every project needs hot-scalable infrastructure.

Is anyone still using something like Ansible to push changes to VPSes? Are there any updated best practices out there for managing non-container-based deployments? What would a modern non-container deployment of nginx+DB+YOUR_FAVORITE_FRAMEWORK look like?


  👤 logicalmonster Accepted Answer ✓
OP, is your only experience working with technology companies and trendy startups trying to hit some mega-million valuation?

The real world is a lot messier than the best practices that tech companies come up with. Go work in some local warehouse's or AC repair company's IT department. They might have some technology for managing inventory, scheduling appointments, billing, or any one of a hundred tasks that isn't done in the most modern way.

There are many industries where technology isn't valued or emphasized and that are still running code and practices from 10-15+ years ago due to financial constraints. And that's fine, as long as it works and is secure. Many times it isn't, but it can work just fine even if it doesn't look modern.

I'd bet the vast majority of websites out there are not using containers today.

I'd bet there are some pretty big and profitable websites out there that deploy stuff by uploading a bunch of files via SFTP (or even FTP) and perhaps running some deployment script that was hacked together a decade ago.

Hell, we talk a lot about the market being consumed by React and languages like Python. But jQuery is still ridiculously widely used. PHP is basically persona non grata on HN, but its market share among websites overall is still pretty high.

I'm kind of mad that the community here killed codevark's answer, because that's an example of the messy world we live in and it gives a partial answer to OP's question.


👤 AlexITC
Yes, I work with many small businesses where K8s/Containers are overkill.

I have also helped remove K8s from companies that were experiencing pain points with it; these companies are now very happy with plain old VMs where their apps just work with minimal maintenance.

For static webapps I personally go with simple deployment scripts; this example works like a charm and runs in under 10 seconds:

- `npm run build && scp page.zip webserver:~/ && ssh webserver "unzip -o page.zip -d /var/www/html/"`

For other, more complex projects I tend to go with Ansible; I have 5-year-old scripts that still work, with just a few tweaks per year.
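
A sketch of what that looks like day to day (inventory and playbook names are hypothetical):

```bash
# run the full deploy playbook against the production hosts
ansible-playbook -i inventory/production.ini deploy.yml --limit webservers

# ad-hoc one-off: push a single config file and reload nginx
ansible webservers -i inventory/production.ini --become \
  -m copy -a "src=conf/app.conf dest=/etc/nginx/conf.d/app.conf"
ansible webservers -i inventory/production.ini --become \
  -m service -a "name=nginx state=reloaded"
```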

A customer uses a similar approach, with shell scripts executed as CI/CD steps by AWS CodePipeline.

As for the tech stack: nginx is my favorite reverse proxy. I know everyone talks about Caddy, but having used nginx for years I rarely hit anything I don't know how to handle, and the community has already posted answers for most common nginx use cases.
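
To give a flavor, a whole reverse-proxy vhost is a handful of lines. A minimal sketch, assuming a Debian-style layout and an app listening on port 3000 (all names hypothetical):

```bash
# write a minimal reverse-proxy vhost, enable it, validate, reload
sudo tee /etc/nginx/sites-available/myapp.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo ln -sf /etc/nginx/sites-available/myapp.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```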


👤 Havoc
It's not just scalability. Things like Docker and k8s bring a lot of standardization of workflows to the table too, abstracting away some of the differences in the underlying hardware, etc.

But ansible is cool too.


👤 scrubs
Finance is my domain ... worked/working for large established companies ... no containers or k8s. Yes, there's some presence in R&D, or maybe external web or external search, but absolutely nothing in front, middle, or back office, or in connections to brokers and exchanges, i.e. where it's serious.

Now, having written that, the CD part does come with eye-rolling ... I think of my last employer, which seemed to go out of its way to keep prod changes as small as possible, unorchestrated, paternalistic. Dev installs looked rather different from beta or prod, so there was no incentive to improve. And that was fine with management.

Ultimately I think management didn't trust devs. So rather than process improvement, it was better to go slow and bureaucratic and make devs get approvals as an implicit disincentive to change. In their mind devs were QINO (quality in name only). TLs and middle managers did QA by approving or rejecting tickets.

Good companies usually treat quality as everybody's problem and empower and train accordingly. And nobody serious about quality thinks QAing tickets produces quality; it only burns resources. If there's one thing 100% of people can agree on, it's that you can't inspect quality into CD.

I'm fine with prod approvals, but not for blow-by-blow changes.

My current employer has far, far more stringent rollout requirements, so elastic scaling et al. isn't in scope. It's very planned and deterministic.


👤 cookiengineer
A lot of the low-budget hosting industry focuses on easy hosting.

SMEs in particular almost never use Docker, LXC, or any other container tech, because it's usually an over-engineered approach to a simple PHP website.

However, if you want to develop for this kind of environment, I've found Go to be a trustworthy companion, because bundling all assets with go:embed is easy as pie.

I usually just deploy a self-contained binary to those systems, set the +net capability on the binary, and it works.
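
A sketch of the whole pipeline, assuming an ssh-reachable host and a systemd unit named app (all names are hypothetical):

```bash
# build a fully static Linux binary; assets are compiled in via go:embed
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app ./cmd/server

# upload next to the live binary, grant the "+net" capability so the
# unprivileged process can bind :80/:443, then swap it in and restart
scp app deploy@host:/opt/app/app.new
ssh deploy@host 'sudo setcap cap_net_bind_service=+ep /opt/app/app.new &&
  sudo mv /opt/app/app.new /opt/app/app &&
  sudo systemctl restart app.service'
```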

There are a bunch of hosting providers that try to fix this, though, focusing on "quick respawns" of containers for PHP web apps. They are rare, though. Usually they run a two-cloud strategy, with a backup cluster that can be set up quickly when a DDoS flood hits either one.

What I also wanted to leave here is this: every nginx, every IP translation table, every load balancer, every VM will cost you a lot of throughput.

Usually a load balancer is the lazy coder's alternative to a cache, instead of just making the assets static in the first place. It starts with the webpack bundles and ends somewhere around HTTP ETags and Pragma headers.
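
An easy way to see where a site stands on this (URL is hypothetical):

```bash
# inspect the caching headers a server actually sends for a bundle
curl -sI https://example.com/static/bundle.js \
  | grep -iE 'etag|cache-control|expires|pragma'
```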

Also, in regard to efficiency: check out eBPF; it's definitely worth it. The sheer scale of what eBPF and XDP can do in terms of network requests is absurd, to say the least.


👤 dineshkumar_cs
If you have a small team, it doesn't make sense to run Docker and the whole established stack, and managing k8s takes experience and is still a pain.

You could get away with copying artifacts over ssh for a while.

Deployments could still be via GitHub Actions or any other CI/CD tool, or even plain scp.
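
e.g. a minimal sketch (paths and service name are hypothetical):

```bash
# sync the CI build output to the server, then restart the service
rsync -az --delete build/ deploy@host:/srv/myapp/
ssh deploy@host 'sudo systemctl restart myapp'
```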


👤 skimdesk
There's Dokku [0], which supports buildpacks (and containers). Would love to hear more from anyone running a business on it.
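
For anyone who hasn't tried it, the basic flow is git-push-to-deploy; roughly this (host and app name are hypothetical):

```bash
# on the server, once
dokku apps:create myapp

# on your machine: add the Dokku remote, then every push deploys
git remote add dokku dokku@yourserver:myapp
git push dokku main
```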

[0] https://dokku.com/


👤 DamonHD
1) I don't use containers.

2) I am using Ansible to push out changes to the couple of RPis that now cover ~90% of my Internet-visible services (HTTP, DNS, SMTP, ...).


👤 sneed_chucker
I can tell you firsthand that lots of stuff in FAANG data centers runs non-containerized, on bare metal or on VMs.

👤 jiripospisil
> Docker and kubernetes are everywhere.

I'm willing to bet a dollar not even 1% of deployments use these. The world is a pretty big place.


👤 thexa4
We standardized on Debian packages for configuration management and code deploys. The packages are built during CI and then either pushed directly to the target machine over ssh or published to our own Debian package repository.
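
A minimal sketch of the direct-push variant, assuming a prepared package tree (names are hypothetical):

```bash
# build the .deb from a tree like myapp_1.2.3/DEBIAN/control + payload files
dpkg-deb --build myapp_1.2.3          # produces myapp_1.2.3.deb

# push straight to the target machine over ssh and install it
scp myapp_1.2.3.deb deploy@host:/tmp/
ssh deploy@host 'sudo dpkg -i /tmp/myapp_1.2.3.deb'
```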

👤 codegeek
Funny you say that, because we don't use containers yet and definitely have no reason to use Kubernetes. However, we are looking at switching to AWS ECS with Fargate for our SaaS. If anyone wants to consult, hit me up.

👤 SkyPuncher
We use containers simply because of how easy and repeatable the builds are.

The basic usage is pretty simple and straightforward.
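
The whole build interface collapses to one repeatable command (image name is hypothetical):

```bash
# same input, same image, on any machine with Docker installed
docker build -t myapp:$(git rev-parse --short HEAD) .
```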


👤 felcro
I use Python/bash, scp/ssh, and screen.
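
Roughly this, in case anyone wonders how far that gets you (host, paths, and session name are hypothetical):

```bash
# ship the app and (re)start it in a detached screen session
scp -r app/ deploy@host:~/
ssh deploy@host 'screen -S app -X quit; cd ~/app && screen -dmS app ./run.sh'

# reattach later to check on it
ssh -t deploy@host 'screen -r app'
```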