Not every project needs hot-scalable infrastructure.
Is anyone still using something like Ansible to push changes to VPCs? Are there any updated best practices out there for managing non-container-based deployments? What would a modern non-container deployment of nginx+DB+YOUR_FAVORITE_FRAMEWORK look like?
The real world is a lot messier than the best practices that tech companies come up with. Go work for some local warehouse or AC repair company's IT department. They might have some technology for managing inventory or scheduling appointments or billing or any one of a hundred tasks that isn't done in the most modern way.
There are many industries where technology isn't valued or emphasized, and that are still using code and practices from 10-15+ years ago due to financial constraints. And that's fine, as long as it works and is secure. Many times it isn't, but it can work just fine even if it doesn't look modern.
I'd bet the vast majority of websites out there are not using containers today.
I'd bet there are some pretty big and profitable websites out there that probably deploy stuff by uploading a bunch of files via SFTP (or even FTP) and perhaps running some deployment script that was hacked together a decade ago.
Hell, we talk a lot about the market being consumed by React and languages like Python. But jQuery is still ridiculously widely used. PHP is basically persona non grata on HN, but its market share among overall websites is still pretty high.
I'm kind of mad that the community here killed codevark's answer because that's an example of the messy world we live in that gives a partial answer to OP's question.
I have also helped to remove K8s from companies that were experiencing pain points with it; these companies are now very happy with plain old VMs where their apps just work with minimal maintenance.
I personally choose simple deployment scripts for static webapps, this example works like a charm and runs in under 10 seconds:
- `npm run build && scp page.zip webserver:~/ && ssh webserver "unzip -o page.zip -d /var/www/html/"`
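The same idea can be wrapped in a slightly more defensive script. A sketch, assuming `webserver` is an alias in `~/.ssh/config` and the zip/destination paths are the ones from the one-liner:

```shell
#!/usr/bin/env bash
# Sketch of the one-liner above as a reusable function; "webserver" is
# assumed to be a host alias in ~/.ssh/config, paths are placeholders.
set -euo pipefail

deploy() {
  local zip="$1" host="$2" dest="$3"
  scp "$zip" "$host:~/"
  # -o overwrites existing files, so repeated deploys are idempotent
  ssh "$host" "unzip -o $(basename "$zip") -d $dest"
}

# typical usage (commented out so the file can be sourced safely):
# npm run build
# deploy page.zip webserver /var/www/html
```

`set -euo pipefail` makes the script stop at the first failed step instead of half-deploying.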
In other, more complex projects I tend to go with Ansible. I have 5-year-old scripts that still work; I do a few tweaks per year.
A customer uses a similar approach with shell scripts executed as CI/CD by AWS CodePipeline.
Related to the tech stack: nginx is my favorite reverse proxy. I know, everyone talks about Caddy, but having used nginx for years I rarely hit anything I don't know how to handle, and the community has already posted answers for most common nginx use cases.
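For reference, the kind of minimal reverse-proxy server block this setup usually comes down to; the domain, upstream port, and asset path are placeholders:

```nginx
# /etc/nginx/conf.d/app.conf -- minimal reverse proxy in front of an app
# listening on 127.0.0.1:3000; example.com is a placeholder domain.
server {
    listen 80;
    server_name example.com;

    # serve static assets directly, bypassing the app
    location /assets/ {
        root /var/www/html;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```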
But ansible is cool too.
Now, having written that, the CD part does come with eye rolling... I think of my last employer, which seemed to go out of its way to make prod changes as small as possible, unorchestrated, paternalistic. Dev installs looked rather different than beta or prod, so there was no incentive to improve. And that was fine with management.
Ultimately I think management didn't trust devs. So rather than process improvement, it was better to go slow and bureaucratic and make devs get approvals as an implicit disincentive to change. In their mind devs were QINO (quality in name only). TLs and middle managers did QA by approving or rejecting tickets.
Usually good companies think quality is everybody's problem and empower + train accordingly. And nobody serious about quality thinks QAing tickets is quality; it only spends resources. That's the only thing 100.0% of people can agree on when it comes to inspecting quality into CD.
I'm fine with prod approvals, but not for blow-by-blow changes.
My current employer has far, far more stringent rollout requirements, so elastic scaling et al. isn't in scope. It's very planned and deterministic.
SMEs in particular rarely use Docker or LXC or any other container technology, because it's usually an overengineered approach to a simple PHP website.
However, if you want to develop for these kinds of environments, I found Go to be a trustworthy companion, because bundling all assets with go:embed is easy as pie.
I usually just deploy a self-contained binary to those systems, grant it the net-bind capability via setcap, and it works.
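A sketch of that workflow; the binary name "myapp" and the "webserver" host alias are placeholders:

```shell
#!/usr/bin/env bash
# Build-and-push for the self-contained-binary approach described above.
# "myapp" and "webserver" are placeholder names.
set -euo pipefail

release() {
  local host="$1" bin="$2"
  # CGO_ENABLED=0 gives a static binary; assets ride along via go:embed
  CGO_ENABLED=0 go build -o "$bin" .
  scp "$bin" "$host:/usr/local/bin/$bin"
  # let the binary bind ports <1024 without running as root
  ssh "$host" "sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/$bin"
}

# release webserver myapp
```

The setcap step is what replaces running the process as root (or behind a proxy) just to listen on port 80/443.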
There are a bunch of hosting providers that try to fix this, though, and focus on "quick respawns" of containers for PHP web apps. They are rare, though. Usually they run a two-cloud strategy, where a backup cluster can be set up quickly when either one is hit by a DDoS flood.
What I also wanted to leave here is this: every nginx instance, every IP translation table, every load balancer, every VM will cost you a lot of throughput.
Usually a load balancer is the lazy coder's alternative to caching, instead of just making the assets static in the first place. It starts with the webpack bundles and ends somewhere down at the HTTP ETag and Pragma headers.
Also, regarding efficiency: check out eBPF; it's definitely worth it. The sheer scale of network requests that eBPF and XDP can handle is absurd, to say the least.
You could get away with copying artifacts over ssh for a while.
Deployments could still be via GitHub Actions or any other CI/CD tool, or even scp.
2) I am using Ansible to push out changes to the couple of RPis that now cover ~90% of my Internet-visible services (HTTP, DNS, SMTP, ...).
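For a fleet that small, even ad-hoc Ansible commands go a long way. A sketch, assuming a hypothetical inventory group named "rpis" and unbound as the DNS daemon (both placeholders):

```shell
#!/usr/bin/env bash
# Push a config change to every Pi in the "rpis" inventory group and
# restart the affected service; group and service names are placeholders.
set -euo pipefail

push_dns_config() {
  # -b escalates to root on the target; copy and service are built-in modules
  ansible rpis -b -m ansible.builtin.copy \
    -a "src=./unbound.conf dest=/etc/unbound/unbound.conf"
  ansible rpis -b -m ansible.builtin.service \
    -a "name=unbound state=restarted"
}

# push_dns_config
```

Once this grows past a handful of commands, the same modules move verbatim into a playbook.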
I'm willing to bet a dollar not even 1% of deployments use these. The world is a pretty big place.
The basic usage is pretty simple and straightforward.