However, hosted K8s options have improved significantly in recent years (every major cloud provider offers a Kubernetes service that is largely managed for you), and with LLMs it has become much easier to read and write deployment configs.
What are your thoughts on adopting K8s as infrastructure early on (say, when you have initial customer fit and a team of 5+ engineers) and standardizing around it? How early is too early? What pitfalls do you think still exist today?
For my own projects I use a managed Northflank cluster on my own AWS account and likewise... just a fantastic experience. Everything that Heroku could and should have been. Yes the cluster is a bit pricey to stand up both in terms of EC2 compute and management layer costs, but once it's there, it's there. And the costs scale much more nicely than shoving side projects onto Heroku.
At this stage I consider managed k8s my default go-to unless it's something so lightweight I just want to push it to Vercel and forget about it.
Even in some very non-startup enterprises, Cloud Foundry and OpenShift get adopted for a reason: some teams don’t need the overhead.
For startups there’s fly.io, render.com, and of course Heroku, but really — you can get from MVP to pretty decent scale on AWS or GCP with some scripts or Ansible.
Use k8s if you need it. It’s pretty well-proven. But it’s not something you need to have FOMO about.
For hobby projects that I don't really plan to scale, I've recently gotten back into non-containerized workloads running as systemd services in an Ubuntu VM. It feels pretty freeing not to worry about all the cruft, but that will bite me if something ever does need to live on multiple servers.
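For anyone who hasn't gone this route: the whole setup is one unit file. A minimal sketch (the `myapp` name, user, and paths are all hypothetical):

```ini
# /etc/systemd/system/myapp.service -- hypothetical name and paths
[Unit]
Description=My app
After=network-online.target
Wants=network-online.target

[Service]
User=myapp
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/bin/server
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now myapp` starts it on boot, and `Restart=on-failure` covers a surprising amount of what people reach for containers to get.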
Maybe you need a cluster per client and k8s is the only option.
Maybe you literally only need a few docker services and swarm/ecs/etc are fine forever.
What is the problem that K8s solves for you?
* Kubernetes is great for a lot of things, and I think there are many use cases where it's the best option, bar none
* Particularly once you start piling on requirements - we need logging, we need metrics, we need rolling redeployments, we need HTTPS, we need a reverse proxy, we need a load balancer, we need healthchecks. Many (not all!) of these things are what mature services want, and k8s provides a standardized way to handle them.
* K8s IS complex. I won't lie. You need someone who understands it. But I do enjoy it, and I think others do too.
* The next best alternative in my opinion (if you don't want vendor lock in) is docker-compose. It's easy to deploy locally or on a server
* If you use docker-compose, but you find yourself wanting more, migrating to k8s should be straightforward
So to answer your questions, I think you can adopt k8s whenever you feel like it, assuming you have the expertise and are willing to dedicate time to maintaining it. I use it in my home network with a 1-node "cluster". The biggest pitfalls are all related to vendor lock-in: managed Redis, Azure Key Vault, and hyper-specific config tied to your managed k8s provider that can be tough to untangle. At the same time, you can just as easily start small with docker-compose and scale up later as needed.
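To make the docker-compose starting point concrete, a minimal sketch (service names, images, and ports are placeholders):

```yaml
# docker-compose.yml -- hypothetical app + database; images are placeholders
services:
  web:
    image: registry.example.com/web:1.0
    ports:
      - "8080:8080"
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Each entry under `services:` maps fairly directly onto a k8s Deployment plus Service, which is a big part of why the later migration tends to be straightforward.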
On the other hand, if you are thinking of using bare VMs, you're better off going with managed K8s. I think in 2025 it's a draw in terms of initial setup complexity, but managed K8s doesn't require constant babysitting in my experience, unlike VMs, and you are not sinking hours into a bespoke throwaway setup.
But don’t discount bare-metal first! I see a lot of K8s or other cluster managers being used to manage underpowered cloud VMs, and while I understand the need for an orchestrator if you’re managing dozens of VMs, I wonder - why do you need multiple VMs in the first place if their total performance can be achieved by a handful of bare-metal machines?
Disclosure: I’m the author of skate
* how much downtime can be tolerated during a deploy or outage? load balancing and multi-region are more $$$.
* if you have a bunch of linux nerds and an efficient app -- an nginx webserver + your app + Postgres DB and Ansible to manage a single VM with Cloudflare in front of it might be a good option. Portainer in the VM is nice if you want to go with containers.
* if you have a bunch of desktop devs, containers and build pipelines with PaaS are a good option. many are resilient and have HTTPS built in.
* the smaller your infra/devops team, the more i would leverage team knowhow and encourage PaaS offerings.
* the smaller your budget, the more creative you need to be (ec2/storage accounts as part of hosting, singular monolithic VM has relatively flat costs, what free stuff do i have on my cloud provider, etc)
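For the single-VM option above, the Ansible side can stay very small. A sketch, assuming a hypothetical `myapp` systemd unit and an inventory group called `web`:

```yaml
# site.yml -- hypothetical single-VM playbook: nginx in front of a systemd-managed app
- hosts: web
  become: true
  tasks:
    - name: Install nginx and postgres
      ansible.builtin.apt:
        name: [nginx, postgresql]
        state: present
        update_cache: true
    - name: Install app unit file
      ansible.builtin.copy:
        src: myapp.service
        dest: /etc/systemd/system/myapp.service
      notify: Restart myapp
  handlers:
    - name: Restart myapp
      ansible.builtin.systemd:
        name: myapp
        state: restarted
        daemon_reload: true
```

One playbook, one VM, and re-running it is idempotent; that's the whole deploy pipeline until you outgrow it.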
I think using Kubernetes effectively in 2025 is more about what you _don't_ use than what you _do_ use. As an early stage startup you can get a long way with no RBAC, no network policies, no auto-scalers, and even no stateful workloads. You can use in-cluster metrics and logging before you need to turn to Prometheus, Loki, etc. Use something managed like AWS EKS.
Try to solve your problems first by taking away, and only if that isn't feasible then start adding. Plain old Deployments will get you a long way.
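A "plain old Deployment" really can be this small. A sketch with rolling updates and health checks (the name, image, and `/healthz` endpoint are placeholders):

```yaml
# deployment.yaml -- minimal sketch; image and health endpoint are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
```

`kubectl apply -f deployment.yaml` gets you rolling redeployments and healthchecks out of the box; a Service and an Ingress on top cover load balancing and HTTPS when you need them.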
Now this next bit is going to sound like a pitch, and that's because it is: when those free credits start running out, your bill starts reaching mid-four-figures, and you start thinking about your first DevOps hire, _call us_. Just for 30 minutes. We can migrate you out of your cloud infra and onto a nice spacious bare metal k8s cluster, and we'll become your 24/7 on-call DevOps team. We'll get woken up in the night when things break, not you. And core-for-core it will cost a lot less than AWS.
The fact that we can do all that is a testament to how expensive AWS really is. K8s is a good choice: kept simple, it positions you well for future growth, and for a cluster under a couple of hundred cores it is going to be pretty economical to run even in the public cloud.
PS. Link in bio
The real challenge isn’t setting up the EKS cluster, but configuring everything around it: RBAC, secrets management, infrastructure-as-code, and so on. That part takes experience. If you haven’t done it before, you're likely to make decisions that will eventually come back to haunt you — not in a catastrophic way, but enough to require a painful redesign later.
P.S. If your needs are simple, consider starting with Docker Swarm. It's surprisingly low-maintenance compared to Kubernetes, which has many moving parts and frequent deprecations from cloud providers. Feel free to drop me an email; I can share a custom Python tool I wrote a long time ago to automate the initial setup via the AWS API.
Same goes for all tech choices. If you already know it, you understand its pros and cons and it still seems like the simplest best option for the concrete thing you need to build right now, use it.
Otherwise, use whatever alternative tech fits that description instead!
The time sink required for the care and feeding just isn't worth it. I pretty much have to dedicate one engineer about 50% of the year to keeping the dang thing updated.
The folks who set it all up did a poor job, and it has been a mess to clean up. Not for lack of trying, but because those same people never got to refine their work: they got pulled into the new hotness and let the clusters rot.
Idk your workload, but mine is not even suited for K8s... The app doesn't like to scale. And if the leader node gets terminated during a scale-down, or an EC2 instance fails, processing stops while a new leader is elected. Hopefully not onto another node that is itself going down in a few seconds... Most of the app teams stopped trying to scale their apps up and down because of this.
I would run on ECS if AWS was my cloud at a start up. Then if scaling was getting too crazy, move to EKS.
But for the love of God ... Keep your monitoring and logging separated from your apps. Give it its own ECS cluster, or buy a fully managed solution. It is hard to record downtime if your monitoring goes down during your K8s upgrade.
At early stage the product should usually be a monolith and there are a LOT of simple ways to deploy & manage 1 thing.
It’s simple, with no “cloud lock-in” (which is really exaggerated anyway). The only reason to use K8s is resume-driven development. Which honestly is not a bad idea in and of itself, because your startup is statistically going to fail and you might as well use the experience to get another job.