HACKER Q&A
📣 herval

Is Kubernetes still a big no-no for early stages in 2025?


It's a commonly repeated piece of advice that early-stage startups should avoid K8s at all costs. As someone who had to manage it on bare-metal infrastructure in the past, I get where that comes from - Kubernetes has historically been hard to set up, and you'd need to spend a lot of time learning the concepts, how to write the YAML configs, etc.

However, hosted K8s options have improved significantly in recent years (all the major cloud providers offer Kubernetes services where the control plane is largely managed for you), and I feel like with LLMs it's become extremely easy to read & write deployment configs.
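For context, the "deployment configs" in question can be quite small these days. A minimal Deployment manifest looks something like this (the app name, image, and port are illustrative, not from any real project):

```yaml
# Minimal illustrative Kubernetes Deployment: two replicas of a hypothetical web app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
```

On a managed cluster, applying this with `kubectl apply -f deployment.yaml` is most of what a simple stateless service needs.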

What are your thoughts on adopting K8s as infrastructure early on (say, when you have initial customer fit and a team of 5+ engineers) and standardizing around it? How early is too early? What pitfalls do you think still exist today?


  👤 delichon Accepted Answer ✓
Same question. I'm a one man band who wants to be scalable, but doesn't want to get married to a particular cloud. So Kubernetes appears to be the default recommendation. Are there better alternatives?

👤 tnjm
While I wouldn't dream of standing up k8s on a bare metal cluster without a devops team, I set up managed k8s using EKS several years ago for a client and... it just chugs along, self-healing, with essentially zero maintenance.

For my own projects I use a managed Northflank cluster on my own AWS account and likewise... just a fantastic experience. Everything that Heroku could and should have been. Yes the cluster is a bit pricey to stand up both in terms of EC2 compute and management layer costs, but once it's there, it's there. And the costs scale much more nicely than shoving side projects onto Heroku.

At this stage I consider managed k8s my default go-to unless it's something so lightweight I just want to push it to Vercel and forget about it.


👤 signal11
k8s isn’t worth the time and money for many small teams, until they cross a complexity bar.

Even in some very non-startup enterprises, Cloud Foundry and OpenShift get adopted for a reason: some teams don't need the overhead.

For startups there’s fly.io, render.com, and of course Heroku, but really — you can get from MVP to pretty decent scale on AWS or GCP with some scripts or Ansible.

Use k8s if you need it. It’s pretty well-proven. But it’s not something you need to have FOMO about.


👤 ruuda
It depends of course, but Kubernetes is probably a solution to problems you don't have, while creating new problems you didn't have before.

👤 zekrioca
It seems to me the issue is not really setting up and building around K8s as an infrastructure orchestrator - after all, k8s sells itself as a cluster API, which is the de facto standard nowadays. The issue starts when you need to handle very specific use cases, e.g. security. That requires very low-level experience not only with K8s but with the whole stack (including OS + HW), plus knowledge of safe resource and application scheduling - which is hard to find talent for.

PS. Edit for clarity.


👤 languagehacker
If you've already done the work of figuring out how to knock out a basic web app deployment on Kubernetes for a project that you think will grow, then I say go for it. The cost isn't much different from buying a reasonable minimum amount of compute from a company like DigitalOcean.

For hobby projects that I don't really plan to scale, I've recently gotten back into non-containerized workloads managed via systemctl on an Ubuntu VM. It feels pretty freeing not to worry about all the cruft, but it will bite me if something ever does need to live on multiple servers.
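For the curious, the systemd setup described above can be as small as a single unit file - something like this sketch, where the service name, paths, and user are all hypothetical:

```ini
# /etc/systemd/system/myapp.service -- illustrative unit for a non-containerized app
[Unit]
Description=My hobby app
After=network.target

[Service]
ExecStart=/opt/myapp/bin/server --port 8080
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` starts it and keeps it running across reboots, with restarts on crashes handled by `Restart=on-failure`.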


👤 Ethee
The answer, as with everything, is going to be that it depends on your situation and use case. If you have a bunch of engineers who are already familiar with K8s, then using a different implementation just because others told you to doesn't make much sense. But if you're choosing K8s because you want a good future foundation, in dreams of 'scale', then you should stop and really consider what it is that you need from K8s. Most people I've seen whose infrastructure succeeds with K8s only moved to it out of necessity, usually away from some monolithic structure - they didn't start there. Build what you need, and only that much, not for some future need that might never come.

👤 ebiester
The biggest reason Kubernetes should be a big no-no is because you should have a much simpler architecture (monolith) that doesn't need K8s.

👤 vorpalhex
What does it provide you?

Maybe you need a cluster per client and k8s is the only option.

Maybe you literally only need a few docker services and swarm/ecs/etc are fine forever.

What is the problem that K8s solves for you?


👤 richwater
Those same cloud providers usually have simplified container deployment mechanisms as well. You don't need Kubernetes to deploy containers.

👤 Dedime
I'll add my opinion as a DevOps engineer, not a startup, so take it with a grain of salt.

* Kubernetes is great for a lot of things, and I think there's many use cases for it where it's the best option bar none

* Particularly once you start piling on requirements - we need logging, we need metrics, we need rolling redeployments, we need HTTPS, we need a reverse proxy, we need a load balancer, we need healthchecks. Many (not all!) of these things are what mature services want, and k8s provides a standardized way to handle them.

* K8s IS complex. I won't lie. You need someone who understands it. But I do enjoy it, and I think others do too.

* The next best alternative in my opinion (if you don't want vendor lock in) is docker-compose. It's easy to deploy locally or on a server

* If you use docker-compose, but you find yourself wanting more, migrating to k8s should be straightforward

So to answer your questions, I think you can adopt k8s whenever you feel like it, assuming you have the expertise and are willing to dedicate time to maintaining it. I use it in my home network with a 1-node "cluster". The biggest pitfalls are all related to vendor lock-in: managed Redis, Azure Key Vault, hyper-specific config tied to your managed k8s provider that might be tough to untangle. At the same time, you can just as easily start small with docker-compose and scale up later as needed.
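As a sketch of that docker-compose starting point - service names, images, and credentials here are illustrative placeholders - a small stack that later maps fairly cleanly onto k8s Deployments and Services might look like:

```yaml
# docker-compose.yml -- illustrative two-service stack (web app + Postgres)
services:
  web:
    image: registry.example.com/web:1.0.0   # hypothetical app image
    ports:
      - "8080:8080"
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # use a secrets mechanism in anything real
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Each compose service roughly becomes a Deployment plus a Service, and the named volume becomes a PersistentVolumeClaim, which is why the migration tends to be straightforward.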


👤 time4tea
You can run Docker Swarm easy peasy. It's not that trendy, but anyone can manage it, and you can migrate to k8s later if you need to. Of course it doesn't do some of the things that k8s does, and that's why it's less complicated...

👤 strls
If PaaS or some "run container as a service" setup can work for your use case, I'd probably go with that. It takes care of many things K8s does without all the baggage. Also, you're not investing in anything that won't port easily to K8s in the future.

On the other hand, if you are thinking of using bare VMs, then better go with managed K8s. I think in 2025 it's a draw in terms of initial setup complexity, but managed K8s doesn't require constant babysitting in my experience, unlike VMs, and you are not sinking hours into a bespoke throwaway setup.


👤 Nextgrid
If you need to go beyond what a single bare-metal server can offer, then consider it.

But don’t discount bare-metal first! I see a lot of K8s or other cluster managers being used to manage underpowered cloud VMs, and while I understand the need for an orchestrator if you’re managing dozens of VMs, I wonder - why do you need multiple VMs in the first place if their total performance can be achieved by a handful of bare-metal machines?


👤 byrnedo
You could use https://github.com/skateco/skate and graduate to k8s later.

Disclosure: I’m the author of skate


👤 lucideng
imo, really depends on what you're doing, what your team's skills are, growth trajectory, money, etc. if you need to scale a ton of compute up and down, k8s might be a good fit, but for most startups it's like using a sledgehammer to drive a finishing nail.

* how much downtime can be tolerated during a deploy or outage? load balancing and multi-region is more $$$.

* if you have a bunch of linux nerds and an efficient app -- a nginx webserver + your app + Postgres DB and Ansible to manage a single VM with Cloudflare in front of it might be a good option. Portainer in the VM is nice if you want to go with containers.

* if you have a bunch of desktop devs, containers and build pipelines with PaaS are a good option. many are resilient and have HTTPS built in.

* the smaller your infra/devops team, the more i would leverage team knowhow and encourage PaaS offerings.

* the smaller your budget, the more creative you need to be (ec2/storage accounts as part of hosting, singular monolithic VM has relatively flat costs, what free stuff do i have on my cloud provider, etc)
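The "linux nerds" option in the second bullet can be sketched as a short Ansible play - the host group, package list, and template path below are assumptions for illustration, not a complete production setup:

```yaml
# playbook.yml -- illustrative single-VM setup: nginx in front of an app, Postgres behind it
- hosts: web
  become: true
  tasks:
    - name: Install nginx and Postgres
      ansible.builtin.apt:
        name: [nginx, postgresql]
        state: present
        update_cache: true

    - name: Deploy nginx reverse-proxy config
      ansible.builtin.template:
        src: templates/app.conf.j2   # hypothetical template for proxying to the app
        dest: /etc/nginx/sites-enabled/app.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

One playbook like this, run against a single VM with Cloudflare in front, covers a surprising share of early-stage needs at a flat monthly cost.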


👤 adamcharnock
There are some excellent comments here, so I'll just add my particular flavour.

I think using Kubernetes effectively in 2025 is more about what you _don't_ use than what you _do_ use. As an early-stage startup you can get a long way with no RBAC, no network policies, no auto-scalers, and even no stateful workloads. You can use in-cluster metrics and logging before you need to turn to Prometheus, Loki, etc. Use something managed like AWS EKS.

Try to solve your problems first by taking away, and only if that isn't feasible then start adding. Plain old Deployments will get you a long way.

Now this next bit is going to sound like a pitch, and that's because it is – but when those free credits start running out, your bill starts reaching mid-four-figures, and you start thinking about your first DevOps hire, _call us_. Just for 30 minutes. We can migrate you out of your cloud infra and onto a nice spacious bare-metal k8s cluster, and we'll become your 24/7 on-call DevOps team. We'll get woken up in the night when things break, not you. And core-for-core it will cost a lot less than AWS.

The fact that we can do all that is a testament to how expensive AWS really is. K8s is a good choice if you keep it simple, positions you well for growth in the future, and for a cluster under a couple of hundred cores it is going to be pretty economical to run it in the public cloud.

PS. Link in bio


👤 atmosx
I've seen it work. I’ve managed EKS clusters for small teams myself, so it’s definitely doable.

The real challenge isn’t setting up the EKS cluster, but configuring everything around it: RBAC, secrets management, infrastructure-as-code, and so on. That part takes experience. If you haven’t done it before, you're likely to make decisions that will eventually come back to haunt you — not in a catastrophic way, but enough to require a painful redesign later.

P.S. If your needs are simple, consider starting with Docker Swarm. It's surprisingly low-maintenance compared to Kubernetes, which has many moving parts and frequent deprecations from cloud providers. Feel free to drop me an email - I can share a custom Python tool I wrote a long time ago to automate the initial setup via the AWS API.


👤 davnicwil
When it works, it works well. Just don't spend any innovation tokens messing with it. Consider it likely that you will end up spending a bunch of time discovering its corners if you don't already know them.

Same goes for all tech choices. If you already know it, you understand its pros and cons and it still seems like the simplest best option for the concrete thing you need to build right now, use it.

Otherwise, use whatever alternative tech fits that description instead!


👤 therealfiona
After 7 years - and after me wanting to move off EKS since I got the job 4 years ago - we are moving to ECS (I recently rose to the role of Lead, but my engineers also thought it was a great move, as they're sick of all the K8s BS).

The time sink required for the care and feeding just isn't worth it. I pretty much have to dedicate one engineer about 50% of the year to keeping the dang thing updated.

The folks who set it all up did a poor job, and it has been a mess to clean up - not for lack of trying, but because those same people never got to refine their work; they got pulled into the new hotness and let the clusters rot.

Idk your workload, but mine is not even suited for K8s... The app doesn't like to scale. And if the leader node gets terminated from a scale down, or an EC2 fails, processing stops while the leader is reelected. Hopefully not another node that is going down in a few seconds... Most of the app teams stopped trying to scale their app up and down because of this ...

I would run on ECS if AWS was my cloud at a start up. Then if scaling was getting too crazy, move to EKS.

But for the love of God ... Keep your monitoring and logging separated from your apps. Give it its own ECS cluster, or buy a fully managed solution. It is hard to record downtime if your monitoring goes down during your K8s upgrade.


👤 xyzzy123
There aren't really any huge gotchas imho in 2025 - just watch out that you don't get sidetracked delivering awesome developer infrastructure (preview environments! blue/green! pristine IaC! it's fun!) if there are actually more important things to be working on (there usually are).

At early stage the product should usually be a monolith and there are a LOT of simple ways to deploy & manage 1 thing.


👤 fragmede
Just go all in on Vercel unless you have a particular need for extra backend compute. You haven't given any details that would lead us to believe that's inappropriate, so I'd start there, add Neon, and iterate on the product you're selling rather than over-engineering unrelated pieces - unless infrastructure is where the product you're selling has efficiencies no one else has.

👤 JustExAWS
Why would you need Kubernetes at a startup instead of (in the case of AWS) just using some EC2 instances, a load balancer, an autoscaling group, and a monolithic application? For your backend, just use a hosted version of MySQL/Postgres/Elasticsearch.

It’s simple, no “cloud locki in” (which is really over exaggerated). The only reason to use K8s is for resume driven development. Which honestly is not a bad idea in an of itself because your startup is going to statistically fail and you might as well use the experience to get another job.