HACKER Q&A
📣 KickW

What tools would make your Kubernetes development experience better?


My team and I are brainstorming ideas for open-source dev tools and would love some input from the community!

We want to know: what kind of dev tools would make your Kubernetes development experience better? It could be a tool that simplifies deployments, streamlines cluster management, or enhances scalability, or something completely innovative. All ideas are welcome. TIA!


  👤 redeux Accepted Answer ✓
I understand that this is contrary to your question, but I'd suggest a different approach to your inquiry.

Instead of asking people for their dev tool ideas, ask them what problems they have with Kubernetes that aren't yet solved well for them. With that information you can iterate on dev tool ideas that could potentially solve those problems. In my experience people understand their problems better than the potential solutions to those problems, and right now you're asking people for their solutions.


👤 vvladymyrov
K9s (https://k9scli.io/), a CLI UI for k8s. It saves me a ton of typing when checking pods and logs.

👤 simiones
One thing I just hit: there doesn't seem to be any way for a Pod to request a TLS certificate signed by the cluster itself (one that would only be valid inside the cluster, of course) covering the IP(s) and DNS names that Kubernetes itself associates with the Pod, with automatic rotation of such certs.

Since the infrastructure already knows (and controls) this information, it would be the ideal place to generate a signed attestation that a pod is who it says it is.
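
For reference, the closest existing primitive seems to be the certificates.k8s.io CSR API, which leaves all the hard parts (generating a CSR with the right SANs, validating them, rotating) to you. A rough sketch of today's manual flow; the signerName here is hypothetical, since no built-in signer does this:

    # Sketch only: you generate the key and CSR yourself, and nothing
    # verifies the SANs actually match this pod, nor rotates the cert.
    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: my-pod-serving-cert
    spec:
      request: <base64-encoded PKCS#10 CSR with the pod's IP/DNS SANs>
      signerName: example.com/in-cluster-pods   # hypothetical signer
      usages: ["digital signature", "key encipherment", "server auth"]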


👤 hitpointdrew
I have yet to find a good solution that accomplishes both of these goals:

  1. Have a central location for our Helm charts, so that we have one copy of our charts with separate values for our various environments.
  2. Have tight controls around who is allowed to push what where (devs can push to the dev environment, team leads to QA, etc.)
Separately, each goal is easy to accomplish, but wanting both seems to break the GitOps paradigm. Multiple branches or repos mean you now have Helm templates all over the place, and you quickly get drift. One branch and repo consolidates your templates, but now you either have to allow any dev to change any value for any environment, or you have to slow down your devs and gatekeep the repo, making the few people allowed to make prod changes responsible for merging in ALL changes to every environment.

Ultimately it seems like the best compromise is that you end up with multiple deployments of Argo/Rancher (whatever CD tool you have) monitoring your Helm charts in at least two separate repos: one for non-prod and another for prod.
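
One partial workaround, sketched here assuming GitHub-style CODEOWNERS and hypothetical team names: keep one copy of the chart, split only the per-env values files, and gate merges by path:

    # Repo layout (one chart, per-env values):
    #   charts/myapp/...
    #   envs/dev/values.yaml
    #   envs/qa/values.yaml
    #   envs/prod/values.yaml
    #
    # CODEOWNERS: a PR touching an env needs approval from that env's owners
    /envs/dev/   @org/developers
    /envs/qa/    @org/team-leads
    /envs/prod/  @org/platform-admins
    /charts/     @org/platform-admins

It still funnels chart-template changes through one gate, but devs can self-serve dev values without being able to touch prod.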


👤 coxley
YAML hell.

It's very difficult to have standard, company (or org) wide "templates" for creating resources. And I don't mean literal templates.

In my ideal world, everything would be defined within Starlark. And we could build our own abstractions. Do we want every resource to be associated with a team or cost-center label? Cool - this is now a required argument for the make_service() function.
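
To make that concrete, here's a minimal sketch of the idea (make_service() and its arguments are hypothetical, written in Starlark since that's the language I'm imagining):

    # Hypothetical helper: "team" is a required argument, so every
    # generated Service carries a cost-attribution label by construction.
    def make_service(name, team, port = 80):
        if not team:
            fail("every service must declare an owning team")
        return {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {"name": name, "labels": {"team": team}},
            "spec": {
                "selector": {"app": name},
                "ports": [{"port": port}],
            },
        }

    svc = make_service(name = "billing", team = "payments")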

Even with Kustomize, there's a lot of context switching needed to fully understand what resources a simple service is composed of. And depending on the team, you don't need to know all of them. But because they are files crafted by hand (or at best, initially generated from a common template and modified by hand later) it's extra stuff to pay attention to.


👤 6DM
My advice from personal experience in the ideation phase is to first build out an ideal customer profile, then solicit advice from those people.

The tools I need as a jr dev with no Docker experience vs. as a DevOps admin are going to be vastly different.

As a software engineer who tried to build a business as a solo dev, using Docker/Helm/Kubernetes to host my own app on DigitalOcean, I found all the tooling super complex and time-consuming to learn.

Frankly, my "job to be done" was to publish an app to a remote server and have it automatically restart if it crashed. Hiring someone to set this up didn't help much, because once they set it up I needed to run it on my own, and it was a lot to pick up in addition to the other 4-5 technologies I wanted to use.

I honestly don't care about serverless and I don't want to learn yet another technology; I just wanted to publish my damn app, focus on new features, and try to make some money. This was easier with a VM, but then you share the machine with another app that might not play nice and could take the whole server down. Then the site is down, and you won't know, or can't restart it, until you get off work.

If you can build a tool where I can take an app and push it to a server with HTTPS enabled, without being an expert in Kubernetes or Helm or whatever, that'd be awesome.


👤 0xbadcafebee
Something that goes into the cluster, finds things that are bad/broken/wrong, gives me a "click here to fix this" button, and lets me roll back the change. Something to put in policies, roles, etc. as guardrails against common problems, like not allowing a user to create an external load-balancer. Easier RBAC. Easier network policies. I would have said an operator for external secrets, but that exists now (https://external-secrets.io/v0.8.2/introduction/overview/).
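
The external load-balancer guardrail is one of the few that's expressible today; here's a sketch using ValidatingAdmissionPolicy (available in recent Kubernetes releases; a matching ValidatingAdmissionPolicyBinding is also required, omitted here):

    # Sketch: reject Services of type LoadBalancer cluster-wide.
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingAdmissionPolicy
    metadata:
      name: deny-external-loadbalancers
    spec:
      matchConstraints:
        resourceRules:
        - apiGroups: [""]
          apiVersions: ["v1"]
          operations: ["CREATE", "UPDATE"]
          resources: ["services"]
      validations:
      - expression: "object.spec.type != 'LoadBalancer'"
        message: "External load-balancers are not allowed here."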

Most of what people need in k8s-land is not one small tool, but a complex intelligent solution that (today) requires a well-trained human to do analysis and then figure out some action to take. It's the automation that should be there by default but isn't. So you could start by asking people about problematic or time-consuming situations they face in k8s, and automate that.


👤 bashinator
Something that would facilitate load testing of microservice containers for optimizing the resource requests and limits.
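
Not load testing as such, but the nearest existing helper might be the Vertical Pod Autoscaler in recommendation-only mode, which watches real usage and suggests requests (sketch, assuming the VPA operator is installed and a Deployment named myservice):

    # updateMode "Off" only records recommendations; it never evicts pods.
    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: myservice-recommender
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: myservice
      updatePolicy:
        updateMode: "Off"

Driving synthetic load at the service and then reading the recommendations (kubectl describe vpa myservice-recommender) gets you most of the way to data-driven requests and limits.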

👤 itake
I am a k8s newb, so maybe this is a solved problem or not a k8s problem, but I wish there were better tools for hosting ML models in stg and prd.

Prd has high traffic during the day and low traffic at night. Pods must be manually right-sized based on the model and expected traffic. Each model gets its own pod: 20 models = 20 pods = 20 CPU and RAM configurations per env.

Stg has low traffic (50 queries per day), and hosting a pod per model is expensive: stg costs ~20% of the production bill to serve ~1% of the traffic (if prd costs $10k/mo, staging costs $2k/mo).

I think this can be fixed in the application layer with 2 very different configurations per env, but it would be nice if this was abstracted away from us so we could focus on tuning the models, not configuring proxies and hosting environments.
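
For what it's worth, the day/night swing in prd is roughly what a HorizontalPodAutoscaler per model Deployment covers; it doesn't fix staging, since HPA can't scale to zero (that needs something like Knative or KEDA). A sketch against a hypothetical model-foo Deployment:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: model-foo
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: model-foo
      minReplicas: 1     # HPA's floor; scale-to-zero needs other tooling
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70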


👤 samsquire
- A tool to create Kubernetes YAML from a "docker run" command :-) (kubectl gets partway there; see the sketch after this list)

- A Kubernetes YAML explainer playground where you can paste in a YAML file and it explains what the YAML does.

- Diagram generation from Kubernetes namespace(s)
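
On the first item, kubectl can already emit YAML from flags roughly analogous to "docker run":

    # Roughly "docker run -p 80 -e MODE=prod nginx:1.25" as generated YAML:
    kubectl run web --image=nginx:1.25 --port=80 --env=MODE=prod \
      --dry-run=client -o yaml > pod.yaml

It only covers single-pod basics; anything like volumes or multiple containers still means hand-editing the output.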

I am interested in devops tooling. The space is incredibly complicated and I find other people's workflows and tooling to be confusing.

I wrote a tool that configures terraform/chef/ansible/packer/shellscript and other tools via graph/diagram files: https://devops-pipeline.com/ It's not ready for use by other people, but the idea is there.

If you could make configuring traefik, istio, service meshes, sidecars, and storage easier, that would be amazing. I am inclined to run Postgres outside of Kubernetes.


👤 hosh
- Actually getting Cloud Foundry for K8s (Korifi) production-ready.

I think one of the big pitfalls for teams using Kubernetes is the level of abstraction you are working with. For teams that have the expertise, working with Kubernetes directly can work well … but if you are asking about developer experience, Kubernetes is the wrong level of abstraction. You really need an application platform built on top of Kubernetes. Cloud Foundry is exactly that, but it has taken them a while to build something using native k8s resources.

Using native k8s resources is important. You can make an opinionated workflow, but if you know what you are doing, you can still take it apart and reconstruct it into something specific for the team that will be using it.


👤 amitbakhru
An open-source version of Lens (https://github.com/MuhammedKalkan/OpenLens) would certainly help.

👤 tekbog
I've been learning k8s these past few weeks, and the best thing so far has been the GUI elements coming from Docker Desktop. It might not make much sense for experts, and at the end of the day you do have to use the terminal and get comfortable with it.

However, looking at the pods and checking their env vars has helped me a bit.


👤 Reitet00
A tool that'd make it easy to run the app locally for development while keeping roughly the same files used in production. Docker Compose got this mostly right; compare Compose with the complexity of running a service locally in a micro Kubernetes cluster.
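
For comparison, the entire local story Compose gets right can fit in a few lines (sketch, with hypothetical service names):

    services:
      app:
        build: .
        ports: ["8080:8080"]
        environment:
          DATABASE_URL: postgres://db/app
      db:
        image: postgres:16

Getting the same two services running in a local cluster typically means a registry (or image-loading step), several manifests, and a Service or port-forward just to reach the app.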

👤 cassianoleal
Should have [Ask HN] in the title, otherwise it looks like it's a link to an article.

👤 edwardmcfarlane
Better templating; for example, https://github.com/emcfarlane/kubestar, which uses Starlark for Kubernetes config.

👤 stcroixx
I’d love for deployment to consist of copying one file to one location and as little time spent as possible learning or understanding anything else about the tool.

👤 hosh
I see that there are often things people ask for where, it turns out, someone has already made it.

Maybe an Awesome Kubernetes list is what we need.


👤 spion
The fact that you need a container registry drastically increases the barrier for initial adoption.
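
For anyone hitting this today, the usual workarounds are a throwaway local registry or loading images straight into the cluster (sketch, assuming Docker and a kind cluster):

    docker run -d -p 5000:5000 --name registry registry:2   # local registry
    docker tag myapp localhost:5000/myapp
    docker push localhost:5000/myapp
    # or skip the registry entirely with kind:
    kind load docker-image myapp

Neither feels first-class, which is exactly the adoption barrier.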

👤 adql
The negative experience comes entirely from bugs in the tools, not a lack of features.

👤 99112000
rm -rf /

Best way to deal with Kubernetes