We want to know: what kind of dev tools would make your Kubernetes development experience better? It can be a tool to simplify deployments, streamline cluster management, enhance scalability, or something completely innovative. All ideas are welcome. TIA!
Instead of asking people for their dev tool ideas, ask them what problems they have with Kubernetes that aren't yet solved well for them. With that information you can iterate on dev tool ideas that could potentially solve those problems. In my experience people understand their problems better than the potential solutions to those problems, and right now you're asking people for their solutions.
Since the infrastructure already knows (and controls) this information, it would be the ideal place to actually generate a signed attestation that a pod is who it says it is.
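For what it's worth, Kubernetes' bound/projected service account tokens already get part of the way there: the control plane signs a token scoped to an audience and mounts it into the pod, and a verifier can check it against the cluster's OIDC keys. A minimal sketch, with hypothetical names, assuming the receiving service actually validates the token:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
    spec:
      serviceAccountName: payments-api   # hypothetical service account
      containers:
      - name: app
        image: example/app:1.0           # hypothetical image
        volumeMounts:
        - name: identity-token
          mountPath: /var/run/secrets/tokens
      volumes:
      - name: identity-token
        projected:
          sources:
          - serviceAccountToken:
              audience: internal-services   # the party expected to verify the token
              expirationSeconds: 3600
              path: token

SPIFFE/SPIRE goes further and issues full workload identities, which is closer to the "infrastructure attests who the pod is" idea.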
1. Have a central location for our helm charts so that we have one copy of our charts with separate values for our various environments.
2. Have tight controls around who is allowed to push what where (allow devs to push to the dev environment, allow team leads to push to QA, etc.)
Separately, each goal is easy to accomplish, but if you want both, it seems to break the GitOps paradigm. Multiple branches or repos mean you now have helm templates all over the place, and you quickly get drift. Having one branch and repo consolidates your templates, but now you either have to allow any dev to change any value for any environment, or you have to slow down your devs and gatekeep the repo, so that only the people allowed to make prod changes are responsible for merging in ALL changes to every environment. Ultimately it seems like the best compromise is that you end up with multiple deployments of Argo/Rancher (whatever CD tool you have), each monitoring your helm charts in at least two separate repos, one for non-prod and another for prod.
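One hedged sketch of what that compromise can look like with Argo CD (repo URL and names are hypothetical): a single chart in one repo, one Application per environment that only swaps the values file, and write access to the prod values gated separately (e.g. via CODEOWNERS, or by keeping prod values in the second repo):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: myapp-prod            # hypothetical name; a myapp-dev twin would differ only in valueFiles
      namespace: argocd
    spec:
      project: prod               # Argo CD project with its own RBAC
      source:
        repoURL: https://example.com/platform/charts.git   # hypothetical repo
        targetRevision: main
        path: charts/myapp
        helm:
          valueFiles:
          - values.yaml
          - env/values-prod.yaml  # per-environment overrides live next to the chart
      destination:
        server: https://kubernetes.default.svc
        namespace: myapp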
It's very difficult to have standard, company-wide (or org-wide) "templates" for creating resources. And I don't mean literal templates.
In my ideal world, everything would be defined within Starlark. And we could build our own abstractions. Do we want every resource to be associated with a team or cost-center label? Cool - this is now a required argument for the make_service() function.
Even with Kustomize, there's a lot of context switching needed to fully understand what resources a simple service is composed of. And depending on the team, you don't need to know all of them. But because they are files crafted by hand (or at best, initially generated by a common template and modified by hand later), it's extra stuff to pay attention to.
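As a rough illustration of the sprawl (file names are hypothetical), even a trivial service tends to end up as a base plus per-environment overlays, each of which a reader has to open to know what actually gets applied:

    # Typical layout:
    #   base/deployment.yaml, base/service.yaml, base/kustomization.yaml
    #   overlays/prod/kustomization.yaml, overlays/prod/patch-replicas.yaml, ...
    # overlays/prod/kustomization.yaml:
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - ../../base
    commonLabels:
      team: payments          # the kind of org-wide label you'd rather enforce centrally
    patches:
    - path: patch-replicas.yaml
      target:
        kind: Deployment
        name: myapp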
The tools I need as a junior dev with no Docker experience versus as a DevOps admin are going to be vastly different.
As a software engineer who has tried to build a business with their own app as a solo dev, using Docker/Helm/Kubernetes to host on DigitalOcean, I found all the tooling super complex and time-consuming to learn.
Frankly, my "job to be done" was to publish an app to a remote server and have it automatically restart if it crashed. Hiring someone to set this up didn't help much, because once they set it up, I needed to run it on my own, and it was a lot to pick up in addition to the other 4-5 technologies I wanted to utilize.
I honestly don't care about serverless and I don't want to learn yet another technology; I just wanted to publish my damn app, focus on new features, and try to make some money. This was easier with a VM, but you have to share that with another app that might not play nice and could take the whole server down. Then the site is down and you won't know, or can't restart it, until you get off work.
If you could build a tool where I can take an app and push it to a server with HTTPS enabled, without being an expert in Kubernetes or Helm or whatever, that'd be awesome.
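For reference, this is roughly the YAML such a tool would have to generate just for the HTTPS part, assuming cert-manager and an NGINX ingress controller are already installed (hostnames and names are hypothetical), on top of the Deployment and Service for the app itself:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp                      # hypothetical app name
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes cert-manager is installed
    spec:
      ingressClassName: nginx          # assumes an NGINX ingress controller
      tls:
      - hosts: [myapp.example.com]
        secretName: myapp-tls
      rules:
      - host: myapp.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80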
Most of what people need in k8s-land is not one small tool, but a complex, intelligent solution to things that (today) require a well-trained human to do analysis and then figure out some action to take: the automation that should be there by default but isn't. So you could start by asking people about the problematic or time-consuming situations they face in k8s, and automate those.
Prd has high traffic during the day and low traffic at night. Pods must be manually right-sized based on the model and expected traffic. Each model gets its own pod, so 20 models = 20 pods = 20 CPU and RAM configurations per env.
Stg has low traffic (50 queries per day). Hosting a container per pod there is expensive: stg hosting per pod costs ~20% of the production cost (if prd costs $10k/mo, staging costs $2k/mo to serve 1% of the traffic).
I think this can be fixed at the application layer with two very different configurations per env, but it would be nice if this were abstracted away from us, so we could focus on tuning the models rather than configuring proxies and hosting environments.
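A partial fix inside Kubernetes itself is a HorizontalPodAutoscaler per model, which handles the day/night swing in prd but still won't scale staging to zero (something like KEDA or Knative would be needed for that). A minimal sketch with hypothetical names:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: model-a                    # one HPA per model deployment (hypothetical name)
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: model-a
      minReplicas: 1                   # stg could pin this low; prd raises it for daytime load
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70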
- A Kubernetes YAML explainer playground where you can paste in a YAML file and it explains what the YAML does (see the annotated sketch after this list).
- Diagram generation from Kubernetes namespace(s)
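As a sketch of what the explainer's output might look like, assuming it simply annotates each field of a pasted manifest (the Deployment below is a made-up example):

    apiVersion: apps/v1          # Deployments live in the apps API group
    kind: Deployment
    metadata:
      name: web                  # hypothetical name
    spec:
      replicas: 3                # keep 3 identical pods running at all times
      selector:
        matchLabels: {app: web}  # which pods this Deployment manages
      template:
        metadata:
          labels: {app: web}     # labels stamped onto every pod it creates
        spec:
          containers:
          - name: web
            image: nginx:1.25    # container image each pod runs
            ports:
            - containerPort: 80  # port the container listens on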
I am interested in devops tooling. The space is incredibly complicated and I find other people's workflows and tooling to be confusing.
I wrote a tool that configures terraform/chef/ansible/packer/shellscript and other tools from graph/diagram files: https://devops-pipeline.com/ It's not ready for use by other people yet, but the idea is there.
If you could make configuring Traefik, Istio, service meshes, sidecars, and storage easier, that would be amazing. I am inclined to run Postgres outside of Kubernetes.
I think one of the big pitfalls for teams using Kubernetes is the level of abstraction you are working at. For teams that have the expertise, working with Kubernetes directly can work well, but if you are asking about developer experience, Kubernetes is the wrong level of abstraction. You really need an application platform built on top of Kubernetes. Cloud Foundry is exactly that, but it has taken them a while to build something using native k8s resources.
Using native k8s resources is important. You can make an opinionated workflow, but if you know what you are doing, you can still take it apart and reconstruct it into something specific for the team that will be using it.
However, looking at the pods and checking their env vars has helped me a bit.
Maybe an Awesome Kubernetes list is what we need.
Best way to deal with Kubernetes