HACKER Q&A
📣 raghule

How do you go about starting to develop an application for Kubernetes?


I'm curious, what's the typical lifecycle of a developer in the space of K8s?

Imagine this scenario: Your manager said "hey we are taking a bet on Kubernetes, now go figure out how to run our apps there."

Do you:

1) Start by setting up a local dev environment: use Docker locally, build manifest/Helm files and test them in a local Minikube, and only then think about moving your code from the local dev machine to a cloud-managed K8s provider (AKS, EKS, GKE, etc.)?

or

2) Go create a cluster on the cloud first (AKS, GKE, EKS), learn from the portal that you need a Dockerfile, manifest files, etc., and then use the portal as a way to help develop your app?

or 3) Something else?


  👤 gkapur Accepted Answer ✓
I'd go with 3:

1. First, think through a Kubernetes-native development workflow. Some tools that could be helpful here are DevSpace (personal favorite), Tilt, or Skaffold, or you could use Okteto for something full stack. Alternatively, you could use Minikube or Kind locally, but these are more local adaptations than K8s/cloud-native workflows (see the sketch below).

2. Create a cluster on GKE.

3. Figure out how you want to do CI in your workflow.

4. Profit?
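
To make step 1 concrete, here's a rough sketch of what the inner loop can look like with Kind plus Skaffold (any of the tools above would work; the cluster name is just a placeholder, and this assumes your project already has a Dockerfile and manifests for skaffold init to pick up):

    # Throwaway local cluster
    kind create cluster --name dev

    # Generate skaffold.yaml from the existing Dockerfile/manifests
    skaffold init

    # Build, deploy, and redeploy to the cluster on every file change
    skaffold dev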

👤 WJW
More like 2 I guess? Tbh running apps on Kubernetes is quite simple, provided you have a pre-existing cluster. Running Kubernetes itself is definitely not simple, and unless you already have a team skilled at running such things, I would always opt for one of the managed offerings from the big cloud providers.
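
For example, standing up a managed cluster on GKE is roughly a couple of commands (cluster name and zone are placeholders, not a recommendation):

    gcloud container clusters create demo-cluster --num-nodes=3 --zone=us-central1-a
    gcloud container clusters get-credentials demo-cluster --zone=us-central1-a
    kubectl get nodes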

👤 mindcrime
My dayjob employer started a massive shift to K8S (in the form of AKS on Azure) a while back. I was pretty heavily involved in that, and can share some thoughts based on my experiences. There are a lot of variables though, so "YMMV".

By and large, your option (1) is closer to what we did, but with the caveat that most of us didn't stay on the "use Minikube" path very long, because it's somewhat limited compared to a real cluster, and can really bog down a developer laptop, etc.

Also, consider that we were coming from a situation where our deployments were to Docker Swarm, so we already had all our apps built for containerization in the general sense, and just had to port things from Docker Swarm to Kubernetes. Not quite the same as starting from scratch.

Anyway, we got automation in place for provisioning AKS clusters, which made it easy for developers to quickly spin up a small cluster (2-3 nodes, with auto-scaling optionally turned on) on AKS and interact with it directly using kubectl. Most developers quickly started doing all their initial prototyping and testing on a real cluster instead of Minikube.
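
For reference, spinning up a small AKS dev cluster like that is roughly the following (resource group and cluster names here are made up, not our actual setup):

    az aks create \
      --resource-group dev-rg \
      --name dev-cluster \
      --node-count 2 \
      --enable-cluster-autoscaler \
      --min-count 2 \
      --max-count 3 \
      --generate-ssh-keys
    az aks get-credentials --resource-group dev-rg --name dev-cluster
    kubectl get nodes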

Early on we hand-wrote K8S deployment YAML files and ran deployments using kubectl apply -f filename.yml. But we quickly moved to Helm, and by and large I'd say that was a good move. Most deployments now are done using Helm, run under an Azure DevOps pipeline (but you can still do all this stuff from the local command line if you want, at least in non-prod).
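
To give a feel for both phases, here's a minimal hand-written Deployment applied with kubectl, followed by the Helm-based equivalent (app name and image are placeholders, not our actual manifests):

    # Hand-written manifest phase
    cat > deployment.yml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: myregistry.azurecr.io/myapp:1.0.0
            ports:
            - containerPort: 8080
    EOF
    kubectl apply -f deployment.yml

    # Later, the same thing packaged as a chart and deployed with Helm
    helm create myapp
    helm upgrade --install myapp ./myapp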

The good news is, you can do a lot of this in an incremental fashion. That is, you don't have to use every feature of K8S and every fancy supporting tool right out of the gate. You can start by taking a basic Docker image, just getting it to run, and configuring a NodePort for access. Over time you can start integrating Secrets, ConfigMaps, Services as external LoadBalancers, PersistentVolumes, Helm, yadda, yadda, yadda.
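
Roughly, that incremental path can look like this with plain kubectl (names, images, and values are placeholders):

    # Day one: just get the container running and reachable via a NodePort
    kubectl create deployment myapp --image=myregistry.azurecr.io/myapp:1.0.0
    kubectl expose deployment myapp --type=NodePort --port=8080

    # Later: layer in config, secrets, and a proper LoadBalancer Service
    kubectl create configmap myapp-config --from-literal=LOG_LEVEL=info
    kubectl create secret generic myapp-secrets --from-literal=DB_PASSWORD=changeme
    kubectl expose deployment myapp --type=LoadBalancer --port=80 --target-port=8080 --name=myapp-lb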

We also adopted Istio as part of our stack early on, and that has also been a big success for us. Istio provides a lot of facilities that are very handy and/or desirable. The biggest win is the mTLS support, which makes it easy for us to comply with the (internal) requirement that all "on the wire" traffic be encrypted. Beyond that, the traffic management facilities with VirtualServices have proven quite useful. We're just starting to explore integrating Istio's traffic management with Azure Active Directory using OIDC. If it works, that's also going to be a big win for us.
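
For the mTLS piece, the mesh-wide "strict" policy is just a few lines of Istio config, something like the following (this assumes the default istio-system root namespace; our actual policy may differ):

    kubectl apply -f - <<'EOF'
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system
    spec:
      mtls:
        mode: STRICT
    EOF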

Note that we did have to make some code changes, since our old model used the Netflix OSS stack along with Spring Cloud Config, and once we went to K8S we were able to throw out Eureka, Hystrix, and some of that stuff. Spring Boot does have a handy library that provides "native" interop with K8S via the API server to automatically import ConfigMaps and what-not, which is useful if you're a Java/Spring shop.

Last note: personally, I don't use the Azure Portal very much, but I'm biased towards being a command-line guy in general. Most of my interactions with Azure and AKS are through the az and kubectl command-line tools (and occasionally istioctl or helm as well).
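
A typical session for me looks something like this (resource names are placeholders):

    az aks get-credentials --resource-group dev-rg --name dev-cluster
    kubectl get pods -A
    helm list -A
    istioctl analyze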