For simpler workflows, e.g. a single API endpoint, there are serverless and other SaaS services out there that let you build from that starting point, saving a huge amount of time and money compared to building it traditionally with a web framework.
Now that we've had a few years of experience with how people are actually using the platform, I could see a simplified version of Kubernetes taking hold for businesses still running complex sets of services: something that's easy to install, actually comes with batteries included, and is secure by default.
For me the sweet spot is monolith(-ish) 12-factor applications packaged up in containers. In this setup I can just `docker-compose up` my dependencies (postgres, redis, rabbitmq, other services, ...) and run the application from my IDE against those (so I can use the debugger instead of printf-to-cloudwatch). For production I can package the app in a container and deploy to a container orchestration platform (Kubernetes, ECS, or something else).
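For illustration, a minimal docker-compose.yml along those lines; the specific image tags and ports are assumptions, not from the original comment:

```yaml
# Local development dependencies only; the app itself runs from the IDE
# against these, so the debugger works instead of printf-to-cloudwatch.
version: "3.8"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  redis:
    image: redis:6
    ports:
      - "6379:6379"
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
```

With this in place, `docker-compose up -d` brings up the dependencies and the application connects to them on localhost.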
To answer your question, what comes after Kubernetes?: I'm hoping for a platform that consolidates the good ideas we've seen in Terraform/Pulumi/Kubernetes/CMake/Docker Compose/Swarm. I want to write a portable, idempotent "deployment build script" that I can apply against a cloud provider, bare metal, or localhost in a similar way, with good support for different configurations depending on the environment (like C ifdefs or CMake options).
For example: when I apply the script against my localhost it spins up a postgres container; when I apply the same script against my AWS account it configures an RDS postgres instance. Both invocations pass the connection string along to dependent services.
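A toy sketch of that idea; every name and connection string here is hypothetical, and a real tool would of course talk to Docker and AWS rather than return dicts:

```python
# One declaration, two provisioning outcomes, both handing the
# connection string onward to dependent services.

def provision_postgres(env: str) -> dict:
    """Return a provisioning plan plus the connection string that
    dependent services would receive."""
    if env == "localhost":
        return {
            "action": "run_container",
            "image": "postgres:13",
            "conn": "postgres://app@localhost:5432/app",
        }
    if env == "aws":
        return {
            "action": "create_rds_instance",
            "engine": "postgres",
            "conn": "postgres://app@app-db.example.rds.amazonaws.com:5432/app",
        }
    raise ValueError(f"unknown environment: {env}")

local = provision_postgres("localhost")
cloud = provision_postgres("aws")
```

The point is that the script is the same; only the target environment changes which backend gets provisioned.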
Basically, morph docker-compose.yml into a portable CMake for container orchestration.
I think this will be driven by a few trends:
- Very fast disks to store data SQLite style, auto replicated transparently at the disk level.
- Typed languages becoming more usable (tooling and language improvements), which makes it easier to design and operate systems (stack traces vs distributed debugging/tracing).
- Improvements in Heroku-like systems from AWS/GCP etc.
I know there are great Terraform modules out there already, but I wonder if many of HashiCorp's products will soon be available as "managed" services as well, allowing people to try something new with less maintenance cost.
Frankly, I think the serverless model will end up dominating in the long run. What's really missing there isn't a better abstraction for k8s. It's the DX workflow and local development experience that still requires a lot of work: emulators for services, workflows, good IDE integration, an open service standard. That's where the next layer happens.
However, Docker has some downsides:
(1) It makes the operating system the unit of computation
(2) Docker relies on a completely homogeneous infrastructure (same chipset, nearly the same operating system).
(3) It relies on a Virtualization layer (usually KVM) to be run securely
I believe (1) caused a vast amount of complexity when scaling, since you are scaling "OS instances", not applications or functions; Kubernetes inherited most of these issues and is perhaps much more complex than it should be.
I think (2) is key because with new trends coming up (such as edge computing, serverless, IoT and 5G), we are starting to need to run software universally across more diverse infrastructure (different chipsets).
And (1) and (3) cause a significant slowdown in startup times (see Firecracker, a state-of-the-art project with a ~250ms best-case startup time, which is not good enough).
I believe most of these issues are solved with WebAssembly: simpler scaling, universal execution across diverse infrastructure, and fast startup times. And I'm actually working full-time at Wasmer [1] to make this vision a reality! :)
I think the most likely outcome is that K8S continues to entrench itself and essentially nothing ever displaces it on its own turf. Eventually, the game will move to some other turf and the process starts over.
My statement here is about general trends, not a full explanation of how the whole software lifecycle works. My point is that if you look more carefully, it is not the case that everything everywhere is in constant churn and that you can therefore safely assume K8S is on the verge of being displaced. The antecedent is not as true as it may appear on the surface. Under the boiling froth, many more things in the software space are relatively stable.
There have been a few popping up recently; time will tell who wins.
For the next 10 years, I don’t see Kubernetes going away, but rather evolving into something that’s dead simple for anyone to set up and use. Its flexibility as a platform capable of easily running whatever you want is somewhat unmatched.
Honestly, the only way I see k8s being replaced is by a fork if there is some governance dispute. I don’t see the platform changing for at least 10-20 years.
Things that are interesting about it:
- any device and any workload from IoT to servers.
- a totally new security model from the ground up, which is how I expect it to be the main driver for business use cases.
- new app delivery model which is kind of like a weird mix of the web as you know it currently, native apps and kubernetes.
- seems to be developing an interesting interop story, where the goal is that you will also be able to run Android and Linux apps on it despite the fact that it’s not Linux and not based on Linux. Everything is custom-made from the ground up.
- the development story is also super interesting: all IPC happens over a gRPC-like protocol, which provides a nice language-neutral way to develop for the platform and to integrate with everything else.
Some of the other projects I’ve seen coming out of Google make way more sense in this context too. Flutter for web is a good example which I know is not exactly a crowd favorite here on HN but I’m serious when I say that now might be a good time to go and look at Dart again. It’s actually a really nice modern language if you are coming from JavaScript, Typescript or Java you will be up and running very quickly.
I think this is about to become a hugely disruptive force. If Google manages not to screw this up, that is.
For a slightly longer-term bet, I’d say that by the time it hits, say, v5 it will be the biggest OS / platform in the world by a considerable margin.
Why do I need to build a container image when I can just list my dependencies and provide some code to run? For example: "I need Programming_Lang version 4.14, Library_A version 2.2, and Library_B version 1.5".
I don't care what the underlying operating system / system libraries are as long as it is fast and secure.
I just need to run my code. Why should I need to manage the rest as long as it works?
I know this is already possible, but it's not very easy, especially across laptop, cloud and bare metal; Kubernetes on bare metal is a pain (no simple ingress if you don't have access to the router or load balancing).
I hope someone tells me this exists but I tried a lot of solutions and they all fail at something (complexity / dependency on big cloud providers / dependency on specific network capabilities etc).
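A hypothetical manifest for that model, reusing the placeholder names above; the platform, not the developer, would pick the OS, system libraries and base image:

```yaml
# Declare what the code needs; everything below the runtime is the
# platform's problem.
runtime: Programming_Lang 4.14
dependencies:
  Library_A: "2.2"
  Library_B: "1.5"
entrypoint: app.main
```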
My bet: rich fat clients. Super powerful phones that can run your entire e-commerce monolith to process my checkout. I’m talking about hosting your backend (most of it) and DB on my phone. Cheap syncs from my phone to your “main” server is fine because I’m always online and I have 10G connection. This will also force a shift in the way we develop programs.
So you no longer need to deploy and maintain hundreds of services. You’ll only need to deploy and maintain one that gets synced to millions of devices (no app stores) rather fast (because, yes, a new compression algorithm will emerge and your whole monolith is now only a couple of megabytes).
- More people using multiple clouds. Because putting all your eggs in Amazon's will start to be less acceptable, for cost reasons and lock-in reasons. Kubernetes was supposed to be a neutral interface but I don't think that has panned out. I'm curious if anyone actually uses Kubernetes in a cloud-portable way.
- Less reliance on cloud SDKs and cloud-specific features, and more open source tools. Less reliance on Docker (which is rightly being refactored away.) I hope to do something with Oil here: http://www.oilshell.org/blog/2021/04/build-ci-comments.html .
- In case it needs to be said: more usage of the cloud. It does make sense to offload things rather than host your own, but right now it's still in an awkward state.
Good thread from a few months ago about problems with serverless dev tools: https://news.ycombinator.com/item?id=25482410
Good list of obstacles and use cases for serverless: https://arxiv.org/abs/1902.03383
No more containers to develop or to deploy, and eventually, no UNIX filesystem around the runtime.
I don't think the replacement for Kubernetes will be something at an equal or even lower level (more barebones, like Nomad). It will be something higher-level, enabling more features, not just parity.
What comes next could be a proper, self-hostable PaaS. There are a few out there, but most are either walled gardens (fly.io, Heroku, App Engine, Beanstalk) or self-hostable but complicated or not easily scalable (Cloud Foundry, etc.).
In a way, Kubernetes also did the same thing that most of its predecessors did. But the main difference was that it offered a common low-level abstraction of APIs and operators which allowed a lot of solutions to be built on top. It was not just a CaaS; it was also the "standard model" to run things underneath. The unit of the model was always a "workload" or container.
Similarly, the next PaaS could also do the same thing as today's solutions, but if it becomes the new "standard model", where the core unit is an "application" (not just a container), it would be amazing. Deploy applications with hundreds of standard, open-ended plugins for distributed tracing and the like. An open-ended Heroku.
NixOS is weird and the Nix language is super weird, but the concept is powerful, and the dev community can benefit tremendously from it.
If HashiCorp does substantial work to put the pieces together on a full Hashistack deployment that competes with K8s, that will be a good option too.
Also, there's the option of just using cloud stuff instead of an orchestrator. You could use AWS ALB and autoscaling groups to do much of the same thing, and even manage the infrastructure in Typescript or Python with both Pulumi and CDK (or just use TF or CF)
Disclaimer: I have no Kubernetes experience; I read a few documents and opted for Nomad, with great success so far.
I can't wait for the next layer of complexity to be added. It will no doubt be called something like WizzlyBobATron.
By that I mean, entirely API driven, declarative, etc.
But maybe monoliths are the way, as many commenters are hoping for, because sure, distributed systems are hard, no matter how nice the abstraction.
Applications will probably be Flatpaks instead of containers, with some config for required data stores, NICs, port openings, and CPU and RAM requirements.
In short: Plan 9 using Linux tooling.
Also, k8s adoption is standardizing visibility and forcing security on everyone (SSL everywhere, encryption, authentication for each request, etc.).
That mass organizational evolution in enterprises opens up a lot more simplicity for just-containers or bespoke scaling strategies. Go ahead and run micro, nano, macro, or mega services. Don't be limited by docker "conventions" on size or complexity.
Of course serverless was the natural evolution of k8s, but the "DC OS" is still a very, very nascent thing that is far behind something like a POSIX standard for portability. Serverless is all lock-in right now. k8s was nice in a way because at least it was SOMEWHAT lock-in-free as an architecture/framework. If you squinted. Hard.
Standards and portability breed true flexibility and good tools, and we probably need a lot more of that in the cloudrealm.
Virtual private servers, but suspended/resumed/scaled for you without you needing to worry about the underlying method of deployment. AWS Lambda + Aurora but running actual operating systems. You want more threads or memory: they're there for you. Every user process is metered and monitored. Charged by cpuops + memoryops + iops + hdstorage bytes by the second. You never need to worry about how much disk you need; it's there, unlimited except by your pocketbook. You never need to worry about how much compute you need; it's there, unlimited except by your pocketbook. You never need to worry about how much memory you need. Backups of your drives are automatic.
All data is encrypted to allow it to live amongst all the other tenants' data. All wire traffic is encrypted.
It works exactly like ec2, except it's on-demand usage and overprovisioned and multi-tenant. Alarms and user-set limits for price and scale are respected. The only thing that will leak about the abstraction is that you'll have to mark processes as suspendable and dependent. You don't need to run your metrics collector if your app isn't running. Cron jobs should wake up the server, etc. Ssh/scp just works. You get service discovery out of the box, and your point of entry is the app lb dashboard. A real, on-demand, virtual private cloud.
What we need is an abstract layer to automate the systems underneath so we can leverage that abstraction and create more complexity
There are still some folks who understand the abstraction all the way up and down. We cant have that
If the lesson from k8s is that after containers we needed container orchestration, the business-logic analog is that services will need service orchestration.
(disclaimer: I work here, so I'm both biased and have skin in the game)
Then, hosting those apps in some form will likely need to adopt AI-centric workflows in some way. We might even see AI-driven request routing and AI-driven WAF features at some point, too.
And perhaps geographical regulations will become so demanding that you need something like durable objects to store & process data in country of origin.
That will come, but it’s a ways out and difficult to see. Most of what we’ll see are different abstractions that utilize Linux under the hood. Same with Kubernetes. It’s not that a new kind of OpenStack is going to usurp it; it’s that what comes next will be built on top of it as it becomes more and more the norm… until the next major computing revolution. (Completely decentralized storage on phones a la Pied Piper? Quantum computing?)
However, an alternate perspective is that microservices were not micro enough to be worth the overhead and cost. My opinion is that the reason none of this feels great is that a container is not a small enough base unit to let you completely forget about infrastructure.
That's why I'm betting the next paradigm that gains traction is fully serverless architectures. The overall direction things have been going for decades is to make hardware more invisible and I think we finally get pretty close with serverless.
Why not do the same for a typical Kubernetes setup with Hasura, an API server, a gateway, a load balancer, PostgreSQL, Redis, and a couple of services for logs and monitoring, all in one, with all the networking already set up? Just give me an image or template made by someone else, maintained on GitHub, and ready to customise and develop for my needs.
We're still waiting for that magical band to appear.
- Supervisor processes.
- Small process size.
- Failure-friendly design.
All of which allows easy vertical / horizontal scaling, robust concurrency and scalability, and automated process management.
Seemed promising the way WhatsApp / Discord have used this. Obviously not a classic DevOps deployment or a direct competitor for Kubernetes. But doesn't disruption happen from the sidelines?
There will be languages for designing distributed systems without having to manually design each component. The language generates the interfaces: the same codebase is built as a monolith, while certain parts are generated as microservices so they can scale.
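One way this could look (all names here are hypothetical, not an existing tool): annotations mark which parts of a codebase the toolchain may extract into independently scalable services, while a monolithic build keeps them as plain function calls:

```python
# Registry of functions a hypothetical compiler could split out into
# separate services in a "distributed" build.
SERVICES = {}

def scalable_service(fn):
    """Mark a function as a candidate for extraction into its own service."""
    SERVICES[fn.__name__] = fn
    return fn

@scalable_service
def resize_image(data: bytes) -> bytes:
    # CPU-heavy work that benefits from independent scaling.
    return data[::-1]  # placeholder transformation

def handle_upload(data: bytes) -> bytes:
    # In the monolithic build this is a direct call; in the generated
    # distributed build it would become an RPC to the resize service.
    return resize_image(data)
```

The application code is the same either way; only the build target decides whether `resize_image` runs in-process or behind a network boundary.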
What most backend devs want is to write code and stick it behind a URI endpoint and make sure it keeps running with networking, auth, security, monitoring, scaling, and reliability taken care of by someone else.
They might even allow on-prem edge compute. Why not have a device in each employee's home, or a few devices at the office?
One way to do it is you have semi-imperative code that runs, the output of the code is a description of the system to be deployed. Then you have some kind of diffing system that figures out how to take your existing cloud deployment and turn it into the new version described by the output of your code.
This is how Pulumi works for example.
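The describe-then-diff model can be sketched in a few lines; this is a toy differ, not Pulumi's actual engine:

```python
# Code emits a desired state; the differ compares it with the current
# deployment and yields the actions needed to converge.

def diff(current: dict, desired: dict) -> list:
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("delete", name))
    return actions

current = {"web": {"replicas": 2}, "worker": {"replicas": 1}}
desired = {"web": {"replicas": 3}, "cache": {"image": "redis:6"}}
plan = diff(current, desired)
```

Running the plan (create `cache`, scale `web`, delete `worker`) is what turns the existing deployment into the one the code described.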
You want a collective that nodes can be added to simply, with compute, state, storage and networking to be fully distributed autonomously.
I imagine being able to address an object via some hash (IPFS style) rather than networking kludgey.
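A toy version of that content addressing, where an object's hash is its address regardless of which node happens to hold the bytes:

```python
import hashlib

# Stand-in for a distributed store; in IPFS this would be the network.
STORE = {}

def put(data: bytes) -> str:
    """Store data under the hash of its contents and return the address."""
    key = hashlib.sha256(data).hexdigest()
    STORE[key] = data
    return key

def get(key: str) -> bytes:
    """Fetch by content address; any node holding the bytes can answer."""
    return STORE[key]

addr = put(b"hello cluster")
```

Because the address is derived from the content, the same bytes always get the same address, and the caller never needs to know a hostname or IP.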
I had great hope in google Native Client [1] when it came out.
see https://www.nextplatform.com/2019/02/20/google-wants-cloud-s... for better explanation from Google's Urs Hölzle
I’m happy with Compose. It works. I know Swarm was supposed to solve this, but it's dead now (?).
I upload a chart or a compose file describing the cluster, and Kubernetes just happens.
No tweaking, no command line - just a GUI.
Hopefully we have more iteration on configuring k8s. I'm not convinced we'll so easily go back to managing instances with monoliths.
Kubernetes is a thing now, but its patterns are still under-discussed, under-practiced, under-deployed in the rest of the software world. We will get better at being like Kubernetes, to great benefit. Folks learning how control loops are advantageous, and learning to use universal API servers for all their systems, will continue to drive not just Kubernetes but the patterns underlying Kubernetes further into our applications, services and technologies. Tech like KCP[1] is an early indicator of this interest: using the soul of Kubernetes, if not its specific machinery, by creating an independent, non-Kubernetes but Kubernetes-compatible API server. Having universal storage and autonomic system control are huge advantages when system building, and gaining those benefits is fairly easy, with or without Kubernetes itself.
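The control-loop pattern can be sketched minimally (names hypothetical; a real controller watches an API server rather than plain integers):

```python
# A controller repeatedly compares desired state with observed state
# and takes one corrective step, rather than running a one-shot script.

def reconcile(desired: int, observed: int) -> str:
    """One iteration of the loop for a replica count."""
    if observed < desired:
        return "start_replica"
    if observed > desired:
        return "stop_replica"
    return "noop"

def converge(desired: int, observed: int) -> int:
    # Drive the system toward the desired state, one step per iteration.
    while (action := reconcile(desired, observed)) != "noop":
        observed += 1 if action == "start_replica" else -1
    return observed
```

The advantage over a script is that the loop is self-correcting: whatever perturbs the observed state, the next iteration steps back toward the declared goal.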
I'm hoping DIY cloud'ing becomes more of a thing. Leaving everything in the hands of hyperscalers is a puzzling and hard-to-imagine state of affairs, given the hardware nirvana we've experienced in the past two-plus decades. Kubernetes is the first viable multi-system operational paradigm, the first DIY-able cloud we have, and it's shocking it took that long. But smaller practitioners getting good at converting their sea of individual boxes into something more resembling the Single System Image dreams of old, albeit through a highly indirect Kubernetes-ish route, is a decade- or decades-long quest we seemingly just started on a couple of years ago.
I'm hoping ubiquitous and pervasive computing eventually starts to dovetail with this world: that we start to have a better view of, and visibility into, all the computing resources around us, via standardized, well-known interfaces, rather than the hodgepodge of manufacturer-controlled, invisible, un-debuggable overlay networks that, alas, constitute the vast majority of internet use these days. Alas, the news there is never good: the new Matter standard is, like Thread, inaccessible and unviewable; consumers are expected to remain dumb, ignorant, unaware of how any of it works, merely thankful for whatever magic they receive[2]. But as the home-cloud re-establishes computing as a base and competency for the manor itself (#ManorCompute), and as good projects like WebThings[3] or whatever takes its place light the darkened damp corridors only robots patrolled, I hope for a reawakening. I hope a silent majority becomes more real and known; I hope the fed-up, sick-of-this-shit, disgusted-with-remote-service-based-technology world starts to manifest and apply real pressure to bring forth a healthy, pro-human, pro-user ubiquitous and pervasive computing that gives us an honest shake, that shows what it is, that integrates into our personal home clouds.
I think there's a huge pent-up demand and desire for flow-based/evented systems, for Yahoo Pipes, for Node-RED[4]. The paradigm needs help; I think there are too many missing pieces for something like Node-RED to be the one, but re-establishing user control, giving us ALL flexible means to compute, is key. Exposing and embracing some level of technical literacy is something people want, but no one knows how to articulate it or what they want. We're mired in a "faster horses" stage, and it's fundamentally incorrect.
Last, I have huge hopes for the web. There are incredibly awesome advances being made in the range of peripherals, devices and capabilities the web supports. The web can do so much more. We're barely beginning to use the cutting-edge ServiceWorkers, barely beginning to use Custom Elements ("Web Components"), and these aren't even that new any more. These are fundamentally revolutionary technologies. Things like File System Access just came back on the scene after over a decade of going nowhere. The secondary-screen working group is tying together multiple systems in interesting ways. There's a lot of high-tower stuff, in WebAssembly (especially once Interface Bindings starts to allow real interop with JS), in TypeScript, but to me, rather than just building up up up, there are some very real re-assessments we ought to be making about how and what we build. Trying to make self-documenting machines, trying to make computing visible: these aren't concerns of industrial computing, but they are socially invaluable advances that have been somewhat on hold in the age of Pax React-us. We're well over half a decade in, and while there are endless areas to learn about, improve and get better at in this highly industrialized toolset, I want to think there are some slumbering appetites, some desires to re-assess. I'm a bit afraid of WebAssembly becoming a huge tower-of-Babel time-sink, a re-industrializing focus that distracts from the need for a new vision quest, but I have hope too; I see the yearning, albeit often expressed in lo-fi counter-culture, which to me is a distraction and avoidance rather than the socially empowering act that has been a quiet part of the web's promise[5].
So I have a lot of techs that seem promising to me. I want to leave off where I started, which is with communities and winners. Whatever happens, for it to go gangbusters, it needs to be accessible. It needs to be participatory, allowing a rife ecosystem to grow up and flourish around it. VMs, Docker, Kubernetes: these examples each spun up huge waves of innovation around them. They pass Tim O'Reilly's "create more value than you capture" test, which is core to a tech metastasizing from a specific technology into a wide social practice, into something deeply engaged with by a wide range of users, each testing and advancing various frontiers of the idea. Tech that can't seed its ecosystem and keep it vital ossifies and becomes boring. Tech that can grow a healthy community of adept, knowledgeable, driving practitioners has a chance of gaining the social collaboration, the social presence, to matter and become a new pivot, has the chance to leave a real mark. Each of the techs I've mentioned struggles with an industrial-versus-social-good problem, struggles to become free enough, to matter enough to become interesting again, but I think we're in a much better place than we've ever been to take any one of these (DIY clouds, ubicomp, flow-based systems, the web) to the stars.
[1] https://github.com/kcp-dev/kcp
[2] https://staceyoniot.com/project-chip-becomes-matter/ https://news.ycombinator.com/item?id=27123944
[5] https://webdevlaw.uk/2021/01/30/why-generation-x-will-save-t... https://news.ycombinator.com/item?id=27083699
The mental model of capabilities is something a 5-year-old can grasp: like taking a dollar out of a wallet, the most you can lose when you give it to the child is the dollar.
You can't say "give this task N cycles and these 4 files ONLY" in any of the current round of frameworks. This is 50 years overdue.
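A minimal sketch of that capability style; the classes are hypothetical, not a real framework. The task receives an explicit handle and nothing else, so the most it can leak is what it was handed:

```python
import io

class ReadCapability:
    """Grants read access to exactly one file-like object."""
    def __init__(self, fileobj):
        self._f = fileobj

    def read(self) -> str:
        return self._f.read()

def task(cap: ReadCapability) -> str:
    # The task has no ambient filesystem access; it can only use the
    # single handle it was given, like the child with one dollar.
    return cap.read()

result = task(ReadCapability(io.StringIO("one dollar")))
```

Contrast this with the usual model, where any code in the process can open any path the whole process is allowed to touch.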
Realistically, VMs. The toolsets are very mature, and it was an incredible waste to spend so much time getting them to work so well and then dump all of it for containers, which has you starting over at zero again due to the lack of maturity and tooling that is all over the place and requires tons of elbow grease to integrate and make work for your use case.
No, really. It's a cycle.