By cluster, I mean the servers that are serving the website or mobile application.
I have been asking a few companies, and it seems that some small companies (5-10 developers) have one important service that they develop independently, like a monolith, on one VM/container/desktop. The rest of the services (ingress, load balancer, autoscaler) are just scaffolding to put the SaaS (or some other product) together around that one monolith. And I am guessing there are other companies where the product is so big (Google Workspaces) that one cluster per developer is cost-prohibitive.
It would be great to know how different companies do it.
For instance, we currently reuse environments, but we are working on a way to build a queue of fresh environments that rebuild themselves after you are done with them, so that developers always get a clean cluster when they request one.
I will say that we almost never run into problems due to environments being shared between developers - the “clean room” approach is more for developer confidence and feeling good than anything else.
The industry gold standard for any given software application that will be deployed to the cloud is:
- Define infrastructure as code (Terraform, AWS CDK, etc.)
- Have automated mechanisms in place to deploy self-contained pre-production and production environments of the above infrastructure
- Allow developers to create a “personal” deployment of the above, similar to a pre-production environment but with any changes the developer wishes to make as part of development. This should be as simple as running a few commands.
A harness that allows fully local development of portions of the above (or all of it, depending on the software application) can also be desirable, to speed up iteration.
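As a rough sketch, the "personal deployment" step could be built on Terraform workspaces. This is one possible approach, not the only one; the `env_name` variable and the naming convention are assumptions, and they presume the repo's Terraform config parameterizes resource names by environment:

```sh
# Hypothetical per-developer deployment using Terraform workspaces.
# Assumes the config reads var.env_name and prefixes resource names
# with it, so each workspace gets an isolated copy of the stack.

terraform workspace new dev-alice          # separate state for this developer
terraform apply -var="env_name=dev-alice"  # stand up a personal environment

# ...develop and test against the personal environment...

terraform destroy -var="env_name=dev-alice"  # tear it down when done
terraform workspace select default
terraform workspace delete dev-alice
```

Wrapping these in a small script or make target is usually enough to get the "as simple as running a few commands" experience described above.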