As a result, what makes a "team" is its leadership structure, i.e. it's determined by the number of engineers who want to be leaders (which includes accountability and ownership). If you have a team with multiple leader-types who want more ownership, and a natural seam to spin off along emerges in the solution space, they will tend to become the communication hubs for other engineers and eventually spin out into their own subteam with their own microservice. They'll want the ability to release when necessary and to choose dependencies without needing to convince anyone beyond their immediate subteam.
In short, if you're 1 person, microservices make little to no sense given the additional work and complexity involved. If you're more than 1 engineer, it depends on individual engineers' willingness to be a lead, own a whole part of the system, and become a communication hub. Companies like Google and Amazon have lots of microservices because so many engineers want to be tech leads (whether for promotion or self-fulfillment).
Drawing a line in the sand between parts of your service and creating common interfaces between them is not a bad idea, but I'd argue that doing so too early imposes the ongoing cost of dealing with the communication between those parts.
It is a similar issue to programmers who try to write everything generic from the start or optimize prematurely: it can get in the way of productivity and make your service inflexible in a development phase where it should still be very flexible.
That being said, I believe the core ideas of microservices, applied at the right time in the right project, would do wonders, but just like with blockchain, the hardest part might actually be deciding when to use them and when not to.
However, as the application grows, it will most likely encounter scenarios where each microservice needs to do something more than what it was originally intended to do. A service might need information from an API or access to additional data from a database somewhere before it can work properly. And if that's the case, these hypothetical APIs and databases might be needed in other parts of the application as well. So now, instead of everything being structured nicely with interchangeable instances, you are dealing with a network where any given node can no longer be updated independently of the others. It can quickly become a large mess.
That aside, depending on the needs of your application and how it's set up, the various microservices can create a much larger cost than something like a monolithic system. Typically, each service would exist on its own instance, and your hosting provider will happily help you figure out your usage needs and the price that will come along with it.
Differences in the maturity of software. You have a mission-critical, revenue-generating core and newer experimental forays into new territory. You want to move into that new territory rapidly, tolerating more risk, but you don't want to destabilize your cash cow.
Certain parts have differing requirements. e.g. a payments system may need greater auditing and compliance levels and separating it makes this easier to do.
The other often-cited reason is the ability to scale the infrastructure independently. This is still true even with cloud offerings; being able to size and configure parts differently can get you a long way into scaling. The alternative view is that you'll need to shard eventually, and figuring this stuff out sooner can save you from having to implement a lot of intermediate-scale solutions. Even with shards, you can still run into reasons to run parts of your system differently, e.g. choice of datastores, storage engines, isolation levels, etc.
I've worked in pretty much all these cases, as well as the "let's do microservices from the start" one. That was hard, as we were doing instant messaging and message delivery failure rates are critically important. At the same time, it wasn't unachievable with only a 3-4-pizza-sized number of devs. Not optimal, though, when the number of services roughly equals the number of devs and you have turnover and have to learn/teach them all. I can say that being able to change something and have CI run unit/integration/end-to-end/journey tests in a blue-green deployment to prod in 2 minutes is something I'll always miss.
* Compliance requirements? -> (micro)services. You need to host some data and workers in separate, guarded environments.
* Too many legacy third-party dependencies that might fail or stall whole processes? -> wrap them with a service.
* Different independent products in a company? -> services.
For everything else, use monoliths.
Unfortunately, what I sometimes see is that it's replaced by multiple interacting single points of failure, i.e. if any one of them fails, the whole Rube Goldberg architecture fails, and there are so many different things that can fail! ...which is clearly worse than just having a monolith.
The issue then becomes that teams don't have a good understanding of the expected impact due to an outage. They think microservices are helping to minimize the blast radius, but they really just don't know what the true blast radius is.
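To make the "many interacting single points of failure" point concrete: if a request has to traverse a synchronous chain of services and each one fails independently, their availabilities roughly multiply. A toy calculation (the 99.9% per-service figure and the independence assumption are mine, not from the comments above):

```python
# Toy illustration: a synchronous call chain is only as available as the
# product of the availabilities of every service it depends on.
# The 99.9% per-service figure is an assumption for the example.

def chain_availability(per_service: float, n_services: int) -> float:
    """Probability that all n independently failing services are up at once."""
    return per_service ** n_services

for n in (1, 5, 10, 20):
    print(f"{n:2d} services at 99.9% each -> {chain_availability(0.999, n):.4%}")
# prints roughly 99.90%, 99.50%, 99.00%, 98.02%
```

Each extra hop in the chain quietly eats into the availability budget, which is exactly the "blast radius nobody actually knows" problem.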
I also think people tend to strive for too much purity. It's okay if you want to avoid the replication and event bus complexity of microservices and choose a monolith-esque database architecture. Just split your APIs along business domains, leave your database as it is, and be done with it.
Just make sure you stick to the fundamental principles of microservices: scoped, fault tolerant, independently deployable. Fault tolerance does not 100% have to include database failures. Don't ever directly cascade API calls from one service to another.
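One way to read "don't directly cascade API calls" is: when service A would otherwise call service B synchronously inside a request, publish an event instead and let B catch up on its own schedule. A minimal in-process sketch of that shape, assuming made-up "orders" and "billing" services and using a plain queue as a stand-in for whatever broker you'd actually run:

```python
# Sketch: decouple two "services" with a queue instead of a synchronous
# call chain. In real life the queue would be a broker (Kafka, SQS,
# RabbitMQ, ...); here it's an in-process stand-in for illustration.
import queue
import threading

events: queue.Queue = queue.Queue()

def orders_service(order_id: str) -> dict:
    """Handles the user's request and returns immediately.

    It does NOT call the billing service; it just records what happened.
    """
    events.put({"type": "order_placed", "order_id": order_id})
    return {"status": "accepted", "order_id": order_id}

def billing_service() -> None:
    """Consumes events on its own schedule; an outage here delays billing
    instead of failing the original request."""
    while True:
        event = events.get()
        if event is None:          # shutdown sentinel for this demo
            break
        print(f"billing: invoicing {event['order_id']}")

consumer = threading.Thread(target=billing_service)
consumer.start()

print(orders_service("order-42"))  # succeeds even if billing is slow or down
events.put(None)                   # stop the consumer
consumer.join()
```

The point of the shape is that a billing outage degrades to "invoices arrive late" rather than "nobody can place an order".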
The biggest "why not" is that you should choose the architecture that best solves the problem. Don't treat architecture as a playground to try out random shit. Solve the business problem in the simplest way possible. In most scenarios you probably don't need isolated, fault tolerant, distributed services. I would guess that 75% of microservices implementations fall under the premature optimization anti-pattern.
However, I will say we've taken some considerations from the microservices discussion to heart and thought more about fault tolerance, CI/CD, and streamlining the deployment process, so we've definitely gotten some value from the zeitgeist.
That said, most of our apps end up being a fairly relaxed monolith with some service integrations here and there. And you know what? It's pretty nice sometimes :)
The promise of test-driven development is that there is a vanishingly small amount of code that is not under some sort of meaningful test coverage. Those tests have to target the smallest possible parts of the system to be efficient; otherwise the tests are just too brittle and slow.
Test-driven development works best for self-contained systems that represent the whole feature. Using it encourages architecting the system in this way.
There are three environments where it doesn't work: graphical user interfaces, monoliths, and distributed systems. The biggest issue with these is the huge state space to test and their comparative slowness. They are also usually expensive to set up and maintain.
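For what it's worth, the "smallest possible parts" point is easiest to see with a test that needs no environment at all. A throwaway example (the function under test is invented for illustration, not taken from anything above):

```python
# A test scoped to the smallest useful unit: no database, no network, no UI.
# `split_full_name` is a made-up example function.
import unittest

def split_full_name(full_name: str) -> tuple[str, str]:
    first, _, last = full_name.strip().partition(" ")
    return first, last

class SplitFullNameTest(unittest.TestCase):
    def test_splits_on_first_space(self):
        self.assertEqual(split_full_name("Ada Lovelace"), ("Ada", "Lovelace"))

    def test_single_word_has_empty_last_name(self):
        self.assertEqual(split_full_name("Ada"), ("Ada", ""))

if __name__ == "__main__":
    unittest.main()  # runs in milliseconds; a GUI or distributed-system test would not
```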
It's like how they embraced product teams. Yet in the past year it has been a constant game of musical chairs or hot potato, with managers reorganizing which team owns which app. The whole point of a product team is to give teams ownership and keep expertise with the app. We might as well be passing it off to a support team if they are doing this stuff.
I've split up an application for a project into separate containers with docker-compose. They share some environment state via a dotfile. I don't have to reinvent the wheel; I can just pull some services in and tag them so I can easily roll back and roll forward. I can offload storage with Docker's storage driver. The services talk through sockets, so it's easy to verify uptime. The services are implemented in different languages, because I don't believe there is one golden language that can solve all problems.
Is that microservices? I don't know, but it works really great and it doesn't feel like there's any overhead.
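The "talk through sockets, so it's easy to verify uptime" part can be as simple as attempting a TCP connect to each service. A small sketch, where the service names and host/port pairs are placeholders for whatever the compose file actually exposes:

```python
# Quick liveness probe: try to open a TCP connection to each service.
# Names, hosts, and ports are placeholders, not from the setup described above.
import socket

SERVICES = {
    "api":     ("127.0.0.1", 8000),
    "worker":  ("127.0.0.1", 8001),
    "storage": ("127.0.0.1", 9000),
}

def is_up(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in SERVICES.items():
    print(f"{name:8s} {'up' if is_up(host, port) else 'DOWN'}")
```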
Another example: I work on a team where two services use JSON Schema validation, and the schemas can't be reconciled. Getting them to agree takes so much overhead, because expectations differ, that 90% of the time in pull requests is spent discussing schema changes.
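To illustrate why those schema discussions eat so much time: once two services validate the same payloads against a shared JSON Schema, any field change has to land on both sides at once. A minimal sketch using the Python `jsonschema` package, with a schema invented purely for the example:

```python
# Two services validating the same payload against one shared schema:
# tighten or rename a field and both sides have to agree before either
# can deploy. The schema and field names below are invented.
from jsonschema import ValidationError, validate

SHARED_EVENT_SCHEMA = {
    "type": "object",
    "properties": {
        "user_id": {"type": "string"},
        "amount":  {"type": "number", "minimum": 0},
    },
    "required": ["user_id", "amount"],
    "additionalProperties": False,
}

payload = {"user_id": "u-123", "amount": 9.99}

# Producer side: refuse to emit anything the consumer would reject.
validate(instance=payload, schema=SHARED_EVENT_SCHEMA)

# Consumer side: the same check, so a unilateral schema change breaks one of them.
try:
    validate(instance={"user_id": "u-123"}, schema=SHARED_EVENT_SCHEMA)
except ValidationError as err:
    print(f"rejected: {err.message}")
```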
Yep, they introduce some new problems and by definition, more moving parts but if you're doing anything halfway complicated or "at scale", this is the only sane way to do it.
Docker was hyped a lot but it's a net win for me and for a good chunk of the industry. SPAs with pure API backends are better too. For me and what I do. I love using software defined infrastructure. I love when things are easy and reliable.
I love microservices.
But I also understand why many people don't and why they're a bad fit for some use cases. In some ways, they're kind of like NoSQL databases. If you're just doing microservices because they sound cool and modern, you're like all the folks who use MongoDB and complain that it doesn't work like Postgres. And then talk endless crap about MongoDB.
You're the problem. Not Mongo. Not Postgres. Not monoliths. Not microservices.
There are problems that are shared between monolith and microservice architectures, and there are problems that are different.
For the problems that are different, the ones faced in microservice architectures are considered "harder," and there are fewer battle-tested tools/ecosystems to help deal with those problems (though this second part is rapidly changing).
For the problems that are the same, adopting a microservice architecture will likely force you to solve those problems much earlier. This is dangerous because it takes effort which could be directed to improving your product instead, and you may find yourself solving them in a way which doesn't end up scaling for your organization.
I think they're great, but they require a level of consistent investment in tooling which is not in many organizations' DNA.
In real life, many companies break their apps into too many small microservices and end up with a "distributed ball of mud", which is way worse than the original monolith.
Microservices try to organize software so that complexity is minimized. In my experience, they completely fail to do so; rather, they actually increase complexity by putting related pieces of the system far apart from each other. I think it is better to "embrace the complexity" and concede that software development is difficult. Better developer tools and stronger developers are better ways of dealing with complexity than trying in vain to minimize it.
When a company gets large enough people start having time to write software that manages other software. And IMO microservices are a late-stage outcome of this.
Layers upon layers upon layers of abstraction being developed and maintained to help provide autonomy (within a certain context) while working with hundreds of other developers.
For the vast majority of use cases a few VMs or even bare metal cloud instances will be a lot more cost effective and stable.