To me, the testing process seems quite cumbersome, as it consists of many steps, typically:

1. Creating an external cluster (say, in Docker or Kubernetes) from a CLI (or maybe a web interface);
2. Setting up the externally implemented microservices using a separate tool (e.g., Kubernetes manifests / Helm charts, or a docker-compose file);
3. Implementing and building mock containers and explicitly setting up all possible input/output requests;
4. Deploying the containers created in steps 2 and 3;
5. Setting up routing and networking between the main programs and the containers;
6. Running the tests manually or in a test framework (e.g., Jest or Pytest);
7. Creating and running whole-system tests, in the same or a different test framework.
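To make steps 2, 4, and 5 concrete, here is a rough sketch of the kind of docker-compose file I have in mind (all service names and paths are made up for illustration): the service under test is wired to a hand-built mock of its dependency via an environment variable.

```yaml
# Hypothetical docker-compose.yml: service under test plus a mock dependency.
services:
  orders:                # the real service being tested (step 2)
    build: ./orders
    environment:
      # routing/networking (step 5): point the service at the mock
      PAYMENTS_URL: http://payments-mock:8080
    depends_on:
      - payments-mock
  payments-mock:         # hand-built mock container (steps 3 and 4)
    build: ./mocks/payments
    ports:
      - "8080:8080"
```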
I am wondering whether this is indeed how developers typically do integration/component testing for microservices, or whether there is a better way.
I am aware that some of this process can be automated using a tool such as Jenkins. I would be very interested to hear about your current approach to automated testing, or if there are tools you would recommend.
Still, I believe the ideal is to ship fast testing "stubs" alongside the actual services (so they can share interface and validation code, ensuring they never go out of sync), combined with a small number of end-to-end tests running against integration/staging and production environments.
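A minimal Python sketch of that stub idea, using only the standard library (all names are illustrative): the stub imports the same validation function the real service would use, so if the contract changes, both break at once and the stub cannot silently drift out of sync.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import Request, urlopen

# Shared contract code -- in practice this would live in a package
# imported by both the real service and the stub.
def validate_payment(payload: dict) -> None:
    if not isinstance(payload.get("amount"), int) or payload["amount"] <= 0:
        raise ValueError("amount must be a positive integer")

class PaymentsStub(BaseHTTPRequestHandler):
    """Fast in-process stub reusing the real service's validation."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        try:
            validate_payment(body)          # same check as the real service
        except ValueError:
            self.send_response(400)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"status": "accepted"}')

    def log_message(self, *args):           # keep test output quiet
        pass

def start_stub() -> HTTPServer:
    # Port 0 asks the OS for any free port; read it back via server_port.
    server = HTTPServer(("127.0.0.1", 0), PaymentsStub)
    Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_stub()
    url = f"http://127.0.0.1:{server.server_port}/pay"
    req = Request(url, data=json.dumps({"amount": 42}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        print(resp.status)
    server.shutdown()
```

Because the stub runs in-process on a random free port, a Pytest suite can start it in a fixture and exercise the service under test against it without any containers at all, reserving the heavyweight cluster setup for the handful of end-to-end tests.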