We are working on a product that consists of two UIs (Eclipse and Web) and a Java runtime. It has been under development for more than ten years, and we have around 45,000 tests in total, spanning UI, system, integration, and unit tests. Only about 3,200 of those are unit tests, so less than 10%.
So over time we adopted a style of testing most things with integration and system tests, which makes sense, as we are heavily integrated with several surrounding systems (two application servers, Eclipse, multiple messaging brokers, different database backends, ...).
I think a mandatory 80% code coverage target for new unit tests is totally unrealistic, and the problem has already shown itself: two team members started writing tests that covered a lot of code, yet the tests stayed green when I went ahead and deleted production code from the covered classes. In other words, a lot of the behaviour was mocked away.
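To make that concrete, here is a minimal, hypothetical JUnit 5 / Mockito sketch of the pattern I mean (class and method names are invented, not from our codebase): the test executes the production method, so coverage tools count those lines, but because the collaborator is stubbed and the assertion only reflects the stubbed value, the real logic can be gutted without the test failing.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    // Hypothetical collaborator that would normally hit a database.
    interface DiscountRepository {
        double rateFor(String customerId);
    }

    // Hypothetical production class; finalPrice() is the logic we care about.
    class PriceService {
        private final DiscountRepository discounts;

        PriceService(DiscountRepository discounts) {
            this.discounts = discounts;
        }

        double finalPrice(String customerId, double basePrice) {
            double rate = discounts.rateFor(customerId);
            // Replacing this calculation with "return basePrice;" would not
            // break the test below.
            return basePrice - basePrice * rate;
        }
    }

    class PriceServiceTest {

        @Test
        void finalPriceAppliesDiscount() {
            // The collaborator is stubbed, so no real behaviour is exercised.
            DiscountRepository repo = mock(DiscountRepository.class);
            when(repo.rateFor("acme")).thenReturn(0.0);

            PriceService service = new PriceService(repo);

            // finalPrice() is executed, so line coverage goes up, but with a
            // stubbed rate of 0.0 the assertion holds even if the discount
            // logic is removed entirely.
            assertEquals(100.0, service.finalPrice("acme", 100.0));
        }
    }

The coverage number looks fine, but the test only verifies that the stub returns what the stub was told to return.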
Now I'd like to know three things:
1. What are your experiences in a similar situation?
2. Is there any scientific research on code coverage and its effect on quality?
3. What are good arguments for lowering the expected unit test code coverage?
Thanks, Sven
I prefer to measure coverage from integration tests with SonarQube; 80% is then a good threshold.
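For reference, a minimal sketch of such a setup, assuming JaCoCo collects coverage during the integration-test run (the project key and report path below are placeholders; the 80% threshold itself is not set in this file but as a quality gate condition in the SonarQube UI):

    # sonar-project.properties -- placeholder values, adjust to your build
    sonar.projectKey=my-product
    sonar.sources=src/main/java
    sonar.tests=src/test/java
    sonar.java.binaries=target/classes

    # XML coverage report written by the jacoco-maven-plugin after the
    # integration-test phase (the exact path depends on how the report
    # goal is configured in your build)
    sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco-it/jacoco.xml

That way the coverage figure reflects what the integration and system tests actually exercise, rather than how much was stubbed in unit tests.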