HACKER Q&A
📣 sveri

What is a good percentage of Unit and Integration test coverage?


Basically the title. To give a little bit more context: our team recently introduced SonarQube, and while I was OOO they decided they wanted unit test coverage of at least 80% for new code.

Now, we are working on a product that consists of two UIs (Eclipse and web) and a Java runtime. It's been developed for more than ten years, and we have around 45,000 tests in total, spanning UI, system, integration, and unit tests. Out of all of them, 3,200 are unit tests, so less than 10%.

So over time we adopted a style of testing a lot of stuff with integration and system tests, which makes sense, as we are heavily integrated into several systems around us (two application servers, Eclipse, multiple messaging brokers, different database backends, ...).

I think it is totally unrealistic to require 80% unit test coverage for new code, and it has already shown. Two team members started writing tests that covered a lot, but the tests stayed green when I went ahead and deleted production code from the lines they supposedly covered. Which means a lot of stuff was mocked.
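To illustrate the pattern I mean, here is a minimal sketch (assuming JUnit 5 and Mockito; the OrderService and PriceClient names are invented for the example, not from our codebase). The test executes the production code, so coverage goes up, but it only asserts against the mock, so part of the logic can be deleted and the test stays green.

    // Over-mocking sketch: coverage without meaningful assertions.
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class OrderServiceTest {

        interface PriceClient {                       // stand-in for a real external system
            double priceOf(String sku);
        }

        static class OrderService {                   // "production" code under test
            private final PriceClient prices;
            OrderService(PriceClient prices) { this.prices = prices; }
            double calculateTotal(String sku, int qty) {
                return prices.priceOf(sku) * qty;     // the "* qty" can be deleted without failing the test
            }
        }

        @Test
        void coversTheCodeButChecksOnlyTheMock() {
            PriceClient prices = mock(PriceClient.class);
            when(prices.priceOf("ABC")).thenReturn(10.0);

            new OrderService(prices).calculateTotal("ABC", 3);  // lines executed -> coverage goes up

            verify(prices).priceOf("ABC");            // only the stubbed interaction is asserted,
                                                      // nothing about the actual result
        }
    }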

Now, I'd like to know three things. 1. What are your experiences in a similar area? 2. Is there any scientific background on code coverage and its outcomes? 3. What are some good arguments for lowering the expected unit test coverage?

Thanks, Sven


  👤 speedgoose Accepted Answer ✓
Personally, I don't see the point of doing unit testing for the sake of unit testing when you actually need integration testing. Mocking external services is a waste of time, in my humble opinion. It's super time-consuming, and you can mock things wrongly or end up with different behaviour in your mock.

I prefer to feed integration test coverage into SonarQube; 80% is then a good threshold.
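As a rough sketch of what I mean (assuming Testcontainers with a PostgreSQL backend and the JDBC driver on the classpath; the names are illustrative, not a specific stack): the test below talks to a real database instead of a mock, so the coverage it produces reflects actual integration behaviour, and there is no hand-written mock to drift out of sync.

    // Integration test sketch against a real containerized database.
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.PostgreSQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    @Testcontainers
    class RepositoryIntegrationTest {

        @Container
        static final PostgreSQLContainer<?> POSTGRES = new PostgreSQLContainer<>("postgres:16");

        @Test
        void queriesTheRealDatabase() throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    POSTGRES.getJdbcUrl(), POSTGRES.getUsername(), POSTGRES.getPassword());
                 ResultSet rs = conn.createStatement().executeQuery("SELECT 1")) {
                assertTrue(rs.next());   // behaviour verified against the real backend
            }
        }
    }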


👤 jjgreen
Depends heavily on the codebase, I guess. The last two places I've worked at (startups in London) have been very TDD-oriented, and we get 95%+ coverage on new code. This is mostly Ruby, so we have the ease of RSpec, of course.