What you end up with is:
1. Tests that at least invoke code that might otherwise be overlooked but, because of the mocks, invoke it in a way that's meaningless (see the sketch after this list).
2. Twice as much code to update when doing any feature work, refactoring, ... or even bug fixing!
3. A false sense of security because "tests passed!"
4. A lot of wasted time writing tests for code you end up either throwing out or significantly refactoring before it ever becomes a remotely active part of the project.
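To make point 1 concrete, here's a minimal sketch of the pattern (the names, Moq, and xUnit are my own illustration, not anyone's actual codebase): the service is a thin pass-through over its dependency, so once that dependency is mocked, the test can only ever confirm the mock itself.

```csharp
using Moq;
using Xunit;

public interface IOrderRepository
{
    decimal GetTotal(int orderId);
}

// The "service" under test is a thin pass-through over the repository.
public class OrderService
{
    private readonly IOrderRepository _repo;
    public OrderService(IOrderRepository repo) => _repo = repo;
    public decimal GetTotal(int orderId) => _repo.GetTotal(orderId);
}

public class OrderServiceTests
{
    [Fact]
    public void GetTotal_ReturnsTotal()
    {
        var repo = new Mock<IOrderRepository>();
        repo.Setup(r => r.GetTotal(42)).Returns(100m);

        var result = new OrderService(repo.Object).GetTotal(42);

        // This only proves the mock returns what we told it to return.
        // No real behavior is exercised, yet coverage goes up and the
        // suite stays green.
        Assert.Equal(100m, result);
    }
}
```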
I'm jaded.
I LOVE good tests, but far, far too often they get in the way of real progress that actually matters to an end user. Tests are supposed to be worth their weight in gold. Too often they're a self-congratulating pat-on-the-back and a detriment to success.
Please, prove me wrong. I know you can, because this does NOT apply to everyone.
The overall test coverage of the code I'm currently unit testing is 63%. That does not mean the remaining 37% will ever get tested; it means around 63% of the codebase is worth testing. What's more relevant is the coverage of individual namespaces. For example, Services is at 100% coverage because everything in Services actually does something worth testing, while Models.DTO is more like 50%. I do not need to test every auto-implemented getter and setter.
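A quick illustration of the difference (the class names are hypothetical, not from my actual project):

```csharp
// Models.DTO: auto-properties only. There is no behavior here that can
// break on its own, so a test would just re-test the C# compiler.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

// Services: actual branching logic. This is the kind of code where
// driving coverage to 100% pays off.
public class DiscountService
{
    public decimal Apply(decimal total, bool isLoyalCustomer) =>
        isLoyalCustomer ? total * 0.9m : total;
}
```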
I view test coverage like a log curve: more is better, but the returns diminish quickly.
As for tooling: working in C#, I use JetBrains dotCover.
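For anyone who wants to try it, this is roughly the command-line setup; I'm quoting the flag names from memory, so verify them against the JetBrains docs:

```sh
# Install the dotCover command-line tool once.
dotnet tool install --global JetBrains.dotCover.GlobalTool

# Run the test suite under coverage and write an HTML report.
dotnet dotcover test --dcReportType=HTML --dcOutput=dotCover.html
```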
As others have mentioned, test coverage numbers are often meaningless or misleading, can lead to abuse, and don't contribute much to the overall quality of the product.
I use SonarQube/SonarCloud for the reports.
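Roughly like this for a .NET project; the project key and token are placeholders, and the parameter names should be double-checked against the SonarQube/SonarCloud docs:

```sh
# Assumes the dotnet-sonarscanner global tool.
dotnet tool install --global dotnet-sonarscanner

dotnet sonarscanner begin /k:"my-project-key" /d:sonar.token="$SONAR_TOKEN"
dotnet build
dotnet sonarscanner end /d:sonar.token="$SONAR_TOKEN"
```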