Why do we need test coverage reporting?
Coverage exists as an aid for what should already be happening: risky logic is considered and proven from an engineering standpoint before, or while, it's being written. When you force yourself to consider how you would test a piece of logic, you automatically try to make it simpler, so your tests aren't forced to become gargantuan monstrosities just to cover all of the preconditions.
How testing gets ignored
Traditional engineering uses specs and schematics out of necessity. You don’t want to dump tens of thousands of dollars into a prototype only to have to throw it away and start from scratch.
With software engineering, the cost is harder to see. Code is cheap. You can get up and running pretty fast, try things out, throw it all away. The problem is when the majority of the problems are already solved problems, and exploration is still the only approach used. Or when the exploratory code is what gets pushed to prod, because you can't sell what you can't ship, so why take extra time to write specifications, and code that reflects those specifications? Or when the engineers are pressured by the product owner and management to ship now, on the assumption that it can be fixed in post-release patches.
Good code is at least as expensive as any blueprint or schematic design, but not as expensive as having none at all.
There’s nothing wrong with exploratory programming, but it’s easy to ignore the cost of building without specifications when you’re greenfielding a new project.
So we write to specifications, in the form of unit tests, to protect ourselves from being unable to refactor as thoroughly as needed when the prototype is what got pushed to prod, the interfaces are all being consumed, and everything's too tightly coupled to break apart.
A test is the engineer’s demonstration of what the engineer thinks the code is doing. Obviously, the closer to the reality of what the code is doing and the more thorough the tests, the better. For us mere mortals, it’s impossible to retain all of the thought processes and reasoning that went into coding something a certain way without documenting it somehow. And the best documentation is that which can empirically prove that what it says should happen actually happens.
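As a minimal sketch of that kind of empirical documentation, assuming JUnit 5 (InvoiceCalculator and its discount rule are invented for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical production class: prices an order, discounting totals over 100.
class InvoiceCalculator {
    double total(double unitPrice, int quantity) {
        if (quantity < 0) {
            throw new IllegalArgumentException("quantity must be non-negative");
        }
        double subtotal = unitPrice * quantity;
        return subtotal > 100.0 ? subtotal * 0.9 : subtotal;
    }
}

class InvoiceCalculatorTest {
    private final InvoiceCalculator calculator = new InvoiceCalculator();

    @Test
    void appliesTenPercentDiscountToTotalsOverOneHundred() {
        // The test name and assertion record the reasoning that would
        // otherwise live only in the author's head: 60 * 2 = 120 > 100,
        // so the 10% discount applies and the total is 108.
        assertEquals(108.0, calculator.total(60.0, 2), 0.001);
    }

    @Test
    void rejectsNegativeQuantities() {
        assertThrows(IllegalArgumentException.class, () -> calculator.total(10.0, -1));
    }
}
```

The test names alone read as a spec; the assertions prove the spec holds on every build.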
Designing from the angle of unit testing results in more concise and readable logic
You’re chugging along writing code, and finally, your output matches the project requirements. Great! Ship it to QA.
Then the project requirements get added to, and you have to match these new requirements. So you chug along and write more code and the output matches the new requirements. Great! Ship it to QA.
Then QA sends a bug report indicating the first requirement is now broken. You have no unit tests telling you exactly where problems now exist, so you pore over the code, rebuilding (inaccurately) your version of the model of how the application works in your mind. Some indeterminate time later, you find the solution to make both requirements pass. Great! Ship it to QA.
QA is happy, and it rolls into production. Then you get hit by a bus. A big one. You no longer have to worry about unit tests, or anything. Great!
The company you no longer work for (you got hit by a bus) decides to add a new feature to your project. After spending weeks trying to decipher the pile of spaghetti you left behind, they decide to rip it out and start from scratch. All of your effort was in vain. And you got hit by a bus.
“But,” you say, “if I’ve been hit by a bus, how is this my problem?”
It's not. But what if Employee #2341 wrote a pile of spaghetti, then got hit by a bus, and now it's your job to add that new feature, and you have to tell your boss it'll cost a boatload for you to upgrade it or a boatload for you to rewrite it from scratch?
Although I think the problem is more real when the you of today has to remember what the you of three months ago was thinking.
COL shower thought: blog post using an interview transcript with the preface “names have been changed to protect identities and statements completely refabricated to better reinforce the arguments made in this post”
Fghj [11:41 AM] yea but Dunning-Kruger, people think they know more than they actually do
Asdf [11:42 AM] I guess that’s true but we’ve also been jerked around by changing requirements and which one will be the one you remember?
Fghj [11:42 AM] yea good point
Asdf [11:43 AM] that's the issue I've faced more than anything
I totally remember a ton of stuff that should be better tested/documented
but sometimes I remember the old way, to my detriment
oh yeah, that requirement changed, and someone not me coded it
Fghj [11:46 AM] [uploaded an image] that reminded me of something like this I saw in a book once
Keep things small, and have them do only one thing
I prefer the delegate method pattern. If I need to do more than a single operation in a method, that method becomes a delegate method, calling the multiple operations as separate methods. This means when I need to test the delegate method, I just mock out the target methods and make sure they get called. That’s it. The target method tests are also simplified, since all I have to worry about is the single operation, usually against the data being passed in.
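Here's a minimal sketch of the pattern, assuming JUnit 5 and Mockito; OrderProcessor, its target methods, and Order are hypothetical names for illustration:

```java
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class Order { }

// Hypothetical: the delegate method only sequences; each target method
// does a single operation.
class OrderProcessor {
    void process(Order order) {
        validate(order);
        persist(order);
        notifyWarehouse(order);
    }

    void validate(Order order) { /* single operation */ }
    void persist(Order order) { /* single operation */ }
    void notifyWarehouse(Order order) { /* single operation */ }
}

class OrderProcessorTest {
    @Test
    void processDelegatesToEachTargetMethod() {
        OrderProcessor processor = spy(new OrderProcessor());
        Order order = new Order();

        // Stub the targets so the delegate test needs no preconditions...
        doNothing().when(processor).validate(order);
        doNothing().when(processor).persist(order);
        doNothing().when(processor).notifyWarehouse(order);

        processor.process(order);

        // ...then just prove each delegation happened.
        verify(processor).validate(order);
        verify(processor).persist(order);
        verify(processor).notifyWarehouse(order);
    }
}
```

Because the targets are stubbed out, the delegate test needs no real preconditions at all; proving the delegation happened is the whole test.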
Compare that to a method that does two things. In order to get to the second thing, the first thing has to execute successfully, which means that to test the whole method, preconditions for BOTH operations need to be set up. And unless your test names are generic enough, or both operations can be tested with the same set of preconditions, you'll probably end up writing duplicated precondition setup code for the non-happy-path tests.
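For contrast, a hypothetical two-operation version of the same method, reusing Order from the sketch above:

```java
// Hypothetical contrast: two operations fused into one method.
class TangledOrderProcessor {
    void process(Order order) {
        // Operation 1: validation, inlined.
        if (order == null) {
            throw new IllegalArgumentException("order is required");
        }
        // Operation 2: persistence, inlined. Any test targeting a failure
        // path down here must first build an order that survives the
        // validation above, so every non-happy-path test repeats that setup.
        saveToDatabase(order);
    }

    void saveToDatabase(Order order) { /* persistence details elided */ }
}
```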
No single coverage metric is more important than the rest
Because if you only look at class coverage for your requirements, you can cover one line in each class and pass, while critical logic is ignored altogether.
If not all branches/lines/methods/instructions are being considered for testing, that leaves a lot of room for error.
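As a hypothetical sketch of how a class-level metric gets gamed (all names invented, assuming JUnit 5 and a JaCoCo-style class counter): the single test below marks the whole class as covered at the class level, while its branch coverage sits at zero.

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;

// Hypothetical: real branching logic that a class-level metric can miss.
class FeeCalculator {
    double lateFee(int daysLate) {
        if (daysLate <= 0) {
            return 0.0;        // never exercised by the test below
        }
        if (daysLate > 30) {
            return 50.0;       // never exercised either
        }
        return daysLate * 1.5; // nor is this
    }
}

class FeeCalculatorTest {
    @Test
    void touchesTheClass() {
        // Executing the constructor is enough to count FeeCalculator as
        // "covered" at the class level, while branch coverage stays at 0%.
        assertNotNull(new FeeCalculator());
    }
}
```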
Accountability and assurance
Coverage and unit tests combined give us bargaining power. We can say: look, we have done our due diligence in making sure the application is as good as we can make it from an engineering standpoint. Unit tests and coverage reports mean you don't have to blindly trust us that it's done and ready to ship, then figure out who to blame when the integration fails. They allow the product owner to more easily identify where an issue is. It may be in the application, and adding a simple test case can prove that out.
Peace of mind while developing
Unit testing gives us confidence that we can detect the impacts of the changes we are making across the entire application, without having to hold the entire model of that application in our minds with each change we make.
If I update the logic in widgetA, but not widgetB, I want to know that widgetAggregator (bad name) still behaves as expected, and I also want to know if obscureWidgetOperationWeForgotAbout suddenly starts to fail. We can look at the preconditions and expected output and quickly see what went wrong.
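A minimal sketch of that safety net, borrowing the widget names above and assuming JUnit 5 (the Widget interface and its value() method are invented for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical shapes for the widgets named above.
interface Widget {
    int value();
}

class WidgetAggregator {
    int aggregate(Widget a, Widget b) {
        return a.value() + b.value();
    }
}

class WidgetAggregatorTest {
    @Test
    void aggregatesBothWidgetValues() {
        // The preconditions (inputs) and expected output are spelled out,
        // so a change in widgetA's logic that breaks aggregation points
        // straight back here.
        Widget widgetA = () -> 2;
        Widget widgetB = () -> 3;

        assertEquals(5, new WidgetAggregator().aggregate(widgetA, widgetB));
    }
}
```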