The Downward Spiral of Test-Later Automated Testing
I’ve seen this pattern a number of times in my years as a software developer, and it’s always hard to watch:
(1) “We don’t need automated tests on this stuff. It’s way too simple.”
(The codebase grows increasingly out of control. Changes become scary to make, and manual testing takes longer and longer.)
(2) “It turns out that we need tests, but this stuff isn’t very testable, so the most focused tests we can write are integration or e2e tests.”
(The test suite starts off well but quickly becomes slow and flaky. When tests fail, it takes a lot of debugging to figure out why. The tests also require complicated setup, which makes them hard to write.)
(3) “Automated testing is really expensive compared to the benefit! We’re probably not going to bother to write tests for simple things.”
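Step (2) is where the structural damage shows up. Here's a hypothetical sketch (the function names and schema are invented for illustration) of the kind of code that leaves integration tests as the only option, next to a version whose core logic a fast, focused test can hit directly:

```python
import sqlite3

def total_owed_untestable(customer_id: int) -> int:
    # Hidden dependency: this reaches straight into a real database,
    # so the only way to exercise it is to stand one up -- an
    # integration test by necessity.
    conn = sqlite3.connect("production.db")
    rows = conn.execute(
        "SELECT amount FROM invoices WHERE customer_id = ?", (customer_id,)
    ).fetchall()
    return sum(amount for (amount,) in rows)

def total_owed(amounts: list[int]) -> int:
    # Same logic with the database pushed out to the caller: the core
    # is a pure function that a unit test can call with plain data.
    return sum(amounts)

# A focused test needs no database at all:
assert total_owed([100, 250, 50]) == 400
```

Nothing about the second version is clever; it's just that the decision was made while the code was still cheap to shape. Once everything looks like the first version, the "most focused tests we can write" really are integration tests.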
It’s obvious why this pattern keeps occurring: each step makes sense given the state the previous one created.
The cycle is also hard to break because it’s self-reinforcing. Working around the damage creates more damage, and the things that look like obvious solutions from an “in the spiral” point of view only make the problem worse.
Undoing the damage takes a lot of expertise and investment. Most importantly, it gets more expensive the longer it goes on. You waste time:
- avoiding necessary changes/improvements that are too scary
- writing hard-to-write tests
- debugging huge sweeping e2e tests
- maintaining overly complex code (“You shouldn’t have to change the code just to test it!”)
- doing more manual testing
- rerunning flaky tests
- investigating more obscure testing tools and strategies
- waiting for test runs to finish
- waiting for releases to be approved by humans
- dealing with defects in areas “that aren’t worth the effort to test” or are “too simple to test”
Over the long run, cutting quality measures almost always costs you more than you gain.