2 MINUTE READ | May 29, 2014
When Green Tests Fail
Test Driven Development has what’s called a red (failing), green (passing), refactor cycle. You start with a failing test, writing a basic behavior specification for some bit of code that doesn’t exist yet. Then you write just enough code to make that test pass. Then you change the test to reflect new functionality and refactor the code to make it pass. Repeat ad infinitum.
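To make the cycle concrete, here’s a minimal sketch in Python with pytest (the slugify function and its tests are hypothetical, not from any real project):

```python
import re

# Red: start with a failing test for code that doesn't exist yet.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Green: write just enough code to make that test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Red again: a new test describes new functionality and fails...
def test_slugify_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# ...so the code is reworked until both tests pass (green again).
# (Redefining slugify here just shows the before/after of one loop.)
def slugify(title):
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
```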
Even if you don’t do test-first development, this red-green-refactor cycle is addictive. Instant feedback that your code works! My tests are green! Let’s ship it. Not so fast.
This is what Kent Beck called test fidelity. If all tests pass and we deploy the code, sometimes stuff still just isn’t going to work. Sometimes the failure is a solvable issue: a difference between the development and production environments, for instance, or integration tests that fail to cover the seams between components.
Sometimes those failures are not so solvable. Sometimes they are things tests could not have caught. Maybe a user does something unexpected and breaks your application. Maybe the scale of the data in a production database doesn’t agree with an algorithm that worked fine on 5,000 records.
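As a hypothetical illustration of that scale problem: a duplicate check like the one below passes green against a small test fixture, but the nested loop makes it quadratic, which only shows up once production hands it millions of rows.

```python
# Passes its tests on a 5,000-record fixture, but compares every
# record against every other one: O(n^2), pathological at scale.
def find_duplicates(records):
    duplicates = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if a == b and a not in duplicates:
                duplicates.append(a)
    return duplicates

# A single set-based pass is O(n) and behaves the same at any scale
# (assuming the records are hashable).
def find_duplicates_fast(records):
    seen, duplicates = set(), set()
    for record in records:
        if record in seen:
            duplicates.add(record)
        seen.add(record)
    return list(duplicates)
```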
This is why having someone else there to try to break your code is valuable. It’s why business-oriented logging is valuable, and it’s one of the main reasons QA people have gigs.
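What business-oriented logging might look like, as a hypothetical sketch: log the domain event (which customer, which order) rather than just the mechanics, so that when a green-tested deploy misbehaves the logs say something a human can act on.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders")

@dataclass
class Order:
    id: int
    customer_id: int
    total: float

def place_order(order):
    # ...persist the order...
    # Log the business event, not just "query OK": when something
    # breaks in production, this says which customer and order were
    # affected, not merely that a statement ran.
    logger.info("order placed: customer=%s order=%s total=%.2f",
                order.customer_id, order.id, order.total)

place_order(Order(id=1, customer_id=42, total=19.99))
```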
Every time any person or thing interacts with your code, the result is feedback on whether that code works or doesn’t. That’s a huge opportunity for any developer to get better.
It’s okay to have confidence in your code’s quality and utility based on tests. But realize that sometimes green tests fail to give you the proper feedback. Seek it elsewhere.
Photo by wetwebwork.