Sorry about the delay since my last post.
Now that I've freed up some spare time, I hope to get back to the "Testing: Why Bother?" series. So stay tuned.
Anyhow, I wanted to share a quick insight about what to do when you need fast feedback on a feature that's hard to write a good test for. I'll do it through an example.
I've been pairing with my buddy on a cool feature of our project for some time now.
The feature is basically a kind of "Device Prettifier": it receives a local device path, performs some inquiries on it, and when its __repr__ (the toString() equivalent) is called, it prettily displays all the data it gathered from those inquiries.
So we actually test-drove the DevicePrettifier, unit-testing everything by creating mock devices that return whatever responses we want the DevicePrettifier to handle. Basic TDD + mocking; it went very smoothly and was a lot of fun.
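A unit test in that style might look something like the following. This is a minimal sketch, not our actual code: the DevicePrettifier stand-in and its inquire() interface are hypothetical simplifications.

```python
import unittest
from unittest import mock


class DevicePrettifier:
    """Minimal stand-in for the real class; names here are hypothetical."""

    def __init__(self, device):
        self.device = device

    def __repr__(self):
        # Gather data via an inquiry and display it prettily.
        vendor = self.device.inquire("vendor")
        return "Device(vendor={})".format(vendor)


class DevicePrettifierTest(unittest.TestCase):
    def test_repr_contains_vendor_from_inquiry(self):
        # Mock device: returns whatever we want the prettifier to handle.
        device = mock.Mock()
        device.inquire.return_value = "ACME"
        prettifier = DevicePrettifier(device)
        self.assertIn("ACME", repr(prettifier))
        device.inquire.assert_called_with("vendor")


# Run the test case programmatically (no sys.exit, unlike unittest.main()).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DevicePrettifierTest)
result = unittest.TextTestRunner().run(suite)
```

The mock lets us tailor-make each inquiry response to the exact behavior we want to drive into the implementation.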
The Tricky Part
Unit testing here was not enough for us. We mocked the inquiry responses based on known examples, and we pretty much tailor-made them to the logic we wanted to implement. That's a perfectly reasonable way to implement a feature using TDD. However, we never tested how the DevicePrettifier behaves on a REAL device, across various operating systems and environments.
So... the correct way to go about this is to write an integration test: one that uses real devices and checks how the DevicePrettifier works on them.
The problem: without a proper testing framework that lets us automatically allocate hosts and real devices to test on, such an integration test would take a lot of effort - effort we couldn't afford, as we needed to release this feature on a tight deadline.
What Could We Do?
We could have made a manual test... But then we would lose the important regression capability we gain when our tests (including integration tests) are automatic.
Or we could have done something a little bit in between: an integration test with no asserts.
Are You Crazy?
"No asserts?? What do you mean? What do you test when you do that? If a human needs to go over the results of the test run - it's not automatic, and it's of no use."
OK, I knew you guys would say that. But there are actually two things to notice here:
1. When testing your code for the first time, it's better to write an automatic test with no asserts than a manual main(). This way you already have the structure of a test, so you can easily add asserts to it later. Also, when you run it for the first time and manually check the output, you can quickly find the problems and write more specific tests for them - or even go back to the unit tests and alter them accordingly.
2. Even a test with no asserts automatically tests something. It tests that no exceptions are thrown. This is a VERY important thing to test for, and - let me tell you - it finds a decent number of bugs!
So, here's the test we made (code almost untouched before uploading):
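The shape of such an assert-free test is roughly the following. This is a minimal sketch under assumptions, not our exact code: list_local_device_paths() is a hypothetical helper (stubbed here so the snippet runs anywhere; the real one would enumerate actual devices on the host), and the DevicePrettifier stand-in is simplified.

```python
import logging
import unittest

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)


def list_local_device_paths():
    """Hypothetical helper: enumerate real device paths on this host.

    Stubbed so the sketch runs anywhere; a real version would scan
    e.g. /dev or query the OS for attached devices.
    """
    return ["/dev/sda", "/dev/sdb"]


class DevicePrettifier:
    """Minimal stand-in for the real class; names here are hypothetical."""

    def __init__(self, path):
        self.path = path

    def __repr__(self):
        return "Device at {}".format(self.path)


class AssertFreePrettifierTest(unittest.TestCase):
    def test_prettify_all_local_devices(self):
        # No asserts: the test passes as long as no exception is thrown
        # while inquiring and rendering each real device. A human inspects
        # the logged output on the first few runs.
        for path in list_local_device_paths():
            prettifier = DevicePrettifier(path)
            log.info("%s -> %r", path, prettifier)


# Run the test case programmatically (no sys.exit, unlike unittest.main()).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AssertFreePrettifierTest)
result = unittest.TextTestRunner().run(suite)
```

The whole "assertion" is implicit: if any inquiry or __repr__ blows up on a real device, the test fails and points straight at the offending device path.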
We simply logged on to a few hosts of different types, ran this test, and it found the most important bugs we had. It may be surprising, but it was really cost-effective!
Now we can add this test to all of our continuous integration slaves, and if, for instance, a new type of device is suddenly connected to one of the slaves, we'll immediately know whether our prettifier failed to parse the inquiry responses it returned. Coolness!
The next time you feel like writing a complicated integration test but give up because you just don't have the time, consider the "assert-free" approach. It might be easier, save you a lot of time, and still find most of the bugs!
Stay tuned for more posts in the "Testing: Why Bother?" series, and follow me on Twitter :).