So, after talking about the famous too-much-time excuse, I am going to address the next excuse in the Testing: Why Bother? series: We don't write tests because it's too hard!
I will try to make this post a little more practical than the previous one, but since it's still aimed more at the why than the how, I'm not going to dive too deep into "how to write tests for beginners" techniques. If you feel you need some prior background, I can recommend the following great books:
So... I guess in your pretty world testing is a piece of cake...
Once again, you guess wrong, my friends...
Testing isn't easy at all. Writing tests is a delicate art, and it can be very difficult, especially for beginners.
However, if you consider the alternative, you'll eventually figure out that writing bug-free code without a single line of test code is even harder!
One might say: "Well... still... it's not cost-effective! You go through all that hard work just to discover a few bugs you would have found out about later anyhow?"
I have two answers to this claim:
1. Later? How later?
If you have read my previous post, you probably remember that finding out about a bug "later" can mean really late, like when the product is already in the customer's hands. In that case it can cost you WAY more, so cost-effectiveness isn't really a valid excuse here.
2. It's not THAT hard!
After you practice a little, read some books, and become more comfortable writing tests, you'll see it's not that bad, and it will become a natural skill of yours.
However, there still might be cases where you really think, "this is a piece of code I simply CAN'T test." In that case, you've come to the right place! I'll try to tackle the common difficulties you might be having.
The Common Difficulties
Here's a list of common difficulties drawn from my personal experience and from the anti-testers around me:
- The code has too many dependencies: "I need to instantiate 50 different classes to write a measly test case."
- There's a lot of environment to initialize: "Before I can invoke this method, I need to acquire a DB connection, and run 4 different processes..."
- The code behaves in an unpredictable way: "I have 20 threads running here; I should expect a different result on each run!"
- The code involves a lot of manual operations: "Most of this logic is invoked when a button is pressed in the GUI, I can't automate a test for it!"
Tightly coupled code is evil! Regardless of testing!
This is true for the first two bullets. If your UltraUberDoesEverythingManager is really hard to test because its constructor receives 20 different classes, or worse, it instantiates 20 different classes by itself, that's a real code smell.
Remember, if it's difficult to test, it is difficult to maintain and use!
If you find it hard to isolate the class from its context and write a good unit test for it (you have to pass real instances, initialize a real DB connection, or insert the USB dongle just for the test to run), it means that later, when you want to extract the useful piece of algorithm or reuse the objects, you'll face the exact same difficulties. That's just bad OOP.
Decouple! Use interfaces so you can replace them with mocks.
Consider the following code:
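The original code sample is missing here, so the following is a minimal Python sketch of what such tightly coupled code might look like. The names serializeString(), dump(), and ConnectionManager come from the discussion below; everything else is my own invention:

```python
class ConnectionManager:
    """A horrible singleton that owns the one true DB connection."""
    _instance = None

    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def open_connection(self):
        # In real code this would connect to an actual database,
        # which is exactly why a unit test can't run it.
        raise RuntimeError("no database available!")


class DbSerializer:
    def dump(self, value):
        # Reaches straight for the global singleton, so a test has no
        # way to swap the connection for a fake.
        connection = ConnectionManager.get_instance().open_connection()
        connection.write(value)


def serializeString(s):
    DbSerializer().dump(s)
```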
This code is very hard to test. We simply want to verify that serializeString() stores the string in the DB using the dump() method, but to do so, we have to open a connection to the DB through the horrible ConnectionManager singleton.
Instead, consider the following:
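Again, the sample itself didn't survive; here is a hedged Python sketch of the decoupled version, with a Serializer interface injected into serializeString(). Names other than Serializer, serializeString(), and dump() are assumptions:

```python
from abc import ABC, abstractmethod


class Serializer(ABC):
    """The interface we put in the middle: anything that can dump a value."""

    @abstractmethod
    def dump(self, value):
        ...


class DbSerializer(Serializer):
    def __init__(self, connection):
        # The connection is injected instead of grabbed from a singleton.
        self._connection = connection

    def dump(self, value):
        self._connection.write(value)


def serializeString(s, serializer):
    # The caller chooses the Serializer, so a test can pass a mock.
    serializer.dump(s)
```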
This way, we pass in our own implementation of the Serializer interface. We can use one of the many mocking frameworks (like Mockito for Java, or Michael Foord's Mock for Python) and easily verify that serializeString() called our mock's dump() method.
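With the dependency injected, the test itself is only a few lines. Here's a minimal sketch using Python's standard unittest.mock, assuming a serializeString(s, serializer) signature where the serializer is passed in:

```python
from unittest.mock import Mock


def serializeString(s, serializer):
    serializer.dump(s)


def test_serialize_string_calls_dump():
    mock_serializer = Mock()
    serializeString("hello", mock_serializer)
    # The whole assertion: dump() was called exactly once, with our string.
    mock_serializer.dump.assert_called_once_with("hello")


test_serialize_string_calls_dump()
```

No DB, no singleton, no environment setup: the mock records the call and the assertion checks it.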
Notice that this was an example of taking existing code and making it more testable by decoupling the classes and introducing an interface in the middle. If we had tested our code first, whether by applying full TDD techniques or simply a test-first approach, we would have caught this design defect earlier, and it would have saved us some time...
But I'm not saying you MUST use TDD in order to test your code. Even without it, and without tests whatsoever, your code will be better if it's testable.
Preparing for the unpredictable
Well, if you remember the list of excuses, you might recall the "it's hard to tell how the code behaves each time, so I can't write a test for it" excuse. We'll talk more about that subject when we get to it.
But for now, I'll focus on a few useful tips about multithreaded testing:
- Add sleeps + polling to make your assertions more predictable. You can use the following delayedAssert idiom:
- Multithreaded testing is possible! I guarantee that every deadlock and race condition can eventually be isolated and tested with the proper sleeping and locking.
- Use mocks instead of some of the threads that don't belong to the scenario you're testing.
- Don't forget to have some kind of ExceptionHandler that catches exceptions thrown on other threads. You don't want one of your threads to crash while your test keeps going without noticing it.
- Search for frameworks that may help you. I couldn't find any I really liked, but there are a few of them out there.
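The delayedAssert idiom mentioned in the first bullet can be sketched as a small polling helper. The function name, timeout, and poll interval here are my own choices:

```python
import time


def delayed_assert(condition, timeout=5.0, poll_interval=0.05):
    """Poll `condition` until it returns True, or fail after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(poll_interval)
    raise AssertionError("condition not met within %.1f seconds" % timeout)
```

A test can then start its threads and poll for the expected state instead of guessing one fixed sleep:

```python
import threading

results = []
threading.Thread(target=lambda: results.append(42)).start()
delayed_assert(lambda: 42 in results)  # passes as soon as the thread has run
```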
Testing the GUI (and other manual operations)
Well, that's a tough one, I must admit. I must also say I'm not an expert at testing GUIs. However, as tough as it can be, there are a few points to remember:
- Think about whether you REALLY need to test the UI. Perhaps it's enough to use decoupling so that the UI part is trivial, and you've already tested all the logic behind it.
- Write your own in-house testing framework. For example, if part of your system includes a VUI (voice user interface), write a framework that injects WAV files into your system, etc. It might be a lot of work, but eventually it can really pay off and free up some QA engineer's time for harder-to-find bugs.
Remember: High level automation is better than no automation!
Unit tests are the most effective way to test your code: they are fast to run, when one fails you know exactly which place in the code caused it, and they make your design better.
However, sometimes unit tests are not suitable for what you are testing.
That doesn't mean you should give up on an automated regression suite! Write tests at a higher level!
For example, suppose you are trying to test the behavior of a Windows networking driver when a network cable is disconnected. Unit testing this is practically impossible by conventional means. Instead, you can write some kind of script that turns off the relevant port on the networking switch and verifies the behavior is still OK. This might not be perfect, but it's better than no automation for the bug at all.
Are you finally finished with your testing nonsense now?
Sorry, you'll have to put up with some more excuses :) The next one is going to be: "It's QA's job, and they're going to test the code anyhow".
If for some reason you decided that after these last posts you still want to read the next ones, you might want to subscribe to this blog and follow me on Twitter.