Test all or Test small?
Recently, a tester friend of mine and I discussed one of the classic problems of testing. You are about to release your next version: you have tested all areas on a “Release Candidate” (RC) build, and then you find an issue big enough to warrant a new build. Now you get another RC build. Should you test it all, as if this were the first RC build? Should you start from where you left off on the last build? Or should you test only the fixed issue? Should you test all or test small?
As hardcore testers, we are taught to be suspicious of things. We apply the “guilty until proven innocent” formula to everything we test. And since we know from experience that in the software world one change can upset many other things, the first preference seems to be Test all.
But here is the dilemma. The whole set of tests takes many days to complete, and time is short. And all those people called Project Managers, Release Managers, Product Owners, etc. are asking you every hour: “How are the testing results? Can we release now?”
Let’s see if we have a case for Test small, i.e., testing only what has changed. I believe we do, and here is what you need in order to achieve it:
Know what has changed
The biggest challenge is that until the testing team has good insight into what is built into the system and how it is built, they’ll continue to consume all builds, including RCs, as black boxes. As the figure below shows, if you receive two black-boxed builds, you can’t tell what has changed between them. But if you can see inside those boxes, you can.
How can you look inside the system?
You need to know the details of how your source control is maintained. Is it trunk-based development or a release-branch system? Get access to the changes made to the code so that you know exactly what happened between the two builds. Many build systems can generate a ‘diff’ report between two builds; work with the programmers on your team to have those reports produced with every build. There are tools that can compare two installed versions of an application, and you can even write your own tool to find out exactly what has changed. Take control of this process rather than waiting for the information to arrive from others.
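As a concrete illustration of the “write your own tool” idea, here is a minimal sketch that compares two installed builds by hashing every file in each build directory and reporting what was added, removed, or changed. The directory layout and report format are my assumptions, not anything prescribed by the post.

```python
# Sketch of a "build diff" tool: compare two installed build directories
# by file content hash. Paths and report keys are illustrative.
import hashlib
from pathlib import Path

def snapshot(build_dir):
    """Map each file's path (relative to the build root) to a content hash."""
    root = Path(build_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

def diff_builds(old_dir, new_dir):
    """Classify files as added, removed, or changed between two builds."""
    old, new = snapshot(old_dir), snapshot(new_dir)
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(f for f in old.keys() & new.keys() if old[f] != new[f]),
    }
```

Running `diff_builds("rc1/", "rc2/")` gives you a file-level picture of what the new RC actually touched, which is exactly the input a “Test small” strategy needs.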
Now that you are sure what has changed, your test strategy for the latest RC can focus on just that change rather than on “Test all”.
Unit tests are the safety net
Yes, you read that right. If your programmers write unit tests, and you as the testing unit befriend them to support this activity, then you can check whether all tests that passed on the last RC also pass on the new RC; if they do, things are likely alright in those areas. The bigger the scope of the unit testing effort, the safer your releases.
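The check described above, comparing unit-test results between the last RC and the new one, can be sketched in a few lines. The test names and the result format here are illustrative assumptions; in practice you would feed in results exported from your test runner.

```python
# Sketch of the "safety net" check: flag tests that passed on the last
# RC but no longer pass on the new RC. Result format is illustrative:
# a mapping of test name -> True (passed) / False (failed).

def find_regressions(last_rc_results, new_rc_results):
    """Return tests that passed on the last RC but not on the new one."""
    passed_before = {t for t, ok in last_rc_results.items() if ok}
    return sorted(t for t in passed_before if not new_rc_results.get(t, False))

last_rc = {"test_login": True, "test_checkout": True, "test_report": False}
new_rc = {"test_login": True, "test_checkout": False, "test_report": False}

print(find_regressions(last_rc, new_rc))  # prints ['test_checkout']
```

An empty regression list does not prove the build is safe, but a non-empty one tells you exactly where “Test small” must look first.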
History repeats itself
Keeping track of what has happened in the release cycle up to the latest RC is always a good indicator of what might go wrong in this build. If some of your modules have worked fine in all previous test runs and the change is not in those modules, you can safely skip them for the latest RC. However, if some modules have caused problems here and there during the release cycle, adding them to your “Test small” scope might be worthwhile.
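The selection rule above combines two signals: what changed in this build, and which modules misbehaved earlier in the cycle. A minimal sketch of that rule, with module names and failure counts as illustrative assumptions:

```python
# Sketch of the "Test small" selection rule: test a module if it changed
# in this build, or if it caused problems earlier in the release cycle.
# Module names and failure counts are illustrative.

def select_modules(all_modules, changed, past_failures):
    """Pick the modules worth testing on the latest RC."""
    return sorted(
        m for m in all_modules
        if m in changed or past_failures.get(m, 0) > 0
    )

modules = ["billing", "login", "reports", "search"]
changed = {"billing"}                      # e.g. from a build diff report
past_failures = {"search": 3, "login": 0}  # bugs found this release cycle

print(select_modules(modules, changed, past_failures))  # prints ['billing', 'search']
```

Stable, unchanged modules such as `reports` and `login` drop out of scope, which is exactly the trade the post argues for: spend the limited RC time where risk is concentrated.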
How do you tackle this problem? Do you do a Test all, or have you discovered some form of Test small?