Test all or Test small?

Recently a tester friend of mine and I discussed one of the classic problems of testing. You are about to release your next version: you have tested all areas on a “Release Candidate” (RC) build, and then you find one issue big enough to ask for a new build. Now you get another RC build. Should you test it all as if it were the first RC build? Should you start from where you left off on the last build? Or should you test only the fixed issue? Should you test all, or test small?

As hardcore testers, we are taught to be suspicious of things. We apply a “guilty until proven innocent” rule to everything we test. And given that we know from experience that in the software world one change can break many unrelated things, the first preference seems to be Test all.

But here is the dilemma. The whole set of tests takes many days to complete, and time is short. And all those people called Project Managers, Release Managers, Product Owners, etc. are asking you every hour: “How are the testing results? Can we release now?”

Let’s see if we have a case for Test small, i.e. testing only what has changed. I believe we do, and here is what you need to achieve it:

Know what has changed

The biggest challenge is that unless the testing team has very good insight into what is built in the system and how it is built, they will continue to consume all builds, including RCs, as black boxes. As the figure below shows, if you receive two black-box builds, you can’t tell what has changed. But if you can see inside those boxes, you can.

[Figure: two opaque builds side by side versus two transparent builds in which the change between them is visible]

How can you look inside the system?

You need to know how your source control is maintained. Is it trunk-based development or a release-branch system? Get access to the changes made to the code so that you know exactly what happened between the two builds. Many build systems can generate a ‘diff’ report between two builds; work with the programmers on your team to have those reports produced with every build. There are also tools that can compare two installed versions of an application, and you can even write your own tool to find out exactly what has changed. Take control of this system rather than waiting for the information to arrive from others.
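A build diff does not need to be sophisticated to be useful. As a rough sketch of the write-your-own-tool idea (the function names and the choice of SHA-256 hashing here are my own, not any specific tool), you could hash every file in two installed builds and report what was added, removed, or changed:

```python
import hashlib
import os


def snapshot(root):
    """Map each file's path (relative to root) to a hash of its contents."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes


def diff_builds(old_root, new_root):
    """Return (added, removed, changed) file lists between two installs."""
    old, new = snapshot(old_root), snapshot(new_root)
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, removed, changed
```

Running this against the previous RC and the new RC tells you, at file level, where to point your “test small” effort.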

Now that you know exactly what has changed, your test strategy for the latest RC can revolve around that change alone rather than a “Test all” strategy.

Unit tests are the safety net

Yes, you read that right. If your programmers write unit tests, and you as a testing unit befriend them to support this activity, then whenever all the tests that passed on the last RC also pass on the new RC, things are probably alright in those areas. The bigger the scope of the unit testing effort, the safer your releases.
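To make the safety-net check concrete, here is a minimal sketch (the result format, a mapping of test name to pass/fail, is a hypothetical one, not any particular framework’s output) of comparing unit test results between the last RC and the new one to flag regressions:

```python
def regressions(previous_results, current_results):
    """Return tests that passed on the last RC but fail on the new RC.

    Both arguments map a test name to True (passed) or False (failed).
    Tests missing from the new run are not reported as regressions.
    """
    return sorted(
        name
        for name, passed in previous_results.items()
        if passed and current_results.get(name) is False
    )
```

An empty result means the new RC holds everything the last one did, which is exactly the evidence “test small” relies on.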

History repeats itself

Keeping track of what happened in the release cycle up to the latest RC is always a good indicator of what might go wrong in this build. Say some of your modules have worked fine in all previous rounds of testing and the change is not in those modules: you can safely skip them for the latest RC. However, if some modules created problems here and there during the release cycle, adding them to the “Test small” scope might be worthwhile.
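The two signals above, what changed and what has been historically unstable, can be combined into a “test small” scope. A minimal sketch, with hypothetical module names and a made-up defect-count threshold:

```python
def modules_to_test(changed_modules, defect_history, bug_threshold=1):
    """Pick a 'test small' scope for the new RC.

    changed_modules: modules touched between the two builds.
    defect_history: maps module name to bugs found this release cycle.
    Any module at or above bug_threshold is kept in scope even if unchanged.
    """
    unstable = {
        module
        for module, bug_count in defect_history.items()
        if bug_count >= bug_threshold
    }
    return sorted(set(changed_modules) | unstable)
```

The threshold is a judgement call per team; the point is that history widens the scope beyond the raw diff without growing it back into “test all”.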

How do you tackle this problem? Do you do a Test all, or have you discovered some form of Test small?



6 responses to “Test all or Test small?”

  1. Jagdish says :

    Indeed a very relevant question that’s encountered frequently. And you have suggested a reasonable approach that will help testers. However, we know very few programmers write unit tests, so we have our own module-specific high-level manual tests (say, a module-specific smoke test) that are quicker to run and cover most of the common module functionality (the 80/20 rule).


    • zafar says :

      Very good article. From my experience I would say test what the requirement asks you to test. But unless you understand the system, any change you make can affect other parts of it, and then you have to test all, because you don’t know the implications of the change on the rest of the system. Looking at the bigger picture, it’s fine that more resource time will be used and extra cost incurred, but is that a bigger cost than losing a major client because some untested part of the system caused an issue and cost them millions?


      • majd says :

        Thanks, Zafar, for your comment. I can see your point, but be sure that you don’t just blindly apply the “test all” strategy. You can expand the “test small” strategy based upon your judgement of what could go wrong, as I mentioned in the History repeats itself section. Another note: if “test small” becomes so big that it is essentially “test all”, we lose the purpose of being quick and efficient as a testing unit.


    • majd says :

      Nice thought, Jagdish: if unit testing is missing, we should have a quick round of smoke tests that can yield results. Just a caution that doing this shouldn’t stop us from pushing programmers to write unit tests.


Trackbacks / Pingbacks

  1. Reading Recommendations # 20 | Adventures in QA - June 10, 2015
  2. Only test changes | Knowledge Tester - February 13, 2018
