What causes what?
Recently, a few colleagues and I were co-editing a Word document, and one of us was assigned as the CTO (Chief Typing Officer). To everyone's surprise, including the CTO's, copy/paste was not bringing in the associated formatting. All of us offered suggestions from our own experience with Word, but none of them worked, and we finished the editing without the formatting. The CTO (being the CTO) told us the next day that it was caused by the Skype add-in; none of us believed it, but it was true. Newer versions seem to work fine now, though.
That got me thinking about the debate over 'test only the affected area'. By this I mean that on Agile projects, we get little time to test all features, so we tend to do a 'cause-and-effect analysis': we look at the changes in the commit messages or build logs and then decide which areas we need to test for a particular build.
It also reminded me of a famous quote from a very good friend of mine in the testing profession, who says:
“Software is funny. In the automobile world, if you change the wheel of your vehicle it cannot make your engine malfunction. In the software world, you can’t say that with any guarantee.”
(The original photo is here: http://www.telegraph.co.uk/culture/tvandradio/5164220/How-many-paramedics-does-it-take-to-help-the-BBC-change-a-wheel.html )
Usually I would disagree, arguing that there has to be some connection if a changed wheel is messing with the engine, and that if we as testers know the architecture of the system and have a grey-box view of the world, we can figure out what to test. I would consider running all tests for a single change a waste of time, and our discussion would continue.
Now what do we do with examples like the Skype add-in? I still believe there must have been some connection, and that the testing team could find out which subset of Word functionality should be run for the add-in. Or should we suggest they test the whole of MS Word before they ship?
What are your views on how much testing should be performed for a change?