
On Demand Testing

In an earlier post, I explained how DevOps testing consists of three main types: Scheduled Testing, On-Demand Testing and Triggered Testing. I covered how we schedule different types of testing in that same article, so now let me dig into the details of On-Demand Testing.

Keeping with the notion that testing is an activity that is performed often, rather than a phase towards the end, it is necessary to configure Continuous Testing. That requires setting up automated testing jobs in a way that they can be called whenever needed. This is what we call On-Demand Testing. It is contrary to the common notion of having testers ready to do testing “on demand” as features get ready to test.


For example, we have a set of tests that we call “Data Conversion tests”, as they convert data from our old file format to the new file format. As you can imagine, a lot of the defects uncovered by such tests are data dependent, and a typical bug looks like “Data conversion fails for this particular file due to this type of data”. Now, as a Developer takes on that defect and fixes this particular case, she’d like to be sure that the change hasn’t affected the other datasets in the conversion suite of tests. The conversion testing job is set up in a way that a Developer can run it at any given time.

I shared earlier that we are using Jenkins for Scheduled Testing. So one way for an individual team member to run any of the jobs on demand is to push a set of changes, log into the Jenkins system and start a build of the automated testing job, say Conversion Testing for the above example. This is good, but it might be too late, as the changes have already been pushed. Secondly, the Jenkins VMs are set up in a different location from where the Developer is sitting, and feedback time can be anything between 2-3 hours.
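As an aside, a Jenkins build can also be queued from a script through Jenkins’ REST API rather than through the web UI. The sketch below is hypothetical: the server URL, job name and credentials are placeholders, not the ones from our setup.

```python
# Hypothetical sketch: queueing a Jenkins job on demand via its REST API.
# Jenkins queues a build on POST to /job/<name>/build (or
# /job/<name>/buildWithParameters when parameters are supplied).
import base64
import urllib.parse
import urllib.request

JENKINS_URL = "http://jenkins.example.com:8080"  # placeholder server

def build_trigger_request(job_name, user, api_token, params=None):
    """Build (but do not send) the POST request that queues a Jenkins build."""
    endpoint = "buildWithParameters" if params else "build"
    url = "%s/job/%s/%s" % (JENKINS_URL, urllib.parse.quote(job_name), endpoint)
    data = urllib.parse.urlencode(params).encode() if params else None
    req = urllib.request.Request(url, data=data, method="POST")
    # Basic auth with a Jenkins API token
    token = base64.b64encode(("%s:%s" % (user, api_token)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

# urllib.request.urlopen(req) would actually queue the build.
req = build_trigger_request("Conversion Testing", "dev", "secret-token")
print(req.full_url)
```

Sending the request (and handling Jenkins’ CSRF crumb, if enabled) is left out to keep the sketch short.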

Remember, tightening the feedback loop is a main goal of DevOps. The quicker we can let a Developer know that the changes work (or don’t work), the better we are positioned to release more often with confidence.

So in this case, we exploited our existing build scripts, which are written in Python and driven by a set of XML files that define “parts” that can be executed by any team member. We added a shortened dataset that has enough diversity and yet is small enough to run within 15-20 minutes. Then we added a part to the XML file that can run this conversion job on any box, at any given time, by anyone on the team.
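To make the idea concrete, here is a minimal sketch of how such XML-defined “parts” might be driven from Python. The part name, dataset names and XML layout are invented for illustration; our real build scripts are more involved.

```python
# Minimal sketch of XML-defined "parts" driven by a Python runner.
import xml.etree.ElementTree as ET

# Invented example of a part definition; a real file would name the
# converter binary, options, output locations, etc.
PARTS_XML = """
<parts>
  <part name="conversion-quick">
    <dataset>legacy_small.dat</dataset>
    <dataset>unicode_fields.dat</dataset>
    <dataset>sparse_records.dat</dataset>
  </part>
</parts>
"""

def run_part(xml_text, part_name):
    """Find the named part and return the datasets it would convert.

    In the real scripts each dataset would be fed to the converter;
    here we just collect the names to show the control flow.
    """
    root = ET.fromstring(xml_text)
    for part in root.findall("part"):
        if part.get("name") == part_name:
            return [ds.text for ds in part.findall("dataset")]
    raise KeyError("no such part: %s" % part_name)

print(run_part(PARTS_XML, "conversion-quick"))
```

Any team member can then invoke the runner with a part name on their own box, which is exactly the “on demand” property we were after.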

Coming back to the same Developer fixing a Conversion defect: after fixing the bug, she can now run the above part on her own system. Within half an hour she’ll have results, and if they look good, she’ll push her changes with confidence that the next round of Scheduled Testing with the larger dataset will also pass.

Please note that we have made most of our testing jobs On-Demand, but we are having a hard time with a few. One of them is Performance Testing, because it is done on a dedicated machine in a controlled environment for consistency of results. Let’s see what we can do about it.

Do you have automated testing that can be executed On-Demand? How did you implement it?


Pakistan Software Quality Conference 2018

Doing something for the first time is very hard. But doing it again and again with the same energy and passion is even harder. That’s why, as we witnessed a very successful Pakistan Software Quality Conference (PSQC’18) on April 7th, we felt even more accomplished than at the first edition, PSQC’17.

Last year it was in Islamabad, and this year the biggest IT event in terms of Quality Professionals’ attendance moved to the cultural capital of Pakistan: Lahore. Beautifully decorated in a national color theme, the main auditorium of the FAST NU Campus witnessed 200+ amazing people join us from various cities across Pakistan.

After recitation of the Holy Quran, event host Sumara Farooq welcomed the audience and invited PSTB President Dr. Muhammad Zohaib Iqbal for the opening note. Dr. Zohaib recapped the journey of this community-building event and shared the demographics of the audience. He emphasized the need for everyone to act upon the Conference theme, “Adapting to Change”.

We had two quick Keynote sessions in the first half. The first one was on “Software Security Transformations” by Nahil Mahmood (CEO Delta Tech), who spoke about the grave realities of the current software security situation in Pakistan. He then urged everyone to take part in making every piece of software secure enough in accordance with various industry standards. Read more about those in the PDF: PSQC18_NahilMahmood_SoftwareSecurityTransformation.

The second talk was on “Quality Engineering in Industry 4.0” by Dr. Muhammad Uzair Khan (Managing Director Quest Lab), who first explained the notion of Industry 4.0. He envisioned a future where systems are tested in an increasingly automated way, with exploratory manual testing continuing in the background. He also rightly cautioned that any prediction of the future is a tricky business.

A few honorable guests then spoke at the event, including Professor Italo, Honorary Consul of Portugal Feroz Iftikhar, and the HOD of the FAST NU CS Department. Shields were then presented to the Sponsors of the event by Dr. Zohaib and myself. Contour Software was represented by Moin Uddin Sohail and Stewart Pakistan (formerly CTO24/7) by their HR Head, Afsheen Iftikhar.

A tea break was now needed to refresh participants for the more technical stuff which was coming their way. This time was well utilized by all to meet strangers who quickly became friends.

(More photos covering the event are coming soon at our facebook page)

The second session had five back-to-back talks:

  • “Performance Testing… sailing on unknown waters” by Qaiser Munir, Performance Test Manager from Stewart, in which, after giving some definitions, he shared a case study on how a specific client got the insights it needed from Performance Testing. Full slides here: PSQC18_QaiserMuneer_PerformanceTesting
  • “Agile Test Manager – A shift in perspective” by Ahmad Bilal Khalid, Test Manager from 10Pearls, who travelled from Karachi for the event. ABK, as he likes to be called, recalled his own transformation from a traditional Test Manager to a Test Coach who is more of an enabler. His theme of experienced Testers becoming Dinosaurs and not helping new ones learn new stuff hit home and resulted in quite a fruitful discussion. Read more here: PSQC18_AhmadBilalKhalid_TestManager-ChangingTimes
  • “Agile Regression Testing” by Saba Tauqir, Regression Team Lead from Vroozi, who shared her current work experience where they have a dedicated team for Regression Testing. This also sparked a debate within the audience as to how much Regression Testing can be sustained in Agile environments. See her talk here: PSQC18_SabaTquqir_RegressionTesting
  • “To be Technical or not, that is the question” by Ali Khalid, Automation Lead at Contour Software, which was perhaps the star talk of the day. He took up the story of a hypothetical tester, “Asim”, and how he became a Technical Tester through four lessons. Easing up the learning with some funny clips and GIFs, Ali gripped the audience and conveyed his message strongly, which included building an attitude of designing algorithms and enjoying solving problems. Full slides here: PSQC18_AliKhalid_ToBeTechnical
  • “Power of Cucumber” by Salman Saeed, Automation Lead from Digitify Pakistan, who talked about his journey into automation through that very tool. He explained its different features, the Gherkin language and the tools needed to run it, and shared a piece of code showing a sample Google search test case. He urged all to use powerful tools like Cucumber to begin their automation journeys. He also promised to share code with whoever contacts him, so feel free to bug him. His slides are here: PSQC18_SalmanSaeed_PowerofCucumber

A delicious lunch was waiting in the Cafeteria which was basically an excuse to learn from each other while enjoying the food. I could see many people catching up with Speakers to ask their follow up questions and some healthy conversations around it.

The audience was welcomed back with three more talks in the afternoon session:

  • “Distributed Teams” by Farah Gul, SQA Analyst at Contour Software, another speaker from Karachi. She first explained how different locations and time zones create the challenge of working together as a team. She shared some real examples of how marketing campaigns failed in a foreign country due to language barriers. At the end she suggested some ways to curb these challenges, which included understanding the culture, spending more time in face-to-face communication and asking for clarity. Slides are here: PSQC18_FarahGul_DistributedTeams
  • “Backend Testing using Mocha” by Amir Shahzad, Software QA Manager at Stella Technology, who started off his talk with the ever-rising need for testing backends. He explained how RESTful APIs can be tested using Mocha, with some sample code. He also mentioned other libraries that can be used for better assertions and for publishing HTML reports. His talk is here: PSQC18_AmirShahzad_Mocha
  • “ETL Testing” by Arsala Dilshad, Senior SQA Engineer at Solitan Technologies, who shared her first-hand experience of testing ETL solutions. After providing an overview of her company’s processes, she explained how data quality, system integration and other testing are needed to deliver a Quality solution. Read more details here: PSQC18_ArsalaDilshad_ETL Testing

Then came the best part of the day. We experimented with a new segment called “My 2 cents in 2 minutes”, which gave participants the chance to come on stage and share any challenge they are facing in their profession. Inspired by the 99-second talks at TestBash, this proved to be a marvelous way to engage the audience. Around 20 awesome thoughts were presented by Quality Professionals who talked on a variety of topics. I do plan to write some follow-up posts on the stuff that was brought up there, as it would be unjust to sum it up here in a few lines.

Another tea break was needed to defeat the afternoon nap and seeing some Samosas (and other eateries) being served with tea resulted in many happy faces.

We were then back for the final and perhaps the best talk of the day: “Melting Pot of Emotional and Behavioral Intelligence” by Muhammad Bilal Anjum, Practice Head QA & Testing at Teradata, who has more than a decade of experience in Analytics. Bilal gave some examples of how the current situation of a potential customer can be predicted from the available data. For example, Telco data combined with Healthcare and other sources can be used to predict how likely a person is to buy some health solution. He then explained how culture plays a key role in human behaviors and why Industry Consultants are in demand for jobs like the above. At the end he threw out some ideas on how such solutions can be tested.

With all talks finished, I was asked to close the day, being the General Chair of PSQC’18. I took this opportunity to thank the sponsors, partners, organizers (Qamber Rizvi, Salman Saeed, Adeel Shoukat, Ali Khalid, Mubashir Rashid, Muhammad Ahsan, Amir Shahzad, Ayesha Waseem, Salman Sherin, Ovyce Ashraph), speakers and audience, and made a point to mention how collaboration can produce results that can never be surpassed by individuals. To spark some motivation for participants to try out the wonderful ideas presented during the day, I used Munnu Bhai’s famous Punjabi poem with the punch line “Ho naheen jaanda, karna painda aye” (It never happens by itself, you have to do it).

Long live Pakistan Software Quality community and we’ll be back with more events through the year and yes PSQC’19 in the Spring of next year!


Participate in “State of the Testing Survey 2018”

The only way to improve yourself and your craft is to reflect on the state of affairs. The #StateOfTesting Survey gives us exactly that opportunity: testers from across the globe give their valuable feedback and then gain value from this collective wisdom.

I have been taking part in the survey for some time, and I see that not many testers from Pakistan are doing so. Given that our community is on the rise and we had our first ever testing conference, it’s time to get in touch with the testing community of the world!

The link to the survey is:

The survey is brought to us by PractiTest and TeaTimeWithTesters magazine. You can find the results of previous surveys here.

Thanks for the help and let’s make (testing) world better than today!


Code Coverage Dos and Don’ts

Code Coverage is a good indicator of how well your tests cover your code base. It is measured through tools that run your existing set of tests and then produce coverage reports with file-level, statement- or line-level, class- and function-level, and branch-level details.

Since the code coverage report gives you a number, the game of numbers kicks in, just like any other numbers game. If you set hard targets, people will find ways to hit them, and at times the number means nothing. Here are my opinions, based upon experience, on how to best use Code Coverage in your project:

Do run code coverage every now and then to guide new unit test development

Think of code coverage as a wind indicator flag on your unit testing boat. It can guide you where to maneuver your boat based upon the results. As Martin Fowler notes in this piece:

It’s worth running coverage tools every so often and looking at these bits of untested code. Do they worry you that they aren’t being tested?

The point is not just to get the list of untested code; the question is whether we should write tests for that untested code.


In our project, we measure code coverage for functions and spit out a list of functions that are not tested. The testing team then does not order the developers to write tests against them. The testing team simply suggests writing tests, and the owner of that code prioritizes the task based upon how critical that piece is and how often it is exercised by the Users.
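That reporting step can be sketched in a few lines. The function names and call counts below are invented; in practice they would come out of whatever coverage tool you run.

```python
# Sketch of the "spit out untested functions" step: given a mapping of
# function name to how many times tests executed it, list the ones never hit.

def untested_functions(function_hits):
    """Return the functions a coverage run never executed, sorted by name."""
    return sorted(name for name, hits in function_hits.items() if hits == 0)

# Invented coverage data standing in for a real tool's report
coverage_report = {
    "convert_header": 12,
    "convert_records": 40,
    "repair_corrupt_index": 0,   # never hit by any test
    "export_legacy_format": 0,   # never hit by any test
}
print(untested_functions(coverage_report))
```

The testing team can then suggest (not order) tests for the listed functions, and the code owner prioritizes by criticality and usage, as described above.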

Dorothy Graham suggests in this excellent talk that coverage can be either like “butter” or like “strawberry jam” on your bread. You decide whether you want “butter”-like coverage, i.e. cover all areas thinly, or “strawberry jam” coverage, i.e. cover some areas in more depth.

Do not set a target of 100% code coverage

Setting a coverage goal is in itself disputed and is often misused, as Brian Marick notes in this paper, which has been the foundation of much Code Coverage work since. Also, anything that claims 100% is suspicious; for example, consider the following statements:

  • We can’t ship unless we have 100% code coverage
  • We want 100% of reported defects to be addressed in this release
  • We want 100% of tests to be executed in each build

You can easily see that a 100% code coverage target leads to the “test-all fallacy”, implying that we can test it all. Brian suggests in the same paper that 90 or 95% coverage is good enough.

We have set a target of 90% function coverage, but it is not mandatory for release. We put this information on the table along with other testing data, like test results, occurrence of bugs per area, etc., and then leave the decision to ship to the person who is responsible. Remember, the job of testing is to provide information, not to make release decisions.

Yes, there is no simple answer to how much code coverage we need. Read this for amusement and to see why we get different answers to this question.

Do some analysis on the code coverage numbers

As numbers can mean different things to different people, we need to ask stakeholders why they need code coverage numbers and what they actually mean when they ask for code to be covered.

We asked this question, and the answer we got was to do a test heat analysis on our code coverage numbers. It gives us the following information:

  • Which pieces are hard to automate? Which are easy?
  • Which pieces should be tested next? (as stated in the first Do)
  • Which pieces need more manual testing?
  • How much effort is needed for Unit testing?
  • ….

Do use tools

There are language- and technology-specific tools. For our C++ API, we have successfully used Coverage Validator (licensed, but very reasonably priced) and OpenCppCoverage (a free tool), both of which extract information by executing GoogleTest tests.

Do not assume covered code is tested well

You can easily write a test that covers each function or each statement without testing it well, or even without testing it at all.
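To make that concrete, here is a deliberately bad (and entirely made-up) example: a test that executes every line of a toy converter, giving 100% coverage, while asserting nothing at all.

```python
# A toy converter: the hypothetical old format stores ints with a 'v' prefix.

def convert(value):
    """Convert a value from the (invented) old format to an int."""
    if value.startswith("v"):
        return int(value[1:])
    return int(value)

def test_convert_covers_but_checks_nothing():
    # Every branch of convert() runs, so a coverage tool reports 100%...
    convert("v42")
    convert("7")
    # ...but with no assertions, even a completely wrong result would pass.

test_convert_covers_but_checks_nothing()
print("coverage achieved, nothing verified")
```

A coverage report would mark `convert` fully covered, yet this test can never fail, which is exactly why covered must not be read as tested well.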

Along with the function-wise code coverage that I mentioned above, we have a strong code review policy, which includes reviewing the test code. We also write many scenario-level tests that do not add to the coverage but cover the workflows (the orders in which functions will be called), which are more important to our users.

Brian summarizes it nicely in the aforementioned paper:

I wouldn’t have written four coverage tools if I didn’t think they’re helpful. But they’re only helpful if they’re used to enhance thought, not replace it.

How have you used Code Coverage in your projects? What Dos and Don’ts would you like to share?

6th Islamabad Testers Meetup

If there were ever a time to test the passion of the Software Testing community in Pakistan, it was last Saturday, when resilient members attended the 6th Islamabad Testers Meetup in big numbers despite all the traffic challenges. The event, hosted by MTBC at their campus in collaboration with PSTB, saw professionals of various backgrounds discuss and learn a lot of new ideas on the day.

The event was a much-awaited one, as after the successful PSQC’17, activities had restarted to gear up towards PSQC’18. Umer Saleem, as host of the day, welcomed everyone in the morning, introduced the event agenda and handed over the stage for the first talk.

Kalim Ahmed Riaz, Senior SQA Engineer from Global Share, presented his thoughts on “Ways to Improve Software Quality”. Kalim, who maintains his own blog on testing, quoted several everyday examples to explain his views, including buying vegetables and the knowledge needed to define requirements. He then suggested multiple ways to cure common Quality diseases, including “treating Testers as Clients”. By this he meant that not only do Testers need to step up to think and act like real Clients, but management also needs to acknowledge issues reported by the Testing team as real ones, rather than postponing them citing that testers are not real users. His slides are here: IsbTestersMeetup_Ways_improve_Quality_Kalim

The next presentation was handled well by a storyteller, Farrukh Sheikh, Lead QA Engineer at Ciklum. Farrukh shared his thoughts on “Adaptation”, a skill which he believes is a must for all testers. He urged everyone to get out of the “deceptive illusion” in which they believe themselves to be strong, well-built testers, and instead learn the new skills of the trade. He suggested reading a lot, practicing new methodologies and learning to code as a few ways to build the muscles for tackling the challenges of tomorrow. Full slide deck: IsbTestersMeetup_Adaptation_Farrukh


After the talks, we experimented with an “Open House Discussion”, which was very well received. I was handed the task of moderating it around the selected topic (chosen based upon registered participants’ feedback): “Challenges in Implementing Test Automation”. The format was that anyone from the audience would share a challenge he or she is facing, and then everyone in the audience was free to give input on how to address that challenge. The discussion was around the following challenges shared by the members present:

  • Not finding time to do automation
  • How to stabilize automation when everything is changing?
  • Automation failing on different configurations, or keeping configurations up to date
  • What’s next once the first round of automation is done?

There were many suggestions offered by the experienced and uniquely qualified testers in the hall, which ranged from “talking to management to buy time”, “involving Programmers in the team to be part of the automation project”, “including automation tasks in Sprint planning”, “keeping an open eye on the changes within and outside your organization”, “being selective in what to automate” and so on. The discussion was well worth it, and I plan to write a full blog post on these topics. It was heart-warming to see that we as a community are now moving forward to discuss such challenges, and hopefully next time we will have solved these problems and will be ready for the next set.

Dr. Zohaib Iqbal, President of PSTB, was then invited to give updates from PSTB. He shared the lessons learned so far in holding such events and highlighted various programs, including CTFL certification, the ISTQB partner program and the accredited training program. He invited all to be part of the upcoming PSQC’18, happening in Lahore in the Spring of next year.

The stage was then handed over to Adeel Sarwar, CTO of MTBC, for a closing note. Adeel briefly talked about MTBC’s vision, including their wonderful initiatives to incorporate fresh college graduates into their workforce and how this model has worked really well for them. He raised his concern that people in IT do not read enough, which reminded me of my campaign for the “Certified Book-Reading Tester”. He made it clear to everyone that we need to raise the bar to improve the quality of Software Quality professionals. He thanked the audience and shared some ideas on future collaborations in which MTBC can take part.

Shields were presented to the speakers, and certificates were given to the wonderful MTBC organizing team, who were well applauded. We then moved outside for tea and snacks in the open air. The food was as delicious as the discussions, and a lot of new ideas were shared by the participants. New friends were made and old friendships were revived. The conversations were never-ending, but as with every beautiful meeting, it ended too soon for all of us.

Thanks again to the hosts, PSTB support and above all the “cheetah” testers who always respond to such events. Together, let’s make Pakistan testing community even stronger!

3 illusions of Software Testing

Jerry Weinberg is my favorite author on the subject of Software Testing; I have posted about lessons from his Errors book and shared my thoughts on some types of Bugging. Lately I was very happy to finish yet another book by him, “Perfect Software and Other Illusions about Software Testing”, which had been on my reading list for long due to its excellent reviews.

Now, having spent almost 15 years in the Software Testing domain and overall 20 years in the pursuit of Software Quality in general, my first impression was that the illusions Jerry talks about would be ones that I’m familiar with and/or have dealt with. But Jerry surprised me with his wealth of knowledge and interesting examples from his illustrious journey as a software professional. I learned a lot of lessons and am sharing the top 3 with you.

  1. Pinpointing problems is not Testing

Some activities that I always thought to be Testing’s responsibility are actually in a grey area, where they have to be shared between Testers and Programmers. Pinpointing a problem after it is found is one of them.

One of the early teams that I worked with required that when Testing reported a bug in the Tracking System, they should mention the first major version in which it was introduced. For example, if a Tester finds a bug in v3.4, the Tester would check if it existed in v2.0, and in v1.0 if the answer for 2.0 is no. That is an activity that Jerry calls Pinpointing, and he suggests differentiating it from Testing and Debugging.

Even in my current project, because we (the Testers) have the code and are able to debug an issue, our Data conversion issues are reported after pinpointing and debugging. Hence a considerable amount of a Tester’s time goes into locating the problem code after finding the problem. This gets lumped in with Testing, whereas it needs separate attention.

In Jerry’s words, as he summarizes this as a Common Mistake and suggests a solution:

Demanding that Testers locate every fault: This is totally a Developer’s job, because developers have the needed skills. Testers generally don’t have these skills, though at times, they may have useful hints.

  2. Providing Information is hard

Jerry talks in much detail about the Tester’s role as an information provider. He suggests using the Satir Interaction Model:

-> intake -> meaning -> significance -> response ->

There are separate chapters dedicated to each of the above, and I’d encourage any Tester who is serious about their job to understand this model by reading the book. Different techniques are discussed for improving information intake, how different people will take different meanings from the same observation, how to know which information is significant, and finally how to make the best response accordingly.

  3. Product reviews are actually testing

In fact, Jerry calls it “Testing without Machinery” and suggests that technical product reviews are actually a way to provide information, and thus are Testing. He also lists some “instant reviews”: if you try to review something and you hear excuses like the ones below, you know that something is wrong:

  • It’s too early to review it.
  • It’s too small to bother with. What could possibly go wrong?
  • We’ll let you know as soon as we are ready.
  • We [tested][tried][demonstrated] it and it works okay.
  • ….

Those few pages are a treasure of information, available instantly once you ask for a review. This has inspired me to write more about Peer Code Reviews, so you should expect an article soon.

Let me end this with Jerry’s words:

If humans were perfect thinkers, we wouldn’t need to test our work. … But we are imperfect, irrational, value-driven, diverse human beings. Therefore, we test, and we test our testing.

Software Security workshop happened in Islamabad

Saima Karim (aka Mountain Girl) covered the “Insights into Software Security” workshop, which was conceived and conducted by Amir Shahzad.

“In Pakistan, tester meetups happen rarely, and when one does happen, it must be appreciated. Stella Technology organized a workshop named ‘Insight into Software Security’ last Saturday, i.e. Sep 23, 2017, in their Islamabad office. The main objective of the workshop was to provide the testers’ community a space to share knowledge about Software Security.

Continue reading at original post here: