Challenges with Thousands of Tests
Having no tests means you have a quality challenge with your solution. Having thousands of tests means you have a test-management challenge.
Yes, like testers and development teams, tests have to be managed.
We often find claims on tool websites that “we have thousands of tests running all the time to ensure quality.” Let me take you inside that world today.
In my post about Continuous Integration, I talked about the practice of moving some tests to an “Ignored” list to keep the build passing. When the test count gets big, the number of ignored tests grows with it, and a campaign has to be launched regularly to clear the list. Tests end up on this list for three reasons:
- Tests are failing because test code is bad
- Tests are failing because production code is bad
- Tests are fine; they just take too long or have other dependencies (data/hardware) that prevent them from running on build servers.
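One way to keep this triage manageable is to require a tagged reason on every ignored test, so the cleanup campaign can sort the list automatically. A minimal Python sketch; the enum values and tagging convention are my own illustration, not from any framework:

```python
from enum import Enum

class IgnoreReason(Enum):
    """The three reasons a test lands on the Ignored list."""
    BAD_TEST_CODE = "bad test code"
    BAD_PROD_CODE = "bad production code"
    ENVIRONMENT = "needs data/hardware or too slow"

def classify_skip(marker_reason: str) -> IgnoreReason:
    """Map a skip marker's free-text reason to a category,
    assuming the convention that it starts with one of the tags."""
    for reason in IgnoreReason:
        if marker_reason.startswith(reason.value):
            return reason
    raise ValueError(f"untagged skip reason: {marker_reason!r}")
```

With a convention like this, the weekly cleanup can split the list into “fix the test,” “file a bug,” and “move to the scheduled run” buckets without reading every test.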
For bad test code, a public-shaming (a.k.a. peer pressure) email that lists the authors with the most ignored tests helps. Keep sending those emails, as some people are more “shame-proof” than others.
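To draft that email, something has to count the ignored tests first. A minimal sketch, assuming pytest-style skip markers and `test_*.py` file names; mapping files to authors (e.g. via `git blame`) is left as an exercise:

```python
import re
from pathlib import Path

# Matches pytest-style ignore markers; adjust for your framework
# (e.g. JUnit's @Ignore or NUnit's [Ignore]).
SKIP_PATTERN = re.compile(r"@pytest\.mark\.(skip|skipif|xfail)")

def count_ignored_tests(root: str) -> dict[str, int]:
    """Tally skip markers per test file under `root`,
    worst offenders first, ready for the weekly email."""
    counts: dict[str, int] = {}
    for path in Path(root).rglob("test_*.py"):
        n = len(SKIP_PATTERN.findall(path.read_text()))
        if n:
            counts[str(path)] = n
    return dict(sorted(counts.items(), key=lambda kv: -kv[1]))
```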
For bad production code, make sure the issue is reported in the bug tracking system and elevate its status. Remind the team that fixing these brings two benefits: a) the bug count goes down and b) the test count goes up.
For the third category, you need a strategy to run those tests outside the build on a regular basis. For example, we exclude all performance tests from the build because they take too long. We then run them each week and compare the results with benchmarks to see if they are still good. Bad results are shared with the team.
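The weekly comparison can be as simple as flagging tests that got slower than a recorded baseline. A hypothetical sketch; the 10% tolerance and the data shapes are my assumptions, not a specific tool's API:

```python
def compare_with_benchmark(results: dict[str, float],
                           baseline: dict[str, float],
                           tolerance: float = 0.10) -> list[str]:
    """Return the tests whose runtime (in seconds) regressed more
    than `tolerance` (10% by default) over the recorded baseline.
    Tests with no baseline entry are skipped, not flagged."""
    regressions = []
    for test, seconds in results.items():
        expected = baseline.get(test)
        if expected is not None and seconds > expected * (1 + tolerance):
            regressions.append(test)
    return regressions
```

The output of a run like this is exactly what gets shared with the team: a short list of regressions, not a wall of timings.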
Tests that you think run, but don’t
The machinery that includes all tests in your repository is itself a piece of code, and it can have bugs too. When you first write and commit your tests, you assume they run every time; sometimes they don’t.
A recent audit of my project revealed that about 10% of the tests were “not in the system.” The build files had sections that were commented out for some reason and then stayed that way. There were also assumptions about folders and file names that not all tests honored after some changes. Bringing those tests back into the system gave us additional coverage right away.
What do tests do?
The quality of tests remains questionable: unless you review what each test does, you can’t say how good your tests are. But there is always the first step of measuring code coverage.
Having thousands of tests doesn’t mean that you have good coverage or that all methods are being tested. In the audit I mentioned above, we found that our overall LOC (lines of code) coverage was still 50%. We also found that:
- Module A has 500+ tests with 70% LOC coverage
- Module B has 500+ tests with 40% LOC coverage
So just looking at the number of tests, you can’t guess the coverage. Note that coverage is not a perfect measure of quality either, but as I said, it is a good start until you can review each test.
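Computing per-module LOC coverage makes that contrast obvious. A small sketch; the covered/total line counts below are hypothetical numbers chosen to reproduce the 70% and 40% figures from the audit:

```python
def loc_coverage(covered: int, total: int) -> float:
    """LOC coverage as a percentage; 0.0 for an empty module."""
    return 100.0 * covered / total if total else 0.0

# Hypothetical per-module stats: same test count, very different coverage.
modules = {
    "Module A": {"tests": 500, "covered": 700, "total": 1000},  # 70%
    "Module B": {"tests": 500, "covered": 400, "total": 1000},  # 40%
}
for name, m in modules.items():
    print(f"{name}: {m['tests']}+ tests, "
          f"{loc_coverage(m['covered'], m['total']):.0f}% LOC coverage")
```

Reporting coverage next to test count per module, rather than one overall number, is what surfaces gaps like Module B’s.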
Do you have thousands of tests in your project? What kinds of challenges do you have to deal with?