Pitfalls of Continuous Integration
Continuous Integration (CI) has caught fire in recent years and has become one of the hallmarks of agile teams. Simply put, CI is a safety net that enables faster changes to the code. Since we live in times where the rate of change is higher than ever, we aim for quicker releases and shorter life cycles. Wikipedia defines the concept well: “it is the practice of merging all developer working copies with a shared mainline several times a day”, and lists practices like merging very frequently, automating the build, self-running tests, building every commit, and so on.
My favorite diagram to explain the concept is given below. Imagine a team where Ali works from Pakistan, Jim works from the US, and Rakesh adds value from India. Whatever the time of day, whenever Ali, Jim, or Rakesh checks code in to the repository, this is what happens.
Thus if a single test that was passing earlier fails, the commit is denied and the respective programmer is notified.
Though tools like Jenkins, Hudson, etc. are commonly associated with CI, CI is a concept, and you can implement it with a simple batch file. The project I’m working on these days has build scripts written in Python that do this magic.
Okay, so that was the background, and it looks like a very good solution for producing quality at a high rate. But CI is not the end of quality measures; rather, it is one of them. Being at the receiving end of the code, as a tester I can tell that teams that implement CI produce better quality than others. But I have also observed some dangers lying within.
What the tests can’t see escapes. When you have 10,000 tests in a project, you fall into the trap of numbers. Unless you know which areas those tests actually cover, the code the tests never touch will always have issues. One way around this is a periodic review of the tests by the entire team, especially by skilled testers, to see what else can be added to the suite.
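A crude way to see past the raw test count is to check what the tests actually exercise. The sketch below uses Python's tracing hook to record which functions a test run touches; the “production” functions and the incomplete test are invented for the example, and a real project would use a proper coverage tool instead.

```python
# Illustrative sketch: find functions that no test ever calls.
import sys


def trace_called_functions(run_tests):
    """Run the given test callable and record every function name executed."""
    called = set()

    def tracer(frame, event, arg):
        if event == "call":
            called.add(frame.f_code.co_name)
        return None  # no line-level tracing needed

    sys.settrace(tracer)
    try:
        run_tests()
    finally:
        sys.settrace(None)
    return called


# Toy "production code" and its incomplete test suite:
def add_tax(amount):
    return amount * 1.17


def apply_discount(amount):  # never exercised by the test below
    return amount * 0.9


def run_tests():
    assert round(add_tax(100), 2) == 117.0


called = trace_called_functions(run_tests)
untested = {"add_tax", "apply_discount"} - called
print("Functions no test touched:", untested)
```

Even with 10,000 tests, a report like this can reveal that whole areas of the code are never touched, which is exactly the trap of numbers described above.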
The ignored hell. The design is such that if a single test fails, the build fails. So when tests fail (which they can at any time) and you need a build for whatever reason, the tendency is to put those tests on the ignored list. Tests that take time are also put on the ignored list for faster delivery of builds, so usually all performance tests end up there. Over time, these ignored tests get ignored by the team as well and become a risk. One approach is to keep a backlog item to review the ignored tests at the end of each iteration.
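That end-of-iteration review is easier if the ignored list is generated rather than remembered. Here is a small sketch that walks a unittest suite and reports every skipped test along with its reason for being skipped; the test class and names are invented for illustration.

```python
# Sketch: list every skipped ("ignored") test in a suite so the team can
# review the ignored list each iteration instead of forgetting about it.
import unittest


class PaymentTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertTrue(True)

    @unittest.skip("slow: moved out of the CI build")
    def test_performance_under_load(self):
        pass


def find_skipped(suite):
    """Walk a unittest suite and return the ids of skipped tests."""
    skipped = []
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            skipped.extend(find_skipped(item))
        else:
            method = getattr(item, item._testMethodName)
            # unittest.skip marks the test method with this attribute.
            if getattr(method, "__unittest_skip__", False):
                skipped.append(item.id())
    return skipped


suite = unittest.defaultTestLoader.loadTestsFromTestCase(PaymentTests)
print("Ignored tests:", find_skipped(suite))
```

Printing this list as part of every build keeps the ignored tests visible, so they cannot quietly pile up between iterations.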
Programmers have an affiliation with their code. The bulk of these tests are unit tests, and even the scenario-level or integration tests are mostly written by programmers. Some project managers get so obsessed with these tests that they start believing they don’t need any specialized tester on the team and that programmers can do all the testing. The problem is that whoever creates something has an affiliation with it. Programmers are no exception: they love the code they have written, so their tests are focused on proving what the code can do rather than finding where it fails. One trick that works is pairing programmers with testers to test the code. Even better is to have testers on the team who write code to test code. They’ll bring in all the stuff that programmers can’t see.
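The difference in mindset shows up even in a tiny example. Below, the first test is the kind a programmer tends to write (prove it works), while the rest are the kind a tester writing code tends to add (probe where it breaks). The `divide` function is invented purely for illustration.

```python
# Sketch of the two mindsets around the same piece of code.
def divide(a, b):
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b


# Programmer's test: confirms the code does what it was written to do.
assert divide(10, 2) == 5

# Tester's tests: probe the edges where the code might fail.
assert divide(1, 3) != 0  # no silent integer truncation
try:
    divide(1, 0)
    raise AssertionError("expected an error on division by zero")
except ValueError:
    pass  # the failure mode is handled as intended
```

Neither set of tests is wrong; the point is that a suite written only in the first mindset leaves the second kind of question unasked.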
Have you experienced working in a team that operates in CI mode? Are there any successes or cautions you’d like to share?