Agile is often mis-sold to senior management as a way of achieving quicker time-to-market, when the objective is really more accurate delivery to market. The dirty secret that we don’t tell anyone is that this actually comes at a cost… slower time-to-market! Yes, we are releasing more often (i.e. “sooner”), but it ultimately takes longer to get the complete functionality to market. Why does it take longer when all we are doing is breaking the problem into smaller pieces? By far the biggest culprit is late-cycle defect detection, along with the bottlenecks introduced by the measures taken to reduce that risk.
Much of the promised velocity of agile development is undermined by incremental code changes, specifically their impact on testing and overall system stability. Each sprint typically concludes with a dash to the finish line, as QA/testing focuses on validating the newly implemented functionality. Then, because the indirect impact of those code changes isn’t well understood, organizations run a full regression as the release approaches. This often uncovers numerous issues late in the cycle, resulting in late hours and difficult business decisions.
There has to be a better way!
Due to the complexity of today’s codebases, every code change, however innocuous, can subtly impact application stability and ultimately “break the system.” These unintended consequences are impossible to discover through manual inspection, so testing is critical to mitigate the risk they represent. But unless you understand what needs to be re-tested, you can’t achieve an efficient testing practice. If you are testing too much each sprint, you’re losing many of the gains made by agile development. If you are testing too little, you expose yourself to late-cycle detection.
What is needed is a way to identify which tests need to be re-executed and focus the testing efforts (unit testing, automated functional testing, and manual testing) on validating the features and related code that are impacted by the most recent changes. Using a combination of Parasoft’s code analysis engines (Jtest, C/C++test, dotTEST) and the Process Intelligence Engine (PIE) within Parasoft DTP, developers and testers can understand the changes in the codebase between builds, and get to the promise of Agile. This is called Change-Based Testing.
The key is knowing which tests are available to validate the code changes, and here is where Parasoft’s correlated coverage delivers the goods. By understanding which files have changed and which specific tests touched those files, DTP’s analysis engine (PIE) can analyze the delta between two builds and identify the subset of tests that need to be re-executed. The image below shows a widget from the DTP dashboard that displays a pie chart of the results from CBT analysis. This chart shows the subset of tests that are available to validate the code changes, categorized by their test status: passed, failed, incomplete, and in need of retest.
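To make the idea concrete, here is a minimal sketch of change-based test selection. It is purely illustrative, not Parasoft’s actual implementation: it assumes you already have per-test coverage data (which files each test exercises) and the set of files that changed between two builds, then intersects the two. All names and data below are hypothetical.

```python
# Illustrative sketch of change-based test selection (NOT Parasoft's
# implementation): given per-test file coverage and the files that changed
# between two builds, select only the tests that touched changed files.

def select_tests_for_retest(coverage, changed_files):
    """coverage: dict mapping test name -> set of files it exercises.
    changed_files: set of files modified between baseline and new build.
    Returns the subset of tests that need to be re-executed."""
    return {test for test, files in coverage.items()
            if files & changed_files}  # non-empty intersection = impacted

# Hypothetical correlated-coverage data.
coverage = {
    "test_login":    {"auth.c", "session.c"},
    "test_checkout": {"cart.c", "payment.c"},
    "test_search":   {"search.c"},
}

changed = {"payment.c", "session.c"}  # the delta between two builds
print(sorted(select_tests_for_retest(coverage, changed)))
# -> ['test_checkout', 'test_login']
```

The point of the sketch is the reduction: only two of the three tests touch changed files, so `test_search` can safely be skipped this cycle.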
This high-level view indicates that there are a number of failures that the modified code has introduced and that there are a number of tests that have not yet been executed but are available to further validate the changes.
A status of Pass, Fail, or Incomplete indicates that these tests were already executed against the build, either as part of a fully automated test process (such as a CI-driven build step) or while testing the new functionality. The tests with the status of retest, however, are either manual tests that were not yet executed or tests that are part of automation runs that are not scheduled to execute during the current sprint.
Digging deeper into the chart, we can quickly get insight into where in the code the changes have occurred, how the existing tests correlate to those changes, and where testing resources need to be focused.
From here, we can create a test plan that addresses failed and incomplete test cases with the highest priority, and uses the retest recommendations to focus the scheduling of additional automated runs and to prioritize manual testing efforts.
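The prioritization described above can be sketched as a simple ordering rule. This is an assumed scheme for illustration only (failed and incomplete first, then retest recommendations, then passes); the actual ranking in a real test plan would reflect team priorities.

```python
# Hypothetical prioritization of a change-based test plan:
# failed and incomplete tests first, then tests flagged for retest,
# then tests that already passed.
PRIORITY = {"failed": 0, "incomplete": 1, "retest": 2, "passed": 3}

def build_test_plan(test_statuses):
    """test_statuses: dict mapping test name -> status string.
    Returns test names ordered from most to least urgent."""
    return sorted(test_statuses, key=lambda t: PRIORITY[test_statuses[t]])

# Hypothetical statuses from a CBT analysis run.
statuses = {
    "test_login":    "passed",
    "test_checkout": "failed",
    "test_search":   "retest",
    "test_profile":  "incomplete",
}
print(build_test_plan(statuses))
# -> ['test_checkout', 'test_profile', 'test_search', 'test_login']
```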
The Violation Explorer in DTP provides the interface for defining and managing the test plan. Navigating the tests and results, the Explorer reveals details on each test case. Using the Prioritization view to set test metadata, users can assign owners and actions and set the priority of each test case.
So, how does this help an agile process? Simply put, it’s the ability to quickly and succinctly identify where your testing resources need to be applied. By testing only what’s needed, rather than everything (or simply guessing), testing time is greatly reduced. Quality goes up, and the sprint gets done on time.
How would this work in practice? While the outcomes of Change-Based Testing (CBT) analysis can be used in several different ways, I would suggest the following workflow as the most practical for focusing sprint-based testing efforts:
We need to boost testing productivity in agile development. Testing is a major bottleneck for continuous delivery, with too many defects being identified at the end of the release cycle because testing effort is misdirected. To yield the best results, focus testing efforts on the impact of the changes you’re making, and unlock agile to accelerate delivery to market.
Mark Lambert leads strategic initiatives at Parasoft where he focuses on identifying and developing testing solutions and strategic partnerships for targeted industry verticals to enable clients to accelerate the successful delivery of high quality, secure, and compliant software. Since joining Parasoft in 2004, Lambert has held several positions, including VP of Professional Services and VP of Products.