Today’s DevOps and “Continuous Everything” initiatives require the ability to assess the risks associated with a release candidate—instantly and continuously. Continuous Testing provides an automated, unobtrusive way to obtain immediate feedback on the business risks associated with a software release candidate. It guides development teams to meet business expectations and helps managers make informed decisions in order to optimize the business value of a release candidate.
Continuous Testing is NOT simply more test automation. Given the business expectations at each stage of the SDLC, Continuous Testing delivers a quantitative assessment of risk as well as actionable tasks that help mitigate risks before they progress to the next stage of the SDLC. The goal is to eliminate meaningless activities and produce value-added tasks that drive the development organization towards a successful release—safeguarding the integrity of the user experience while protecting the business from the potential impacts of application shortcomings.
DevOps.com recently published a new Continuous Testing article by Parasoft’s Wayne Ariola. Here’s an excerpt from that article…
As agile development practices mature and DevOps principles infiltrate our corporate cultures, organizations are realizing the distinct opportunity to accelerate software delivery. However, when you speed up any process, immature practice areas and roadblocks, such as testing, become much more pronounced. It’s the difference between driving over a speed bump at 5 mph vs. 50 mph … at 50 mph, that speed bump is going to be quite jarring.
Accelerating any business process will expose systemic constraints that shackle the entire organization to its slowest moving component. In the case of the accelerated software development life cycle (SDLC), testing has become the most significant barrier to taking full advantage of more iterative approaches to software development. For organizations to leverage these transformative development strategies, they must shift from test automation to continuous testing.
Drawing a distinction between test automation and continuous testing may seem like an exercise in semantics, but the gap between automating functional tests and executing a continuous testing process is substantial. This gap will be bridged over time as the process of delivering software matures. Both internal and external influences will drive the evolution of continuous testing. Internally, agile, DevOps and lean process initiatives will be the main drivers that generate the demand for change. Externally, the expense and overhead of auditing government and industry-based compliance programs will be the primary impetus for change.
Any true change initiative requires the alignment of people, process and technology—with technology being an enabler and not the silver bullet. Yet there are some basic technology themes we must explore as we migrate to a true quality assurance process. In general, we must shift from a sole focus on test automation to automating the process of measuring risk. To begin this journey, we must consider the following:
Driven by business objectives, organizations must shift to more automated methods of quality assurance and away from the tactical task of testing software from the bottom up.
With quality assurance (QA) traditionally executing manual or automated tests, the feedback from the testing effort is focused on the event of a test passing or failing—this is not enough. Tests are causal, meaning that tests are constructed to validate a very specific scope of functionality and are evaluated as isolated data points. Although these standalone data points are critical, we must also use them as inputs to an expanded equation for statistically identifying application hot spots.
The SDLC produces a significant amount of data that is rather simple to correlate. Monitoring process patterns can produce very actionable results. For example, a code review should be triggered if an application component exhibits several correlated warning signs in a given continuous integration build.
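A trigger rule of this kind can be sketched in a few lines. The signal names and thresholds below are illustrative assumptions, not any specific product’s API; the point is that no single data point flags the component—the correlation of several in the same build does:

```python
# Sketch: decide whether a CI build should trigger a code review for a
# component. Signal names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BuildSignals:
    new_static_violations: int   # static-analysis findings introduced this build
    coverage_delta: float        # change in test coverage (percentage points)
    failed_tests: int            # test failures touching this component
    lines_changed: int           # churn in this component

def needs_code_review(s: BuildSignals) -> bool:
    """Flag a component when several independent risk signals line up.

    Each signal on its own is just a causal data point; the combination
    is what marks the component as a statistical hot spot.
    """
    risk_flags = [
        s.new_static_violations > 0,
        s.coverage_delta < -1.0,     # coverage dropped by more than a point
        s.failed_tests > 0,
        s.lines_changed > 200,       # heavy churn
    ]
    return sum(risk_flags) >= 3      # require multiple correlated signals

# All four warning signs present in one build -> review is triggered.
signals = BuildSignals(new_static_violations=4, coverage_delta=-2.5,
                       failed_tests=1, lines_changed=350)
print(needs_code_review(signals))  # True
```

In practice these inputs would come from the CI engine and static-analysis reports rather than being hand-built, but the aggregation step is the same.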
The ping-pong between testers and developers over the reproducibility of a reported defect has become legendary. It’s harder to return a defect to development than it is to send back an entrée from a world-renowned chef. Given the aggressive goal to accelerate software release cycles, most organizations will save a significant amount of time by just eliminating this back and forth.
By leveraging Service Virtualization for simulating a test environment and/or virtual machine record and playback technologies for observing how a program executed, testers should be able to ship development a very specific test and environment instance in a simple containerized package. This package should isolate a defect by encapsulating it with a test, as well as give developers the framework required to verify the fix.
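The shape of such a defect package can be sketched in miniature, with Python’s standard-library `unittest.mock` standing in for a full service-virtualization layer. The names here (`fetch_quote`, the pricing service, the zero-price response) are invented for illustration:

```python
# Sketch: a self-contained defect package -- the reproducing call plus a
# "recorded" response from a downstream service, so a developer can replay
# the defect without access to the live test environment.
# fetch_quote and the pricing service are hypothetical names.
from unittest import mock

def fetch_quote(pricing_service, sku):
    """Code under test (hypothetical): look up a price via a downstream service."""
    return pricing_service.get_price(sku)["price"]

def make_recorded_service():
    """Simulated environment: replays the exact response the tester observed,
    making the defect reproducible and deterministic."""
    service = mock.Mock()
    service.get_price.return_value = {"price": 0.0}  # recorded promo-item reply
    return service

def reproduce_defect():
    """The 'test' half of the package: one call that exhibits the behavior."""
    service = make_recorded_service()
    return fetch_quote(service, "PROMO-1")

print(reproduce_defect())  # replays the recorded response: 0.0
```

Because the simulated service is bundled with the reproducing call, there is no back and forth over environment setup: the developer runs the package, sees the behavior, fixes it, and re-runs the same package to verify.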
The current tools and infrastructure systems used to manage the SDLC have made significant improvements in the generation and integration of structured data (e.g., how CI engines import and present test results). This data is valuable and must be leveraged much more effectively (as we stated above in the “From Causal Observations to Probabilistic” section).
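As a concrete illustration of that structured data: most CI engines consume test results in JUnit-style XML, and a few lines of standard-library Python are enough to pull a risk-relevant summary out of such a report. The report content below is a made-up example in the common JUnit format:

```python
# Sketch: extract structured results from a JUnit-style XML test report --
# the kind of data CI engines already import and present. The report
# content is an invented example.
import xml.etree.ElementTree as ET

JUNIT_XML = """\
<testsuite name="checkout" tests="3" failures="1">
  <testcase classname="checkout.CartTest" name="test_add_item"/>
  <testcase classname="checkout.CartTest" name="test_apply_coupon">
    <failure message="expected 9.99, got 10.99"/>
  </testcase>
  <testcase classname="checkout.CartTest" name="test_empty_cart"/>
</testsuite>
"""

def summarize(report_xml: str) -> dict:
    """Turn a JUnit report into a small summary a risk dashboard could ingest."""
    suite = ET.fromstring(report_xml)
    cases = suite.findall("testcase")
    failed = [c.get("name") for c in cases if c.find("failure") is not None]
    return {
        "suite": suite.get("name"),
        "total": len(cases),
        "failed": failed,
    }

print(summarize(JUNIT_XML))
# {'suite': 'checkout', 'total': 3, 'failed': ['test_apply_coupon']}
```

Summaries like this, collected build over build, are the structured inputs that the probabilistic hot-spot analysis described earlier would correlate.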
The wealth of unstructured quality data scattered across both internal and publicly-accessible applications often holds the secrets that make the difference between happy end users and unhappy prospects using a competitor’s product…
Are you ready to evolve from automated testing to Continuous Testing? Read the new 70-page book Continuous Testing for IT Leaders to learn how Continuous Testing can help your organization answer the question, “Does the release candidate have an acceptable level of business risk?”
You’ll learn how to:
This book provides a business perspective on how to accelerate the SDLC and release with confidence. It is written for senior development managers and business executives who need to achieve the optimal balance between speed and quality.
Parasoft’s industry-leading automated software testing tools support the entire software development process, from the moment the developer writes the first line of code, through unit and functional testing, to performance and security testing, leveraging simulated test environments along the way.