Gartner Research: API Testing, Service Virtualization & Continuous Testing
April 21, 2016
3 min read
As agile development practices mature and DevOps principles begin to infiltrate our corporate cultures, organizations realize the distinct opportunity to accelerate software delivery. However, when you speed up any process, immature practice areas and roadblocks become much more pronounced. It's the difference between driving over a speed bump at 5 MPH versus 50 MPH: at 50 MPH, that speed bump is going to be quite jarring.
Accelerating any business process will expose systemic constraints that shackle the entire organization to its slowest-moving component. In the case of the accelerated SDLC, testing has become the most significant barrier to taking full advantage of iterative approaches to software development. For organizations to leverage these transformative development strategies, they must shift from test automation to Continuous Testing. Drawing a distinction between test automation and Continuous Testing may seem like an exercise in semantics, but the gap between automating functional tests and executing a Continuous Testing process is substantial.
There are many nuances associated with the transformation from automated testing to Continuous Testing. In this post, let’s focus on three key distinctions:
- Aligning “test” with business risk
- Ubiquitous access to complete test environments
- Extreme test automation at the API/message layer
Aligning Test and Business Risk
The most fundamental shift required in moving from automated to continuous is aligning “test” with business risk. Especially with DevOps and Continuous Delivery, releasing with both speed and confidence requires having immediate feedback on the business risks associated with a software release candidate. Given the rising cost and impact of software failures, you can’t afford to unleash a release that could disrupt the existing user experience or introduce new features that expose the organization to new security, reliability, or compliance risks. To prevent this, the organization needs to extend from validating bottom-up requirements to assessing the system requirements associated with overarching business goals.
Ubiquitous Access to Complete Test Environments
One of the biggest constraints associated with exercising meaningful tests is accessing a complete test environment—including the myriad dependent systems that the application under test (AUT) interacts with. Given the composite nature of today's applications, it is nearly impossible to stage a complete test environment. This is where Service Virtualization comes into play. Service Virtualization enables you to emulate the behavior of specific components in heterogeneous component-based applications such as API-driven applications, cloud-based applications, and service-oriented architectures. By simulating the AUT's interactions with missing or unavailable dependencies, Service Virtualization helps you ensure that data, performance, and behavior are consistent across the various test runs. It also helps you "shift left" testing so it can begin much earlier in each iteration and expose defects when they're fastest and easiest to fix.
As a general rule, you should be testing against the most production-like environment you can access, if not production itself. However, staging such an environment typically presents a sizable challenge in terms of cost, security, and privacy. Using simulation technologies such as Service Virtualization allows you to bypass the constraints associated with the dependent systems outside of your control in order to run meaningful end-to-end tests.
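To make the idea concrete, here is a minimal sketch of the principle behind Service Virtualization, built with only the Python standard library rather than any actual Service Virtualization product: a lightweight in-process stub emulates an unavailable dependency with canned, deterministic responses, so tests of the AUT's integration code can run without the real system. The endpoint path and payload here are hypothetical.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned responses standing in for a hypothetical downstream dependency
# (e.g., a credit-check service) that is unavailable in the test environment.
CANNED = {"/credit-check": {"approved": True, "limit": 5000}}

class VirtualService(BaseHTTPRequestHandler):
    """Emulates the dependency: same protocol, controlled behavior."""
    def do_GET(self):
        body = json.dumps(CANNED.get(self.path, {"error": "unknown"})).encode()
        self.send_response(200 if self.path in CANNED else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def start_virtual_service():
    server = HTTPServer(("127.0.0.1", 0), VirtualService)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def check_credit(base_url):
    """Code under test: calls the (virtualized) dependency over HTTP."""
    with urlopen(f"{base_url}/credit-check") as resp:
        return json.load(resp)

server = start_virtual_service()
result = check_credit(f"http://127.0.0.1:{server.server_address[1]}")
server.shutdown()
```

Because the dependency is simulated, `result` is deterministic across runs, which is exactly the consistency of data and behavior the paragraph above describes.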
Extreme Test Automation at the API/Message Layer
Testing at the API/message layer (services, message queues, database abstraction layers, etc.) offers several distinct advantages for enabling Continuous Testing at the speed of DevOps:
- Stability: While GUI tests often fail due to inconsequential application changes, a failure at the API/message level typically signals a fundamental flaw in the application logic—something likely to impact the core user experience. If you’re configuring a test suite failure to serve as a “gate” along the automated deployment pipeline, it’s important to ensure that every failure indicates a truly show-stopping problem.
- Speed: Traditional methods of testing, which rely heavily on manual testing and automated GUI tests that require frequent updating, cannot keep pace with the speed required for DevOps. Testing is delayed until the GUI is available, which is typically late in the process. Moreover, GUI tests are notoriously brittle and require significant updating with each application modification. API tests can be defined as soon as the service description (e.g., Swagger or RAML) is available, can be executed much earlier in the implementation process than GUI tests, and require minimal maintenance.
- Accurate risk assessment: In modern applications, the functionality exposed at the GUI layer is just the tip of the iceberg. The core of the application logic is controlled by the API/message layer. Without exhaustive testing of critical user transactions at the API/message layer, it's hard to be confident that today's highly distributed systems truly function as expected.
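To illustrate why API-layer tests can be written as soon as the service description exists, here is a hedged sketch: a tiny checker that validates a response payload against the field types declared in a hypothetical, hand-abbreviated Swagger-style schema fragment. In practice, a toolchain would derive such assertions from the full Swagger or RAML document; the endpoint and fields here are invented for illustration.

```python
# A hand-abbreviated, hypothetical fragment of a Swagger/OpenAPI-style
# schema for an order-lookup endpoint, mapping field names to expected types.
ORDER_SCHEMA = {
    "order_id": str,
    "total": float,
    "status": str,
}

def conforms(payload, schema):
    """Check that every declared field is present with the declared type."""
    return all(
        field in payload and isinstance(payload[field], ftype)
        for field, ftype in schema.items()
    )

# An API-layer test needs only the message contract, not a GUI:
response = {"order_id": "A-1001", "total": 42.50, "status": "shipped"}
print(conforms(response, ORDER_SCHEMA))                 # → True
print(conforms({"order_id": "A-1001"}, ORDER_SCHEMA))   # → False (fields missing)
```

A contract check like this fails only when the message-level agreement is broken, which is why an API-layer failure is a far more reliable "gate" signal than a brittle GUI assertion.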