As organizations continue contending with the impact of COVID-19, businesses are working out how to redefine testing practices in the “new reality.” The ongoing pandemic has compromised organizations’ ability to test and deliver software in several ways.
First and most importantly, resource constraints continue to rise because many workers simply cannot work from home in the same capacity as they do in an office. Additionally, many enterprises leverage resources from global system integrators, and different geographies have established their own rules about work conditions. Many organizations cannot absorb this impact, so leaders are looking for ways to meet the same testing requirements with fewer resources.
Next, organizations need to think about the ways that constrained and remote teams collaborate. Continued isolation has created a sense of malaise that significantly impacts teams working remotely. For those of us who worked in an office space, it was all too easy to walk into somebody's office and strike up a conversation about the latest release.
This level of social interaction gave us the ability to discuss our day-to-day tasks and air concerns about quality and process. Working in a purely remote capacity constrains those activities and puts us into a state of either total isolation or, in the case of most organizations, total distraction. It's challenging to find the right ways to work with remote teams so that you can strike a balance between too little communication and communication overkill. Having the right collaboration software, governance, and best practices helps organizations thrive in the new reality.
Next, IT organizations have to radically rethink their software delivery mechanisms. "Mobile-first" becomes critical for organizations delivering digital experiences to their customers. This is especially important because you can't physically interact with your customers in a store, call centers are severely impacted, and your digital presence now largely represents your brand. Everything has moved to a purely digital realm: ordering food through an app, banking online, having critical pharmaceuticals delivered, even buying clothes. Organizations need to develop and deliver these experiences at speed so they don't lose connection with their customers in this changing world.
Attached to this challenge is the consideration organizations must make about the actual delivery mechanics. While radically rethinking and designing the digital experiences for our customers, we need to think about how we develop, test, and deliver digital content through the DevOps pipeline.
The COVID-19 pandemic pushed many organizations to modernize their delivery mechanisms by shifting their software into cloud ecosystems and low-code development platforms so that geographically separated developers and testers can collaborate and iterate to deliver the best possible experiences.
We’re seeing a rise in migrations to platforms like Salesforce, Guidewire, Mendix, and others. Not just to enable rapid delivery but to take advantage of all the capabilities inherent in those platforms for a resource-constrained organization.
On top of that, as software development and deployment through the CI pipeline modernize, we're seeing a migration to cloud platforms such as Azure DevOps, Pivotal Cloud, and Amazon Web Services (AWS).
Software organizations must endure. They must deliver highly interactive digital experiences to their customers at an accelerated rate with constrained resources. But something has to give.
Quite often, with these contending forces, you end up sacrificing quality in the process. It’s more important than ever to ensure that quality is a priority so that customers who directly interact with you via digital experiences don’t suffer. The best way to continue to provide quality experiences in a constrained world is to seek “efficiency modifiers” for your testing practice.
What are these “efficiency modifiers”? They come in several different forms:
There are many things to test in a modern application: the frontend UI, middleware services, databases, backend systems, and third-party dependencies. Each of these layers adds complexity to the overall testing process. Many software testing tool vendors offer solutions that test pieces of this architecture. What becomes important is ensuring that you can accurately test each component in its entirety, from the first line of code written through intelligent UI testing of the completed application.
A shortcut to designing and optimizing these tests is to leverage artificial intelligence. Organizations are looking for solutions that incorporate AI to optimize the test creation process. This can take the form of intelligent code scanning that identifies bad practices as code is written, automatic generation of unit tests, identification of patterns and relationships in API sequences to create comprehensive testing scenarios, and AI-powered self-healing at the UI layer to recover from changing application interfaces.
It isn't enough to just create a whole bunch of tests. To rapidly validate the application, you need to understand how each test correlates to the business requirements, so that you can understand priority, and how it correlates to the underlying code, so that you can begin to understand test completeness.
So, a powerful efficiency modifier to a constrained testing team is to build a testing practice where test cases are tightly coupled to the business requirements and the development code to create a comprehensive and holistic view of quality.
Now, once you have a whole series of tests and can prioritize their results by linking them to requirements, you need to be able to execute those tests in the most effective way possible. Most organizations run their entire suite of tests overnight, then spend half the next day poring through the results, trying to determine whether something has actually gone wrong or whether there was just some "automation noise."
The best way to gain efficiency in your test execution is to perform smart test execution: running only those test cases that you need to validate the changes made to your application. By using technologies such as smart test execution, you can dramatically shorten the feedback cycle between a change and a meaningful test result.
There are many pieces to this quality puzzle, as described above. Many of these testing practices are well understood, such as testing databases or testing a UI. But a discipline that's often overlooked and left to the later stages of application testing is API testing.
API testing is the practice of validating interfaces in your application at the service or component level. These APIs are the mechanisms by which machines communicate with each other and often serve as a breaking point for applications once they are brought together. Especially in today’s world of service-oriented or microservice architectures, this critical integration point is of the utmost importance when it comes to creating a digital experience.
Typically, the mobile application is just a frontend to a whole series of services, and those services are what provide your critical business value. As such, organizations need to build a comprehensive API testing practice in parallel with the rest of their testing techniques.
This is easier said than done, however, because many API interfaces are poorly documented or include hidden and undocumented APIs. That makes it challenging for testing teams to understand which APIs to test, in which sequence, and how to ensure that they've covered the right use cases.
Once an organization decides to embrace API testing as an efficiency modifier, the key is to start in a meaningful way. The best way to start this process is to identify an inventory of available APIs in your application architecture. Parasoft SOAtest’s smart API test generator enables you to discover APIs by recording interactions between the application and the API services.
The technology leverages artificial intelligence (AI) to guide the construction of meaningful API tests by understanding patterns and relationships in the API sequences. It then uses that understanding to create automated API tests that run continuously to validate the interactions between your various system components.
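The first step of that process can be sketched simply: from a log of recorded traffic, derive the inventory of distinct operations in the order they were first observed. The `recorded` list below is illustrative; a recording tool would capture it by proxying real application traffic.

```python
# Hedged sketch: deriving an API inventory from recorded traffic.
# The `recorded` list is illustrative sample data, not real captured traffic.

recorded = [
    ("POST", "/api/login"),
    ("GET",  "/api/accounts"),
    ("GET",  "/api/accounts/42"),
    ("POST", "/api/transfer"),
    ("GET",  "/api/accounts/42"),   # duplicate call, seen again later
]


def build_inventory(traffic):
    """Unique (method, path) pairs in first-seen order: the API inventory."""
    seen, inventory = set(), []
    for call in traffic:
        if call not in seen:
            seen.add(call)
            inventory.append(call)
    return inventory
```

Preserving first-seen order matters because the recorded sequence hints at real usage patterns, such as logging in before fetching accounts.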
At the same time, you can create pure Selenium UI level tests with Parasoft Selenic. UI testing is an important component of the overall testing practice, but issues of maintainability can arise with a purely UI focused testing strategy. Parasoft Selenic uses AI to identify test script stability issues and can self-heal tests at runtime.
While not the focus of this conversation, combining the two ensures broad coverage of the application and helps you gain confidence that your application interfaces are not at risk.
If you already have existing Selenium-based UI tests, you can use Parasoft Selenic to extract the relevant API calls and feed them into the API testing engine. By taking an inventory of available interfaces and creating automated tests for those interfaces you can jumpstart building an API testing practice.
This is a very complex problem. How do you know when you have tested enough? There are many debates around this subject, but I think it breaks down to three metrics.
Code coverage is largely easy to obtain. You instrument your application with a code level monitor and exercise your applications through the APIs. The code level monitor will identify classes and methods that are interacted with and deliver that information back into your reporting and analytics engine.
By their very nature, APIs do not expose all the available code functionality, so your organization needs to identify which code is reachable through the APIs. Once you have this information, you can set a threshold for the level of code coverage you want to achieve through your API testing. Generally, 80% is a good level to achieve.
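In a CI pipeline, that threshold usually becomes a simple gate. The sketch below shows the arithmetic, with illustrative line counts; the important detail is that the denominator is only the code reachable through the APIs, not the whole application.

```python
# Simple coverage-gate sketch. The 80% threshold matches the rule of thumb
# above; the line counts passed in would come from your coverage tool.

def coverage_gate(covered_lines: int, reachable_lines: int,
                  threshold: float = 0.80) -> bool:
    """Pass when covered/reachable meets the threshold, e.g. 80%."""
    if reachable_lines == 0:
        return True  # nothing reachable through the APIs to cover
    return covered_lines / reachable_lines >= threshold
```

For example, 850 covered lines out of 1,000 reachable lines (85%) passes the gate, while 700 out of 1,000 (70%) fails it.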
Code coverage is only part of the story though. You also want to look at API coverage. API coverage is a metric that indicates, of the total available APIs that are accessible, how many of these APIs are tested with your automated API tests. There may be many cases where, although you’re achieving a high level of code coverage, you still have risk in your application because you haven’t validated certain key APIs.
Perhaps these key APIs only touch a small portion of the code, so they get lost in the overall code coverage, but because they touch a critical component, they present a significant risk if they misbehave or are intentionally abused.
You can measure API coverage in your automated testing solution by deriving the delta between the services available in the service definitions (such as Swagger, OpenAPI, and others) and the endpoints that your API tests actually access. Through this metric, you can see the total number of services covered against the total number of services available. Generally, 90% is a good level to achieve, depending on the size of the APIs.
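That delta computation can be sketched in a few lines. The spec fragment below mimics the "paths" section of an OpenAPI definition, and the set of tested operations is illustrative; a real implementation would parse the actual definition file.

```python
# Sketch of the API coverage delta: operations declared in an OpenAPI-style
# definition versus the operations your automated tests actually exercise.
# Both the spec fragment and the tested set are illustrative sample data.

spec = {  # trimmed OpenAPI-style "paths" section: path -> declared methods
    "/accounts":      ["get", "post"],
    "/accounts/{id}": ["get", "delete"],
    "/transfer":      ["post"],
}

tested_ops = {  # (METHOD, path) pairs hit by the automated API tests
    ("GET", "/accounts"), ("POST", "/accounts"),
    ("GET", "/accounts/{id}"), ("POST", "/transfer"),
}


def api_coverage(spec_paths, tested):
    """Return (coverage ratio, sorted list of untested operations)."""
    declared = {(m.upper(), p) for p, methods in spec_paths.items()
                for m in methods}
    covered = declared & tested
    return len(covered) / len(declared), sorted(declared - tested)
```

Here four of the five declared operations are tested (80% API coverage), and the report singles out the untested `DELETE /accounts/{id}` as residual risk.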
Finally, we need to talk about requirements coverage. Although code coverage and API coverage indicate what percent of the application you’re touching, they don’t indicate whether it’s achieving what you intended for your customers.
Requirements coverage is the process of associating requirements to test scenarios. You must establish that the automated test scenario validates the use case from a technical level. You would then be able to understand through execution whether all your requirements are covered. If not, which requirements remain uncovered? And what is their business priority?
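A minimal sketch of that association looks like the following. The requirement IDs, priorities, and test names are illustrative; in practice this traceability usually lives in a requirements management or test management tool.

```python
# Hedged sketch of requirements coverage: each requirement links to the test
# scenarios that validate it. Requirement IDs and test names are made up.

requirements = {
    "REQ-1": {"priority": "high",   "tests": ["login_ok", "login_bad_pw"]},
    "REQ-2": {"priority": "medium", "tests": ["transfer_limits"]},
    "REQ-3": {"priority": "high",   "tests": []},   # no scenario linked yet
}


def uncovered_requirements(reqs, passed_tests):
    """Requirements with no linked tests, or whose tests did not all pass."""
    return sorted(
    	r for r, info in reqs.items()
    	if not info["tests"]
    	or not all(t in passed_tests for t in info["tests"])
    )
```

After a test run where `login_ok`, `login_bad_pw`, and `transfer_limits` all pass, `REQ-3` is still flagged as uncovered, and its high priority tells you where the remaining business risk sits.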
One could argue that requirements coverage is the most important of the three metrics, with 100% coverage as the ideal. But, in reality, you must use all three metrics in combination to fully understand when you have an acceptable level of risk for release.
Continuous feedback is vital in a remote work environment. We must be able to react to quality issues that manifest in our digital experiences as quickly and meaningfully as possible. Since APIs represent the closest you can get to the code without actually looking at the sources, they're a good first line of defense for quality engineering to identify defects that have been introduced into the application and could propagate to users. Automated API testing allows you to validate your APIs on an ongoing basis, potentially as a build step in your CI/CD pipeline. A key to making this process scalable is to embrace smart test execution.
As previously mentioned, smart test execution is a blanket term referring to the process of only executing the tests required to validate the changes. Those changes could come from the code or from the requirement.
By implementing smart test execution into your CI/CD or DevOps process, you can execute the appropriate API tests to validate your changing architecture. By not executing the entire suite of tests for each build, you can significantly reduce the amount of time between defect detection and remediation. These fast feedback cycles are vital in a resource-constrained world.
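The selection logic at the heart of smart test execution can be sketched as an intersection between the change set and per-test coverage data. The test-to-file map below is illustrative; real tools derive it automatically from coverage recorded during previous runs.

```python
# Hedged sketch of smart test selection: run only the tests whose covered
# files intersect the change set. The map below is illustrative sample data;
# real tools build it from per-test coverage measurements.

test_touches = {
    "test_login":    {"auth.py", "session.py"},
    "test_transfer": {"transfer.py", "accounts.py"},
    "test_report":   {"reports.py"},
}


def select_tests(changed_files, touches):
    """Tests impacted by the change set, in name order."""
    changed = set(changed_files)
    return sorted(t for t, files in touches.items() if files & changed)
```

A commit that only touches `accounts.py` triggers just `test_transfer`, so the build reports a meaningful result in a fraction of the time a full-suite run would take.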
The world has changed. Let’s face it. It’s going to be like this for the foreseeable future. But we don’t need to see this as a time to fret. Rather we can use this as an opportunity for digital transformation.
By looking inward at our quality processes and identifying areas to add efficiency modifiers, we can come out of this pandemic in a much more favorable position. API testing is one of many practices an organization can embrace to gain valuable insight into the reliability and scalability of its applications.
To learn more about building an API testing practice, watch our on-demand webinar.
A Product Manager at Parasoft, Chris strategizes product development of Parasoft’s functional testing solutions. His expertise in SDLC acceleration through automation has taken him to major enterprise deployments, such as Capital One and CareFirst.