What Is Artificial Intelligence in Software Testing?
A lot has been said about artificial intelligence and how it’s transformed how we do things. When it comes to software testing, what's the place of AI? This post highlights how AI helps achieve robust software testing.
The emergence of artificial intelligence (AI) continues to transform the technological landscape, and its application across software development continues to grow. One area where AI adoption can still advance is software testing.
Software testing is crucial to releasing products that meet both compliance standards and user expectations for quality. But because "AI" is used so loosely, it's worth pinning down what it actually means in software testing. In this post, we'll answer three questions.
- How does AI in the context of software test automation differ from its broader definition?
- What do we mean when we talk about AI and its sister term, machine learning?
- What are the benefits of using AI and machine learning to advance state-of-the-art API testing?
Let’s find out.
What Is AI & How Is It Changing the Dynamics of Software Testing?
Artificial intelligence is one of the most overloaded buzzwords in the digital marketplace. "AI" conjures up images of all-powerful supercomputers hell-bent on human destruction, voice-controlled assistants like Alexa and Siri, computer chess opponents, and self-driving cars.
Wikipedia defines AI research as "the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals." But that's a little too abstract.
I like to think of AI as the ability of a computer program or machine to think (reason for itself) and learn (collect data and modify future behavior in a beneficial way).
It’s in this definition that we start to see something more meaningful in the context of what AI means for software development tools and technology.
AI in Software Test Automation
Software testing is the process of evaluating software to ensure that it can do what it’s designed to do efficiently. It subjects software infrastructures to a series of functional and nonfunctional testing scenarios. When teams test software, they can discover and resolve runtime defects, scalability issues, security vulnerabilities, and more.
The software testing process is rigorous and repetitive, hence the need for automation. But for test automation to be truly efficient and seamless, it needs to incorporate AI.
The use of AI in software development is still evolving. Its current role in automated software testing is modest compared to more advanced areas such as self-driving systems, voice assistants, machine translation, and robotics.
The application of AI in software testing tools is focused on making the software development life cycle (SDLC) easier. Through the application of reasoning, problem solving, and, in some cases, machine learning, AI can be used to help automate and reduce the amount of mundane and tedious tasks in development and testing.
You may wonder, "Don't test automation tools do this already?" Existing test automation tools do apply some AI already, but they have limitations.
Where AI shines in software development is when it’s applied to remove those limitations, enabling software test automation tools to provide even more value to developers and testers. The value of AI comes from reducing the direct involvement of the developer or tester in the most mundane tasks. We still have a great need for human intelligence in applying business logic, strategic thinking, creative ideas, and the like.
For example, consider that most, if not all, test automation tools run tests for you and deliver results. Most don’t know which tests to run, so they run all of them or some predetermined set.
What if an AI-enabled bot could review the current state of test statuses, recent code changes, code coverage, and other metrics, and then decide which tests to run and run them for you?
Bringing in decision-making that’s based on changing data is an example of applying AI. Good news! Parasoft handles automated software testing at this level.
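Parasoft doesn't publish the internals of that decision engine, but a minimal sketch conveys the idea. Here, the `TestRecord` type, its signals, and the weights are all illustrative assumptions, not a real API:

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of a "which tests should run first?" decision, not a
// Parasoft API. TestRecord, its signals, and the weights are all invented.
record TestRecord(String name, boolean failedLastRun, Set<String> coveredFiles) {}

public class TestPrioritizer {
    /** Orders tests so that likely failures surface as early as possible. */
    public static List<TestRecord> prioritize(List<TestRecord> tests, Set<String> changedFiles) {
        return tests.stream()
                .sorted(Comparator.comparingInt((TestRecord t) -> -score(t, changedFiles)))
                .toList();
    }

    private static int score(TestRecord t, Set<String> changedFiles) {
        int score = 0;
        if (t.failedLastRun()) score += 2;  // re-check known failures first
        if (!Collections.disjoint(t.coveredFiles(), changedFiles)) {
            score += 3;                     // favor tests touching changed code
        }
        return score;
    }
}
```

A scheduler that runs tests in this order reaches the likely failures sooner without discarding the rest of the suite.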
How Machine Learning Enhances AI
So, what about machine learning?
Machine learning (ML) can augment AI by applying algorithms that allow the tool to improve automatically by collecting copious amounts of data produced by testing.
ML research is a subset of overall AI research, focused on making decisions based on previously observed data. This is an important aspect of AI overall, since intelligence requires modifying decision-making as learning improves. In software testing tools, though, machine learning isn't always necessary. Sometimes an AI-enabled tool is best manually fine-tuned to suit the organization using it, with the same logic and reasoning applied every time, regardless of the outcome.
In other cases, data collection is key to the decision-making process, and machine learning can be extremely valuable, requiring some data initially and then improving or adapting as more data is collected. For example, code coverage, static analysis results, test results, or other software metrics, over time, can inform the AI about the state of the software project.
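For a flavor of how this kind of data-driven adaptation works in general, here's a toy sketch (not a Parasoft component) in which a quality signal, such as a module's test failure rate, updates with every observed run. The smoothing factor and alert threshold are illustrative assumptions:

```java
// Toy sketch of data-driven adaptation, not a Parasoft component: an
// exponential moving average of the observed failure rate that updates with
// every run. ALPHA and the alert threshold are illustrative assumptions.
public class QualitySignal {
    private static final double ALPHA = 0.2;  // weight given to the newest run
    private double failureRate = 0.0;         // learned estimate

    public void observeRun(int failed, int total) {
        double observed = total == 0 ? 0.0 : (double) failed / total;
        failureRate = ALPHA * observed + (1 - ALPHA) * failureRate;
    }

    /** A downstream decision, e.g. flagging a module for extra testing. */
    public boolean needsExtraScrutiny() {
        return failureRate > 0.10;
    }
}
```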
Real Examples of AI & ML in Software Testing
AI and ML play key roles in Parasoft’s Continuous Quality Testing Platform. They’re important areas of ongoing research and development at Parasoft. Our findings continue to bring new and exciting ways to integrate these technologies into our platform and optimize test automation across every step of the SDLC. Here are a few ways we’ve leveraged them.
Using AI to Improve the Adoption of Static Analysis in Software Testing
One of the roadblocks to the successful adoption of static analysis tools is managing a large number of warnings and dealing with false positives (warnings that are not real bugs) in the results. Software teams that run static analysis on a legacy or existing code base often struggle with the initial results and are put off by the experience enough to abandon the effort. Part of the reason teams get overwhelmed is the sheer number of standards, rules (checkers), recommendations, and metrics that modern static analysis tools support.
Software development teams have unique quality requirements. There are no one-size-fits-all recommendations for checkers or coding standards. Each team has their own definition of false positive, often meaning “don’t care” rather than “this is technically incorrect.” Parasoft’s Continuous Quality Testing Platform includes a centralized reporting and analytics solution called DTP that applies AI and machine learning to prioritize the findings reported by static analysis to improve the user experience and adoption of such tools.
Parasoft DTP quickly classifies each finding in a static analysis report as either something the team wants to see or something the team wants to suppress. It does this by having the team review a small number of findings, then constructing a classifier from the metadata associated with those findings.
The classifier is built from previous classifications of static analysis findings: both historical suppressions of irrelevant warnings and prior prioritization of meaningful findings that were fixed in the codebase.
The end results fall into two categories:
- Items of interest for the team to investigate.
- Items that can be suppressed.
This prioritization process greatly improves the user experience by directing developers to warnings that have the highest likelihood of being a real defect within their project.
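DTP's actual algorithm isn't public; the following is only a rough sketch of the general idea, where a classifier counts which metadata tokens (rule ID, file, severity, and so on) correlated with past "suppress" versus "investigate" decisions:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Rough sketch of the general idea, not DTP's actual algorithm: count how
// often each metadata token (rule id, file, severity...) appeared in findings
// the team suppressed vs. fixed, then score new findings against those counts.
public class FindingClassifier {
    private final Map<String, Integer> suppressVotes = new HashMap<>();
    private final Map<String, Integer> investigateVotes = new HashMap<>();

    /** Learn from one reviewed finding, described by its metadata tokens. */
    public void train(List<String> metadata, boolean suppressed) {
        Map<String, Integer> target = suppressed ? suppressVotes : investigateVotes;
        for (String token : metadata) target.merge(token, 1, Integer::sum);
    }

    /** Positive score leans "investigate"; negative leans "suppress". */
    public int score(List<String> metadata) {
        int s = 0;
        for (String token : metadata) {
            s += investigateVotes.getOrDefault(token, 0);
            s -= suppressVotes.getOrDefault(token, 0);
        }
        return s;
    }
}
```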
Development teams can optionally integrate Parasoft DTP with OpenAI or Azure OpenAI providers to further streamline the triage of static analysis findings for Java applications with CVE match analysis. DTP analyzes the static analysis violation and quantifies the similarity between the source code of the method containing the violation and the source code with known security vulnerabilities. Development teams can use CVE match analysis when assessing which violations to prioritize so that critical security issues are not inadvertently missed.
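The matching technique itself is proprietary. As a crude, hypothetical stand-in for "quantifying similarity" between two snippets, one could compare their token sets with Jaccard similarity:

```java
import java.util.HashSet;
import java.util.Set;

// Crude stand-in for code similarity, not Parasoft's CVE match analysis:
// Jaccard overlap between the token sets of two snippets, in [0, 1].
public class CodeSimilarity {
    public static double jaccard(String codeA, String codeB) {
        Set<String> a = tokens(codeA);
        Set<String> b = tokens(codeB);
        if (a.isEmpty() && b.isEmpty()) return 0.0;
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return (double) intersection.size() / union.size();
    }

    private static Set<String> tokens(String code) {
        Set<String> out = new HashSet<>();
        for (String t : code.split("\\W+")) {
            if (!t.isEmpty()) out.add(t);
        }
        return out;
    }
}
```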
We also implemented a hotspot detection engine together with an advanced AI-based model that assigns violations to the developers whose skills and experience best match them, learning from the violations they fixed in the past. With these innovations, organizations can immediately reduce the manual effort in adopting and using static analysis.
Using Generative AI to Fix Static Analysis Violations Faster
Parasoft incorporates generative AI into its Continuous Quality Testing Platform by integrating its C#, .NET, and Java static analysis solutions with OpenAI and Azure OpenAI providers. The optional integration enables development teams to more easily and quickly remediate static analysis findings through AI-generated fix recommendations in the IDE.
Generative AI code fixes for static analysis findings are particularly useful on new projects that must comply with industry-specific or security coding standards. Teams new to static analysis may not be familiar with the guidelines or rules behind those standards. When a rule or violation isn't properly understood, productivity suffers because developers must spend their time researching the violation and implementing a code fix. By using generative AI to rapidly create code fixes, developers can remediate violations quickly. As a result, they get more time to focus on new code development and on addressing other violations, elevating the overall quality, security, safety, and reliability of the software.
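To make the flow concrete, here is a generic sketch of asking an OpenAI chat model for a fix suggestion over HTTP. This is not Parasoft's integration; the model name and prompt wording are assumptions, and error handling and JSON parsing are omitted:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Generic sketch of prompting an LLM for a fix suggestion, not Parasoft's
// integration. Model name and prompt wording are illustrative assumptions.
public class FixSuggester {
    public static void main(String[] args) throws Exception {
        String code = "String q = \"SELECT * FROM users WHERE id=\" + id;";
        String prompt = ("Static analysis flagged a possible SQL injection. "
                + "Suggest a minimal fix for: " + code).replace("\"", "\\\"");
        String body = """
                {"model": "gpt-4o-mini",
                 "messages": [{"role": "user", "content": "%s"}]}
                """.formatted(prompt);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());  // JSON containing the suggested fix
    }
}
```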
Using Artificial Intelligence to Automate Unit Test Generation
Our Java developer productivity solution, Jtest, includes automated static analysis, unit testing, code coverage analysis, and traceability. It employs Parasoft’s own proprietary AI and the optional integrations with OpenAI and Azure OpenAI providers to streamline the creation of JUnit tests and drive code coverage.
Engineers can accelerate the creation of unit tests by leveraging the Eclipse or IntelliJ IDE plugin to generate new tests with Jtest’s AI-enhanced unit test assistant (UTA). Teams can build effective unit test suites by analyzing the existing levels of code coverage at the method, class, package, or project level, and leverage AI to bulk generate a suite of unit tests targeting uncovered lines of code. During test creation, our Java solution automatically generates mocks and assertions, providing teams with a suite of intelligent, effective test cases to run in regression testing.
As new code is developed, additional unit tests must be created to validate the new functionality. Jtest’s UTA will generate a new test case to cover the user-selected line of code and then provide recommendations on how to enhance the test case. Teams can easily and quickly stub and mock dependencies, add assertions, parameterize test cases, and clone or mutate existing test cases to drive higher levels of code coverage.
The optional integration with OpenAI and Azure OpenAI providers empowers developers with the ability to easily customize, extend, or refactor unit tests based on natural-language prompts that outline the developer’s specific requirements. Jtest’s integration with OpenAI will analyze the code and the existing unit test in conjunction with the inputted requirements and refactor the test case based on those specifications. This provides developers with great flexibility to customize test cases in any way required by their application and accelerates the process of creating effective and valuable test suites.
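The exact tests generated depend on the code under test, but the shape is familiar JUnit. A hypothetical result for a `PriceService` that depends on a `TaxProvider` (both invented here so the example is self-contained) might look like this, with the dependency mocked via Mockito:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical shape of a generated test. PriceService and TaxProvider are
// invented here so the example is self-contained; Jtest would target your
// real classes. The dependency is mocked to isolate the unit under test.
class PriceServiceTest {
    interface TaxProvider {
        double rateFor(String countryCode);
    }

    static class PriceService {
        private final TaxProvider taxProvider;
        PriceService(TaxProvider taxProvider) { this.taxProvider = taxProvider; }
        double total(double net, String countryCode) {
            return net * (1 + taxProvider.rateFor(countryCode));
        }
    }

    @Test
    void totalIncludesTax() {
        TaxProvider taxProvider = mock(TaxProvider.class);
        when(taxProvider.rateFor("DE")).thenReturn(0.19);  // stub the dependency

        PriceService service = new PriceService(taxProvider);

        assertEquals(119.0, service.total(100.0, "DE"), 0.001);
    }
}
```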
Using Artificial Intelligence to Automate Unit Test Generation & Parameterization
Consider Parasoft Jtest again: our software testing solution for Java developers that includes static analysis, unit testing, coverage, traceability, and more. Applying AI here, we released automatic test case generation, which helps developers fill in the gaps when starting from a sparse JUnit harness.
Parasoft Jtest's IDE plugin adds useful automation to the unit testing practice with easy one-click actions for creating, scaling, and maintaining unit tests. By using AI-enabled Jtest, users can achieve higher code coverage while significantly cutting the time and effort required to build a comprehensive and meaningful suite of JUnit test cases.
One way it does this is by making it easier to create stubs and mocks for isolating the code under test. The underlying AI enables Jtest to observe the unit under test to determine its dependencies on other classes. When instances of these dependencies are created, it suggests mocking them to the user to create more isolated tests.
Automatically creating the necessary mocks and stubs reduces the effort on one of the most time-consuming parts of test creation.
Parasoft Jtest also automatically detects code that isn't covered by existing test suites and traverses the control paths of the source code to figure out which parameters need to be passed into a method under test, and how stubs/mocks need to be initialized, to reach that code. With this AI enabled, Jtest can automatically generate new unit tests, applying modified parameters to increase the overall code coverage of the entire project.
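As a simplified illustration of the idea (invented code, not Jtest output): given a method with two branches, reaching full coverage means choosing parameter values that steer execution down each path.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Simplified illustration of parameter selection for branch coverage, not
// actual Jtest output. Shipping.cost() is an invented method with two
// branches; each test picks an input that drives one path.
class ShippingTest {
    static class Shipping {
        static double cost(double orderTotal) {
            if (orderTotal >= 50.0) return 0.0;  // free-shipping branch
            return 4.99;                         // paid-shipping branch
        }
    }

    @Test
    void freeShippingBranch() {
        assertEquals(0.0, Shipping.cost(50.0), 0.001);   // boundary value reaches branch 1
    }

    @Test
    void paidShippingBranch() {
        assertEquals(4.99, Shipping.cost(49.99), 0.001); // just below boundary reaches branch 2
    }
}
```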
Using AI & ML to Automate API Test Generation & Maintenance
Another good example, adding machine learning into the mix, is Parasoft SOAtest's Smart API Test Generator. It goes beyond record-and-playback testing, leveraging AI and machine learning to convert UI tests into complete, automated API test scenarios.
The Smart API Test Generator uses reasoning to understand the patterns and relationships in the different API calls made while exercising the UI. From that analysis, a series of API calls is constructed that represents the underlying interface calls made during the UI flow.
It then applies machine learning by observing what it can about the different API resources and storing them as a template in a proprietary data structure. This internal structure is updated by examining other test cases in the user's library to learn different types of behavior when exercising the APIs, for example, adding an assertion or a particular header at the right spot.
The goal of AI here is to create more advanced tests, not just repeat what the user was doing, as you get with simple record-and-playback testing. Here’s how the Smart API Test Generator works:
- Recognizes patterns inside the traffic.
- Creates a comprehensive data model of observed parameters.
- Generates and enables automated API tests by applying learned patterns from other API tests to help users create more advanced automated test scenarios.
The resulting automated API tests are more complete, reusable, scalable, and resilient to change.
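The generator's internals are proprietary, but the shape of its output is recognizable: a scenario in which one call's response feeds the next. Here's a hand-written sketch with invented endpoints that shows the kind of flow such a scenario encodes:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hand-written sketch of a chained API scenario with invented endpoints:
// create a resource, extract its id, then fetch it back and check the status.
public class OrderScenario {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpResponse<String> created = client.send(
                HttpRequest.newBuilder(URI.create("https://example.test/api/orders"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString("{\"item\": \"book\"}"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // Naive extraction of the id field; a real test framework parses JSON.
        String id = created.body().replaceAll("(?s).*\"id\"\\s*:\\s*\"?(\\w+)\"?.*", "$1");

        HttpResponse<String> fetched = client.send(
                HttpRequest.newBuilder(URI.create("https://example.test/api/orders/" + id))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Requires running with -ea; a test framework would use an assertion API.
        assert fetched.statusCode() == 200 : "order should be retrievable";
    }
}
```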
Using Generative AI for API Scenario Test Creation
Parasoft SOAtest can also be integrated with OpenAI or Azure OpenAI providers, which enables the use of generative AI technologies to streamline the creation of API scenario tests. Testers can input a service definition file into SOAtest and provide a natural-language prompt that outlines the specific requirements and business logic needed. The AI then analyzes the service definition file and generates a series of API scenario test cases mapped back to the use case described in the prompt.
This use of generative AI in API testing enables QA teams to increase the thoroughness of their testing efforts as it enables testers to codelessly and automatically generate advanced test scenarios.
Using Machine Learning to Self-Heal the Execution of Selenium Tests
Automatically validating the UI layer is another critical component of your testing strategy to ensure that the product is fully verified before going into production. The Selenium framework has been widely adopted for UI testing, but users still struggle with common Selenium testing challenges of maintainability and stability. This is where AI technologies and, particularly, machine learning, can help, providing self-healing at runtime to address the common maintainability problems associated with UI test execution.
We provide this functionality with Parasoft Selenic, which can “learn” about your internal data structure during your regular execution of Selenium tests. The Selenic engine monitors each run and captures detailed information about the web UI content of the application under test. It extracts DOM elements, their attributes, locators, and the like, and correlates them with actions performed by UI-driven tests. Selenic employs Parasoft’s proprietary data modeling approach, storing that information inside its AI engine. The model is updated continuously, analyzing historical execution of all tests to continue becoming “smarter.”
This is a critical time-saver in cases when UI elements of web pages are moved or modified significantly, causing tests to fail. With Selenic, AI heuristics used by the engine can “match” those changed elements with historical data represented by the model, and automatically generate “smart locators” that are resistant to changes to recover execution of Selenium tests at runtime. Information about these changes is automatically propagated through the model, and future generation of new locators is adjusted based on those changes.
In addition, Selenic can self-heal different types of "waiting conditions," addressing instabilities associated with the performance characteristics of systems under test. It also measures the time taken to run each test case on web pages and compares it to the historical average captured from previous runs. When the deviation exceeds a certain threshold, an alert is flagged in the report to notify the user of significant changes.
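Selenic's model and heuristics are proprietary, but the core fallback idea can be sketched in plain Selenium: try the recorded locator first, then alternates captured from earlier runs, and report which one matched.

```java
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Plain-Selenium sketch of the fallback idea behind self-healing locators,
// not Selenic's engine: try each candidate locator in order and report which
// one matched so the test can be updated later.
public class HealingLocator {
    public static WebElement find(WebDriver driver, List<By> candidates) {
        for (By locator : candidates) {
            try {
                WebElement element = driver.findElement(locator);
                System.out.println("Matched via: " + locator);
                return element;
            } catch (NoSuchElementException ignored) {
                // fall through to the next candidate
            }
        }
        throw new NoSuchElementException("No candidate locator matched");
    }
}
```

A call site would pass the recorded locator first, for example `find(driver, List.of(By.id("submit"), By.cssSelector("button[type=submit]")))`.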
Optimize Test Execution With Test Impact Analysis
Test impact analysis (TIA) tools assess the impact of changes made to production code. They help reveal test cases affected by code changes. The primary benefit of TIA is that it removes the need to run tests on your entire code base after modifications have been made. This saves time and costs while keeping your development process running efficiently.
By integrating TIA technology into CI/CD pipelines, you can optimize the execution of your automated tests and give developers faster feedback about the impact of changes on the quality of their project. Depending on the nature of your products and the types of tests to be executed, you can leverage Parasoft's AI-enhanced technology to optimize the execution of C#, .NET, and Java unit tests, Selenium web UI tests, API tests, or tests executed in third-party frameworks.
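Mechanically, the selection step of TIA boils down to intersecting a coverage map with a change set. Here's a minimal sketch, where the shape of the coverage map (test name to covered files) is an illustrative assumption:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal sketch of the selection step in test impact analysis: given a map
// from each test to the source files it covered on the last run, keep only
// the tests whose coverage intersects the current change set.
public class ImpactSelector {
    public static Set<String> impactedTests(Map<String, Set<String>> coverageByTest,
                                            Set<String> changedFiles) {
        Set<String> impacted = new HashSet<>();
        for (Map.Entry<String, Set<String>> entry : coverageByTest.entrySet()) {
            if (!Collections.disjoint(entry.getValue(), changedFiles)) {
                impacted.add(entry.getKey());  // this test exercises changed code
            }
        }
        return impacted;
    }
}
```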
Conclusion
The explosive growth of the software market suggests that more and more software will be released to solve problems in our daily work and lives. For that software to function well and reach the market quickly, teams need automation and artificial intelligence in software testing. This is where Parasoft's Continuous Quality Testing Platform comes in, providing AI-powered, ML-driven software testing solutions that build quality into the software development process to prevent, detect, and remediate defects early in the SDLC.
Learn more about accelerating test creation with AI.