What Is Artificial Intelligence in Software Testing?
A lot has been said about artificial intelligence and how it’s transformed how we do things. When it comes to software testing, what's the place of AI? This post highlights how AI helps achieve robust software testing.
The emergence of artificial intelligence (AI) continues to transform the technological landscape. Its application in several facets of software development continues to grow. One of the areas of software development where the adoption of AI can advance is software testing.
Software testing is crucial in ensuring the release of software products that meet both compliance standards and user demands for quality. However, with so many interpretations surrounding artificial intelligence, we’ll dive deep into what AI means in software testing.
- How does AI in the context of software test automation differ from its broader definition?
- What do we mean when we talk about AI and its sister term, machine learning?
- What are the benefits of using AI and machine learning to advance state-of-the-art API testing?
Let’s find out.
What Is AI & How Is It Changing the Dynamics of Software Testing?
Artificial intelligence is one of the most overloaded buzzwords in the digital marketplace. “AI” conjures up images of all-powerful supercomputers hell-bent on human destruction, voice-controlled assistants like Alexa or Siri, computer chess opponents, and self-driving cars.
Wikipedia defines AI research as “…the study of ‘intelligent agents’: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.” But that’s a little too abstract.
I like to think of AI as the ability of a computer program or machine to think (reason for itself) and learn (collect data and modify future behavior in a beneficial way).
It’s in this definition that we start to see something more meaningful in the context of what AI means for software development tools and technology.
More Software Releases Means More Software Testing
As the number of developers worldwide continues to surge, more software releases are expected to hit the market. A report by Statista projects that the global developer population will grow from 24.5 million in 2020 to 28.7 million by 2024.
That growth means we’ll continue to see more software launches in the coming years, and with it comes the need to automate software testing.
Software testing is the process of subjecting a software infrastructure to a series of functional and nonfunctional testing scenarios. It’s a process of evaluating software to ensure that it can do what it’s designed to do efficiently. When teams test software, they can discover and resolve runtime defects, scalability issues, security vulnerabilities, and more.
The software testing process is usually rigorous, hence the need for automation. However, for test automation to be truly efficient and seamless, it needs to incorporate AI.
AI in Software Test Automation
The use of AI in software development is still evolving. Its adoption in automated software testing remains less mature than in more advanced areas of work such as self-driving systems, voice-assisted control, machine translation, and robotics.
The application of AI in software testing tools is focused on making the software development life cycle (SDLC) easier. Through the application of reasoning, problem solving, and, in some cases, machine learning, AI can be used to help automate and reduce the amount of mundane and tedious tasks in development and testing.
You may wonder, “Don’t test automation tools do this already?” Test automation tools do already apply some AI, but they have limitations.
Where AI shines in software development is when it’s applied to remove those limitations, enabling software test automation tools to provide even more value to developers and testers. The value of AI comes from reducing the direct involvement of the developer or tester in the most mundane tasks. We still have a great need for human intelligence in applying business logic, strategic thinking, creative ideas, and the like.
For example, consider that most, if not all, test automation tools run tests for you and deliver results. Most don’t know which tests to run, so they run all of them or some predetermined set.
What if an AI-enabled bot could review the current state of test statuses, recent code changes, code coverage, and other metrics, and then decide which tests to run and run them for you?
Bringing in decision-making that’s based on changing data is an example of applying AI. Good news! Parasoft handles automated software testing at this level.
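To make this concrete, here’s a minimal sketch of that kind of decision-making over changing data. The data structures, names, and selection rules are illustrative assumptions, not Parasoft’s actual algorithm:

```python
# Hypothetical sketch: decide which tests to run from code changes,
# per-test coverage, and recent failures, instead of running everything.

def select_tests(changed_files, coverage_map, recent_failures):
    """coverage_map: test name -> set of source files the test executes."""
    impacted = {
        test for test, files in coverage_map.items()
        if files & set(changed_files)
    }
    # Re-run recently failing tests as well, to confirm fixes.
    return impacted | set(recent_failures)

coverage = {
    "test_login": {"auth.py", "session.py"},
    "test_report": {"report.py"},
    "test_export": {"report.py", "export.py"},
}
print(sorted(select_tests(["report.py"], coverage, ["test_login"])))
# → ['test_export', 'test_login', 'test_report']
```

The point is the shift in responsibility: the human defines the signals (coverage, failures, changes), and the tool decides, on each run, which subset of tests those signals implicate.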
How Machine Learning Enhances AI
So, what about machine learning?
Machine learning (ML) can augment AI by applying algorithms that allow the tool to improve automatically by collecting copious amounts of data produced by testing.
ML research is a subset of overall AI research with a focus on decision-making management based on previously observed data. This is an important aspect of AI overall, as intelligence requires modifying decision-making as learning improves. In software testing tools, though, machine learning isn’t always necessary. Sometimes an AI-enabled tool is best manually fine-tuned to suit the organization using the tool, and then the same logic and reasoning can be applied every time, regardless of the outcome.
In other cases, data collection is key to the decision-making process, and machine learning can be extremely valuable, requiring some data initially and then improving or adapting as more data is collected. For example, code coverage, static analysis results, test results, or other software metrics, over time, can inform the AI about the state of the software project.
Real Examples of AI & ML in Software Testing
AI and ML are important areas of ongoing research and development at Parasoft. Our findings continue to bring new and exciting ways to integrate these technologies into our products. Here are a few ways we’ve leveraged them.
Using Software Testing AI to Improve the Adoption of Static Analysis
One of the roadblocks to the successful adoption of static analysis tools is managing a large number of warnings and dealing with false positives (warnings that are not real bugs) in the results. Software teams analyzing a legacy or existing code base often struggle with their initial static analysis results and are put off enough by the experience to abandon further effort. Part of the reason teams feel overwhelmed is the sheer number of standards, rules (checkers), recommendations, and metrics possible with modern static analysis tools.
Software development teams have unique quality requirements. There are no one-size-fits-all recommendations for checkers or coding standards. Each team has its own definition of a false positive, which often means “don’t care” rather than “this is technically incorrect.” Parasoft’s solution is to apply AI and machine learning to prioritize the findings reported by static analysis, improving the user experience and adoption of such tools.
Parasoft uses a method that quickly classifies the findings in the output of a static analysis tool as either something the team wants to see or something the team wants to suppress. It does this by reviewing a small number of findings and constructing a classifier based on the metadata associated with them.
This classifier draws on previous classifications of static analysis findings: both historical suppressions of irrelevant warnings and prior prioritization of meaningful findings fixed in the codebase.
The end results are classified in two ways:
- Of interest for the team to investigate.
- Items that can be suppressed.
This greatly improves the user experience by directing developers to warnings that have the highest likelihood of applying to their project. We also implemented a hotspot detection engine together with an advanced AI-based model that assigns violations to the developers whose skills and experience best match, learning from violations they fixed in the past. With these innovations, organizations can immediately reduce manual effort in their adoption and use of static analysis.
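The classification idea above can be sketched as a toy metadata-based classifier. This is a deliberately simplified assumption for illustration (real tools use far richer metadata), not Parasoft’s implementation:

```python
# Illustrative sketch: label new static analysis findings using how a
# team triaged findings with similar metadata in the past.
from collections import Counter

def build_classifier(triaged):
    """triaged: list of (metadata, label) pairs, label 'fix' or 'suppress'.
    Metadata here is just (rule_id, module) — a stand-in for the richer
    attributes a real tool would collect per finding."""
    counts = {}
    for meta, label in triaged:
        counts.setdefault(meta, Counter())[label] += 1

    def classify(meta):
        c = counts.get(meta)
        if c is None:
            return "review"  # no history: leave this one to a human
        return c.most_common(1)[0][0]

    return classify

history = [
    (("NPE.CHECK", "core"), "fix"),
    (("NPE.CHECK", "core"), "fix"),
    (("NAMING", "legacy"), "suppress"),
]
classify = build_classifier(history)
print(classify(("NPE.CHECK", "core")))   # → fix
print(classify(("NAMING", "legacy")))    # → suppress
```

Even this naive version shows the payoff: after a small amount of human triage, the bulk of a large legacy scan can be pre-sorted into “investigate” and “suppress” buckets.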
Using Artificial Intelligence to Automate Unit Test Generation & Parameterization
The next example is in Parasoft Jtest, our software testing solution for Java developers that includes static analysis, unit testing, coverage and traceability, and so on. Applying AI here, we released automatic test case generation, which helps developers fill in the gaps when starting from a sparse JUnit harness.
Parasoft Jtest’s IDE plugin adds useful automation to the unit testing practice with easy one-click actions for creating, scaling, and maintaining unit tests. By using AI-enabled Jtest, users can achieve higher code coverage while significantly cutting the time and effort required to build a comprehensive, meaningful suite of JUnit test cases.
One way it does this is by making it easier to create stubs and mocks for isolating the code under test. The underlying AI enables Jtest to observe the unit under test to determine its dependencies on other classes. When instances of these dependencies are created, it suggests mocking them to the user to create more isolated tests.
Automatically creating the necessary mocks and stubs reduces the effort on one of the most time-consuming parts of test creation.
Parasoft Jtest also automatically detects code that isn’t covered by existing test suites and traverses the control path of the source code to figure out which parameters need to be passed into a method under test, and how stubs/mocks need to be initialized to reach that code. By enabling this AI, Jtest can automatically generate new unit tests, applying modified parameters to increase the overall code coverage of the entire project.
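For readers unfamiliar with mock-based isolation, here is a Python analogue of the kind of test the tool generates for Java. The class and method names are hypothetical, and this hand-written sketch only illustrates the pattern the automation produces:

```python
# Isolating the unit under test by mocking its dependency, in the style
# of an auto-generated isolated unit test.
from unittest.mock import Mock

class PaymentGateway:  # dependency the unit under test relies on
    def charge(self, amount):
        raise NotImplementedError("talks to a live service")

def checkout(gateway, amount):
    """Unit under test: returns True when the charge succeeds."""
    return gateway.charge(amount) == "ok"

# The dependency is replaced with a mock so the test exercises
# checkout() in isolation, without touching the live service.
gateway = Mock(spec=PaymentGateway)
gateway.charge.return_value = "ok"

assert checkout(gateway, 25) is True
gateway.charge.assert_called_once_with(25)
print("isolated test passed")
```

What the AI automates is the tedious part: spotting that `checkout` depends on `PaymentGateway`, proposing the mock, and choosing parameter values that drive execution into uncovered branches.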
Using AI & ML to Automate API Test Generation & Maintenance
Another good example, adding machine learning into the mix, is Parasoft SOAtest’s Smart API Test Generator. It goes beyond record-and-playback testing, leveraging AI and machine learning to convert UI tests into complete, automated API test scenarios.
The Smart API Test Generator uses reasoning to understand the patterns and relationships in the different API calls made while exercising the UI. From that analysis, a series of API calls is constructed that represents the underlying interface calls made during the UI flow.
It then applies machine learning by observing what it can about the different API resources and storing them as a template in a proprietary data structure. This internal structure is updated by examining other test cases in the user’s library to learn different behaviors when exercising the APIs, for example, adding an assertion or a particular header at the right spot.
The goal of AI here is to create more advanced tests, not just repeat what the user was doing, as you get with simple record-and-playback testing. Here’s how the Smart API Test Generator works:
- Recognizes patterns inside the traffic.
- Creates a comprehensive data model of observed parameters.
- Generates and enables automated API tests applying learned patterns to other API tests to enhance them and help users create more advanced automated test scenarios.
The resulting automated API tests are more complete, reusable, scalable, and resilient to change.
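The traffic-to-scenario step can be sketched in miniature. The data model and field names below are assumptions for illustration, not SOAtest’s proprietary structure:

```python
# Hedged sketch: collapse recorded UI traffic into an ordered,
# parameterized API test scenario that can be replayed with other data.

def build_scenario(recorded_calls):
    """recorded_calls: ordered HTTP calls captured while driving the UI."""
    scenario = []
    for call in recorded_calls:
        scenario.append({
            "method": call["method"],
            "path": call["path"],
            # Concrete values seen in the recording become named
            # parameters, so the scenario is reusable, not a replay.
            "params": {k: f"${{{k}}}" for k in call.get("params", {})},
        })
    return scenario

recording = [
    {"method": "POST", "path": "/login", "params": {"user": "alice"}},
    {"method": "GET", "path": "/cart", "params": {}},
]
print(build_scenario(recording))
```

Parameterizing the observed values is what separates a reusable scenario from a brittle recording: the same login-then-browse flow can now run against any user.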
Using Machine Learning to Self-Heal the Execution of Selenium Tests
Automatically validating the UI layer is another critical component of your testing strategy to ensure that the product is fully verified before going into production. The Selenium framework has been widely adopted for UI testing, but users still struggle with the common Selenium challenges of maintainability and stability. This is where AI technologies, particularly machine learning, can help by providing self-healing at runtime to address the common maintainability problems of UI test execution.
We provide this functionality with Parasoft Selenic, which can “learn” about your internal data structure during your regular execution of Selenium tests. The Selenic engine monitors each run and captures detailed information about the web UI content of the application under test. It extracts DOM elements, their attributes, locators, and the like, and correlates them with actions performed by UI-driven tests. Selenic employs Parasoft’s proprietary data modeling approach, storing that information inside its AI engine. The model is updated continuously, analyzing historical execution of all tests to continue becoming “smarter.”
This is a critical time-saver in cases when UI elements of web pages are moved or modified significantly, causing tests to fail. With Selenic, AI heuristics used by the engine can “match” those changed elements with historical data represented by the model, and automatically generate “smart locators” that are resistant to changes to recover execution of Selenium tests at runtime. Information about these changes is automatically propagated through the model, and future generation of new locators is adjusted based on those changes.
In addition, Selenic can self-heal different types of “waiting conditions,” addressing instabilities tied to the performance characteristics of systems under test. It also measures the time taken to run each test case on web pages and compares it to the historical average captured from previous runs. When the deviation exceeds a certain threshold, an alert is flagged in the report to notify the user about significant changes.
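The locator-healing idea described above can be sketched as attribute matching against a historical snapshot. The scoring rule here is a deliberately simple assumption; a real engine would weigh attributes, structure, and history far more carefully:

```python
# Illustrative self-healing sketch: when a stored locator no longer
# matches, pick the current page element whose attributes best resemble
# the snapshot recorded when the test last passed.

def heal_locator(snapshot, current_elements):
    """snapshot: attribute dict recorded on the last passing run.
    current_elements: attribute dicts scraped from the page now."""
    def score(el):
        shared = set(snapshot) & set(el)
        return sum(1 for k in shared if snapshot[k] == el[k])
    return max(current_elements, key=score)

old = {"id": "submit-btn", "text": "Submit", "tag": "button"}
page = [
    {"id": "cancel-btn", "text": "Cancel", "tag": "button"},
    {"id": "submit-button", "text": "Submit", "tag": "button"},  # id renamed
]
print(heal_locator(old, page))  # → the renamed Submit button
```

Here the `id` the test depended on was renamed, but the text and tag still match, so the healed locator lands on the right element and the run continues instead of failing.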
Optimize Test Execution With Test Impact Analysis
Test impact analysis (TIA) assesses the impact of changes made to production code. It helps to reveal test cases affected by code changes. The primary benefit of TIA is that it removes the need to run tests on your entire code base after modifications have been made. This saves time and costs while keeping your development process running efficiently.
You can benefit from TIA technology during the execution of manual tests or you can leverage the integration of TIA-based tools with CI/CD pipelines. This can optimize the run of your automated tests and provide faster feedback to developers about the impact of changes on the quality of their project. Depending on the nature of products and the type of tests to be executed, you can leverage Parasoft’s AI-enhanced technology to optimize the execution of .NET and C# static analysis tests, Java unit tests, Selenium web UI tests, and API tests.
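A rough sketch of the TIA payoff, with illustrative coverage and duration data (not output from any Parasoft tool), shows why skipping unaffected tests matters in a pipeline:

```python
# Hypothetical TIA sketch: run only tests whose covered files were
# touched by a change, and report the time saved versus a full run.

def impacted_tests(changed, coverage):
    """coverage: test name -> set of source files the test exercises."""
    return [t for t, files in coverage.items() if files & changed]

coverage = {
    "test_auth": {"auth.py"},
    "test_billing": {"billing.py", "tax.py"},
    "test_search": {"search.py"},
}
durations = {"test_auth": 12.0, "test_billing": 30.0, "test_search": 45.0}

to_run = impacted_tests({"tax.py"}, coverage)
saved = sum(durations.values()) - sum(durations[t] for t in to_run)
print(to_run)                    # → ['test_billing']
print(f"time saved: {saved}s")   # → time saved: 57.0s
```

The savings compound with every commit: a change touching one module triggers seconds of testing instead of the full suite, which is what makes per-change feedback in CI/CD practical.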
The explosive growth of the software market suggests that more software will continue to be released to solve everyday business problems. However, for software to function efficiently and reach the market as quickly as possible, automation and artificial intelligence are needed in software testing. This is where Parasoft comes in.
At Parasoft, we provide AI-powered, ML-driven software testing solutions that integrate quality into the software development process to prevent, detect, and remediate defects early in the SDLC.