Load and Performance Testing in a DevOps Delivery Pipeline
By Sergei Baranov
July 11, 2017
6 min read
Performance testing is increasingly becoming part of the continuous delivery pipeline in DevOps settings. Here, we talk about performance testing and the best ways to include load and performance testing in the delivery of applications.
In DevOps environments, it’s becoming a best practice to run performance tests as part of the continuous delivery pipeline, making performance testing an integral part of continuous application delivery.
More and more teams are realizing that a regression in performance can have as big an impact on application quality as a regression in functionality. So we bring our focus to performance testing and how best to integrate load and performance testing into application delivery.
Integrate Performance Tests Into the Continuous Delivery Pipeline
You can start integrating performance tests into the continuous delivery pipeline by adding selected performance tests to Jenkins, or a continuous integration tool of your choice, and having them run regularly.
Depending on your needs, you can run performance tests at one or more of the following points in the build/test infrastructure:
- After each build with a reduced set of performance “smoke” tests.
- Once a day with a more comprehensive set of performance tests.
- Once a week (for example, over the weekend) or based on infrastructure availability, with a set of long-running tests for endurance testing or high-volume load tests for stress testing.
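The scheduling scheme above can be sketched as a simple mapping from CI trigger to test batch. This is an illustrative sketch only, not a Parasoft API; the trigger names and `.cmd` script names are hypothetical placeholders:

```python
# Hypothetical sketch: map a CI trigger type to the batch of LoadTest
# .cmd scripts to run. All names here are illustrative placeholders.

PERF_TEST_BATCHES = {
    "post_build": ["smoke_perf.cmd"],                      # quick smoke tests after each build
    "nightly": ["smoke_perf.cmd", "daily_perf.cmd"],       # broader nightly coverage
    "weekend": ["endurance_perf.cmd", "stress_perf.cmd"],  # long-running / high-volume runs
}

def select_batches(trigger: str) -> list[str]:
    """Return the LoadTest batch scripts to run for a given CI trigger."""
    try:
        return PERF_TEST_BATCHES[trigger]
    except KeyError:
        raise ValueError(f"Unknown CI trigger: {trigger!r}")
```

In practice this mapping usually lives in the CI tool itself (for example, separate Jenkins jobs with different build triggers), but the principle is the same: lighter batches run more often, heavier batches run when time and infrastructure allow.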
This by itself, however, is not enough.
Manually analyzing load test reports can be time-consuming and may require special skills that not every developer possesses. Without the ability to automate load test report analysis, reviewing performance test results becomes a tedious time sink, and vital performance information may be overlooked. In such scenarios, you may be running performance tests continuously, but their benefit will be limited.
Automate the Collection & Analysis of Performance Test Results
To get the full benefit of continuous performance testing, you need to set up an effective mechanism to analyze performance test results. Parasoft LoadTest and its LoadTest Continuum (a module of Parasoft SOAtest) provide you with tools that help automate the collection and analysis of performance test results, and give you insights into the performance of your application.
How to Set Up Your Environment for Continuous Performance Test Execution
The following steps will help you set up your environment for continuous performance test execution with Parasoft LoadTest and LoadTest Continuum:
- Review and configure LoadTest project QoS metrics for automation.
- Deploy and configure LoadTest Continuum for load test report collection.
- Configure LoadTest projects into batches for execution.
- Start running LoadTest project batches as a part of continuous integration, and use LoadTest Continuum to regularly review and analyze performance test results.
I will go through these steps individually in more detail below.
Step 1 – Review and Configure QoS Metrics for Automation
Parasoft LoadTest Quality of Service (QoS) metrics are one of the key features for automating the analysis of performance test results. QoS metrics reduce large amounts of data in a load test report to a set of success/failure answers about your application performance. Parasoft LoadTest offers a rich set of QoS metrics that range from ready-to-use threshold metrics to custom-scripted metrics that allow you to use the LoadTest API for advanced load test data analysis.
To prepare your performance tests for automation, you need to review the QoS metrics in your LoadTest projects. Run a LoadTest project and examine the report: all success and failure criteria that you use to manually analyze a load test report should be represented as QoS metrics. Convert as many metrics as you can into “numeric” metrics. A numeric QoS metric not only returns a success/failure result, but also quantifies a key performance indicator for that metric. For instance, a metric that validates a CPU utilization threshold would also provide the actual CPU utilization value as a numeric metric.
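The idea behind a numeric QoS metric can be illustrated with a short sketch. This is not the Parasoft LoadTest API, just a hypothetical model of a metric that reports both a success/failure result and the measured value behind it:

```python
# Illustrative sketch of the "numeric" QoS metric concept (not the
# Parasoft LoadTest API): the metric yields a pass/fail verdict AND
# the measured key performance indicator, so the value can later be
# plotted over time.

from dataclasses import dataclass

@dataclass
class NumericMetricResult:
    name: str
    value: float      # measured value, e.g. average CPU utilization in %
    threshold: float  # maximum acceptable value
    passed: bool

def evaluate_cpu_metric(samples: list[float],
                        threshold: float = 80.0) -> NumericMetricResult:
    """Average the sampled CPU utilization and compare it to the threshold."""
    avg = sum(samples) / len(samples)
    return NumericMetricResult("CPU%", avg, threshold, avg <= threshold)
```

The key point is that the verdict alone ("passed"/"failed") is enough for automation, while the retained numeric value is what makes trend plots like Fig. 1 possible.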
Numeric metrics are widely used in LoadTest Continuum to plot metric performance over time:
Fig 1. Numeric metric results plotted in a LoadTest Continuum report.
Once you’ve configured the QoS metrics for your LoadTest projects, it is time to set up the LoadTest Continuum for performance data collection and analysis.
Step 2 – Deploy and Configure LoadTest Continuum
Deploy and configure the LoadTest Continuum ltc.war Web application archive (available in the SOAtest/LoadTest install directory starting with version 9.10.2), as described in the “LoadTest Continuum” section of the LoadTest documentation.
Step 3 – Configure LoadTest Projects Into Batches for Execution
Combine your LoadTest projects into .cmd scripts for batch execution. LoadTest .cmd scripts are how you can specify groups of projects that will make up different sets of performance tests, such as the “smoke” tests, daily tests, or weekend tests mentioned previously.
Configure the .cmd scripts to send report data to LoadTest Continuum as described in the “Sending Reports to LoadTest Continuum” section of the LoadTest documentation. Set up your continuous integration tool to run LoadTest .cmd scripts as a part of a build process or at regular intervals. For instance, in Jenkins you can run a LoadTest .cmd script using Execute Windows batch command build step as follows:
"%SOATEST_HOME%\lt.exe" -J-Xmx4096M -cmd -run "%WORKSPACE%\ltcontinuum.cmd"
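If your CI tool drives jobs through a scripting layer rather than a native batch step, the same invocation can be assembled programmatically. The sketch below is hypothetical glue code, not part of LoadTest; the paths mirror the Jenkins example above but are placeholders:

```python
# Hypothetical sketch: build the LoadTest command lines a CI job would
# execute for a batch of .cmd scripts. Paths are placeholders mirroring
# the Jenkins example (%SOATEST_HOME%, %WORKSPACE%).

import os

def build_loadtest_commands(cmd_scripts: list[str],
                            soatest_home: str,
                            workspace: str,
                            max_heap_mb: int = 4096) -> list[list[str]]:
    """Return one argv list per LoadTest batch script."""
    lt_exe = os.path.join(soatest_home, "lt.exe")
    return [
        [lt_exe, f"-J-Xmx{max_heap_mb}M", "-cmd", "-run",
         os.path.join(workspace, script)]
        for script in cmd_scripts
    ]
```

Each argv list could then be handed to `subprocess.run` (or your CI tool's equivalent), keeping quoting and path handling out of hand-written shell strings.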
Step 4 – Set Up a Dashboard in Parasoft DTP
Parasoft DTP contains reporting and analytics dashboards that enable you to monitor the health and progress of your software project with a variety of widgets and reports.
A Parasoft LoadTest Continuum DTP widget allows you to add the most recent LoadTest results summary to your DTP project dashboard, and offers a quick way to evaluate the state of the performance test results in your daily project state review routine.
The widget displays the number of total, passed, and failed tests and metrics for the most recent LoadTest runs. To view the results in more detail, click on the project link in the widget, and the LoadTest Continuum page will open in a new tab.
Fig 2. LoadTest Continuum widgets in a DTP dashboard.
To set up a LoadTest Continuum Custom HTML Widget in DTP, follow these steps:
- In the Parasoft DTP Report Center, create a new Dashboard or open an existing one.
- Press Add Widget. In the Add Widget dialog, select Custom -> Custom HTML Widget.
- Copy the content of the following file from the LoadTest Continuum installation into the HTML text area of the dialog: %TOMCAT_HOME%\webapps\ltc\dtp\ltc_dtp_widget.html
- Modify the HTML with your custom settings:
- Locate the getServerURL() function. Modify the return value with the host and port of your LoadTest Continuum installation.
- Locate the getProjectName() function. Modify the return value with the name of the project that you would like to track in the widget.
- Press Create.
Step 5 – Review & Analyze Performance Test Results
Parasoft LoadTest Continuum serves as both a collection point for your LoadTest reports and an analysis tool that organizes load test data from multiple runs. LoadTest Continuum arranges the data into a pyramid of information that allows you to review your performance test results at various levels of detail, from high-level daily summaries at the top, to QoS metric results in the middle, to detailed load test reports at the bottom:
Fig. 3. The LoadTest Continuum daily summary and test metrics view.
Consider the following workflow as an example of a regular (daily) test review:
- Start by checking the state of your most recent load test runs in the LoadTest Continuum DTP widgets.
- If you have not set up the LoadTest Continuum DTP widgets, start instead by checking the success/failure summaries of tests and metrics on the main LTC project page.
- For projects that have failures, follow the link to the LTC project page to examine the details.
- For failed tests, go through the following steps:
  - Open the test History view and check whether the test has been failing regularly or sporadically. The first case likely indicates a regression; the second, an instability.
  - Inspect the failed metrics of the test:
    - For a numeric metric, open the Metric history graph view and use the history graph for insights. For instance, if the test to which the metric belongs is unstable, small fluctuations in the metric graph usually indicate that the metric threshold needs adjustment, while large fluctuations indicate problems in the code or infrastructure.
    - Open the All graphs of this test link. Check the graphs of the other numeric metrics for the same test for fluctuations that did not cross their metric thresholds.
    - Do the same for the All graphs of this metric link to check whether similar metrics of other tests were affected. If so, this indicates a systemic issue with your application or infrastructure that is not limited to a single test (see Fig. 4).
- For a more in-depth analysis, open the HTML or binary load test reports of the failed test.
Fig. 4. Load Test Continuum All graphs of the same metric view show performance improvement of the CPU% metric across multiple tests.
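The History-view heuristic described above, distinguishing a likely regression from an instability, can be sketched as a small classifier. This is an illustrative example of the reasoning, not a Parasoft feature; the thresholds are arbitrary assumptions:

```python
# Illustrative sketch of the History-view heuristic (not a Parasoft
# feature): given a chronological pass/fail history for one test,
# distinguish a likely regression from intermittent instability.
# The window size and "3 consecutive failures" rule are assumptions.

def classify_history(results: list[bool], window: int = 5) -> str:
    """results: chronological pass/fail flags, True = passed."""
    recent = results[-window:]
    failures = recent.count(False)
    if failures == 0:
        return "healthy"
    if len(recent) >= 3 and all(not passed for passed in recent[-3:]):
        return "likely regression"   # failing consistently at the tail
    return "unstable"                # intermittent failures
```

A consistent run of recent failures suggests a code or infrastructure regression worth bisecting, while scattered failures point at flaky tests or thresholds that need adjustment.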
Integrating a performance testing process into the continuous delivery pipeline is essential to ensure software quality. To get the full benefit of this process, you need to set up an effective mechanism for automating the analysis of performance test results.
Be Continuous With Parasoft
You can get set up with Parasoft LoadTest and LoadTest Continuum inside of Parasoft SOAtest, which provides everything you need to automate the analysis of your test results. With sophisticated automation of functional and performance testing, you can deliver higher quality software.
- Be a Smarter Software Tester with These 5 Delicious Technology Combinations
- What is DTP and why is it so powerful?
- Learn more about Parasoft SOAtest and Parasoft LoadTest
- Get a free trial