From a 50,000-foot level, most static code analysis tools look the same. They analyze code without executing it and find defects, vulnerabilities, and other issues.
All the tools generate warnings and reports. They usually integrate into IDEs and CI/CD/build systems. If you want to successfully integrate any coding tool into your day-to-day development and get the most return for your investment, it pays to fully evaluate your options.
When trying to determine which static analysis tool will work best, many evaluators take a common approach for selecting a tool for their group or organization. They run each tool on the same code, compare the results, then choose the tool that reports the most violations out-of-the-box.
This isn’t really a product evaluation. It’s a bake-off. And the winner isn’t necessarily the best tool for establishing a sustainable, scalable static analysis process within the team or organization.
In fact, many of the key factors that make the difference between a successful static analysis adoption and yet another failed initiative are commonly overlooked during these bake-offs.
Before you begin the search for a tool, take a brutally honest look at your organization. Assess the following:
What your organization needs. To be successful with static analysis, it's important to understand which problems it's meant to solve.
Where your organization stands. It's also important to know how ready your organization is for the new tools and whether they fit in with your existing processes.
Choosing a static analysis tool for adoption and eventual integration in your development process requires effort and planning. It’s more than a technical review. The process requires an examination of how well the tool fits with your organization. It’s also important to evaluate the vendor that’s selling and supporting the tools.
Here are the criteria to consider during the technical evaluation of the candidate tools:
Read our whitepaper for more details about each:
Choosing the right vendor is as important as choosing the right tools. When an organization acquires a tool, it commits to a relationship with that vendor.
Behind most successful tool deployments, there’s a vendor dedicated to helping the organization achieve business objectives, address the challenges that surface, and drive adoption.
It’s important to consider several layers of vendor qualification and assessment across the span of the evaluation process. At this point, consider the following:
It’s also important to understand the vendor’s reputation in the market. Answer these questions:
A common question from prospective customers is: How many checkers does your product have?
The question implies that the quality of a tool depends on the number of different errors it covers. This is a poor measure for any tool, particularly static analysis tools.
Users of static analysis tools should really be concerned about how well each tool covers different error types, coding standards, and depth of analysis. A common example of this is each vendor’s claim of CWE Top 25 or OWASP Top 10 or MISRA C/C++ coverage by their tool.
It's not uncommon to see vendors claim 100% coverage of popular coding standards, a claim that's often misleading. Instead of being concerned about the number of checkers or rules, the real question should be: How well does a tool cover the types of coding issues you are concerned about?
Although coding standards like MISRA have roots in automotive, their adoption is spreading across other safety-critical domains. Along with SEI CERT C, these standards are either required by the marketplace or used to de-risk software development. Regardless of the use case, they are inevitably used to evaluate static analysis tools.
However, claims of coverage for each standard are open to interpretation because the standards don’t precisely define how a tool claims coverage. There’s value in diving into particular capabilities that may be important to your use case. If your project needs MISRA C, for example, each tool’s capability should be looked at in detail.
Consider the following evaluation of various open source and commercial solutions' coverage of the MISRA and CERT C standards.
Open source solutions show poor coverage, which isn’t surprising because their intent was never to follow such standards. However, the various commercial tools, which often claim support for these standards, aren’t really delivering. The real evaluation criterion that matters here is coverage of the standard—not the number of checkers needed to support the standard.
However, when using a test suite to measure coverage against a standard, you also need to consider the coverage of the test suite itself. The Juliet CWE Top 25 (2011) Coverage image below lists the common weakness enumeration (CWE) IDs and whether they’re covered by any tests in the Juliet C/C++ and Java test suites. You can clearly see that the test suite does not fully cover the important CWEs (Top 25)—this is common across many test suites.
An obvious question arises about the use of open source tools for a static analysis solution. There are a few key issues with FOSS to keep in mind. An evaluation needs to account for the cost of important features, services, and support that FOSS tools lack.
Details about costs and benefits of FOSS, in general, are available here, including issues like support, project activity and longevity, and scalability. If industry standards are important and external audits are part of your business, then FOSS solutions might not be an option.
When evaluating the results of each pilot project, the evaluation and final decision making should boil down to answering the following key questions:
Will the team really adopt it and use it? The best tool in the world won't deliver any value if it's not deployable, if developers won't use it, or if it's too much of a disruption to the project's progress. Judging how well a tool will be adopted requires a comprehensive evaluation of not only the tools, checkers, and integrations, but also the vendor, their support, services, and training.
Will it address the problems the organization and the team are trying to solve? Deployment of new technologies requires a focus on the specific problems you're trying to solve, rather than just expecting that static analysis will address your issues.
Additionally, expectations for the new technology should be realistic. Quantify success and ROI, and determine ahead of time how success will be measured: reduced lost time, fewer missed releases, or fewer field support cases.
Is this a long-term solution? Evaluations are time-consuming and require team commitment. Full deployments require more time and commitment. Settling for a tool that’s “good enough for now” might save money in the short term but prove extremely costly in the long term.
Static analysis tool evaluations often end up as a bake-off where each tool is tested on a common piece of code and evaluated on the results. Although this is useful and technical evaluation is important, evaluators need to look beyond these results to the bigger picture and longer timeline.
Evaluators need to consider how well tools manage results, including easy-to-use visualization and reporting. Teams should clearly understand how each tool supports claims made in areas such as coding standards, for example.
“MISRA”, “MISRA C” and the triangle logo are registered trademarks of The MISRA Consortium Limited. ©The MISRA Consortium Limited, 2021. All rights reserved.
Arthur has been involved in software security and test automation at Parasoft for over 25 years, helping research new methods and techniques (including 5 patents) while helping clients improve their software practices.