Static analysis, or static application security testing (SAST), tools are a powerful way to discover defects in your codebase at the earliest stage of the development process. However, the tools used to perform that testing are blunt instruments.
SAST tools provide volumes of data that are Straight Out Of Tool. I think of it as SOOT.
The findings require an audit or review, and too frequently manual triage by a person. Even well-intentioned humans can't keep up, and they tend to get overwhelmed.
Because of the sheer amount of information the tool provides, engineers end up missing some of the most important vulnerabilities it finds because those flaws are hidden in all the noise (or SOOT).
Without any extra process, developers are handed a pile of static analysis findings with no idea where to focus their attention. The number of violations found is so vast that ignoring the problems can feel just as useful as repairing them.
When sifting through this kind of SOOT data, you’re trying to find the meaningful pieces of information — the diamonds that you can process and refine. But that only accounts for maybe 10 percent of your data. Trouble comes in when you try to reduce the amount of noise, also known as the false-positive ratio. To eliminate false readings, you need a tool that you can easily tune and configure for your specific environment while lowering the reliance on human intelligence and oversight.
False positives can occur for a variety of reasons. To learn more about false positives and the chaos they can create, read False Positives in Static Code Analysis.
With any SAST tool, you start by choosing the right set of checkers and configuring them to work correctly with various kinds of codebases (including legacy code), along with assigning severity and categorization to the violations they report.
The right set is based on security guidelines, applicable regulations, and concerns like problems you're seeing in the field or expect to occur. Configuration that accounts for your frameworks, context, legacy code, and so on then helps produce more useful results. These checkers typically carry a basic default severity for simple prioritization.
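To make the idea concrete, here is a minimal sketch of what selecting checkers and assigning severities might look like, expressed as plain Python data. The rule IDs, severity labels, and standard names are illustrative assumptions, not the configuration format of any specific tool.

```python
# Hypothetical checker configuration as plain Python data.
# Rule IDs, severities, and standards are illustrative, not from any real tool.
CHECKER_CONFIG = {
    # Enable checkers tied to the standards you must follow.
    "enabled_standards": ["CWE Top 25", "OWASP Top 10"],
    # Per-rule overrides: tune defaults for your codebase and frameworks.
    "rules": {
        "NULL_DEREF":    {"enabled": True,  "severity": "high"},
        "SQL_INJECTION": {"enabled": True,  "severity": "critical"},
        # Suppress a style rule that fires constantly on legacy code.
        "LEGACY_NAMING": {"enabled": False, "severity": "low"},
    },
}

def active_rules(config):
    """Return the rule IDs that will actually run."""
    return sorted(r for r, opts in config["rules"].items() if opts["enabled"])

print(active_rules(CHECKER_CONFIG))  # ['NULL_DEREF', 'SQL_INJECTION']
```

The point of the sketch is that tuning happens up front: disabling a noisy legacy-style rule here removes an entire class of SOOT before anyone has to triage it.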
With the context of your local code, builds, coding styles, and frameworks, you can customize the rules and configurations in your system to define the values and thresholds a static analysis tool will report back on. Parasoft goes beyond that by using risk models to refine the areas you should focus on.
Risk models provide an objective way to assess a code defect's exploitability, prevalence, and detectability, as well as the kind of impact it might have on the application. The result is a broader context that ensures you prioritize the vulnerabilities that are easy to find and exploit. By combining severity with risk models, Parasoft tools can determine how serious a problem really is while providing recommended actions right out of the box.
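The combination of severity and risk factors can be sketched as a simple scoring function, loosely modeled on the OWASP risk-rating idea of likelihood times impact. The factor names, 1-to-3 ratings, and weighting below are assumptions for illustration, not Parasoft's actual model.

```python
# Minimal sketch: combine checker severity with risk-model factors,
# loosely following the OWASP risk-rating idea (likelihood x impact).
# Factor names and weights are illustrative assumptions.

def risk_score(severity, exploitability, prevalence, detectability, impact):
    """Each argument is rated 1 (low) to 3 (high)."""
    likelihood = (exploitability + prevalence + detectability) / 3.0
    return severity * likelihood * impact

findings = [
    ("SQL injection in login handler", risk_score(3, 3, 2, 3, 3)),
    ("Unused variable in test helper", risk_score(1, 1, 3, 3, 1)),
]
# Triage highest risk first.
findings.sort(key=lambda f: f[1], reverse=True)
for name, score in findings:
    print(f"{score:5.1f}  {name}")
```

Even this toy version shows the payoff: an easy-to-find, easy-to-exploit injection flaw scores an order of magnitude above a cosmetic issue, so the ordering of the report already encodes where to start.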
With Parasoft static code analysis tools, you get a solution that mines the mountain of static analysis results to give you the Hope Diamond-sized jewels you can act on. By taking the concept of severity in the context of the associated risks, Parasoft tools can apply data from industry-standard security models like CERT, CWE, and OWASP and bring it directly into the reporting and analytics dashboard.
By taking multiple risk models into account, we're able to infuse context into your static analysis results, allowing you to focus on the big diamonds without manually filtering out all the soot or worrying about what you might have missed (false negatives).
By leveraging artificial intelligence (AI) and machine learning (ML) technologies, Parasoft static analysis tools can identify hotspots and intersections between all of the found violations, so you can focus your effort on the part of the codebase that is the root cause of many other issues. Better yet, ML monitors and learns from the behavior of your own development teams to differentiate between what's important and what's not.
Training your AI model on the historic behavior of the development team provides a multi-dimensional analysis of the findings, while ML clusters data to identify correlated, related, or similar violations. Combined, the two technologies give you something better: something that learns which false-positive results to ignore and which true positives to highlight, shrinking the mountain of information down to a few highly valuable diamonds.
For example, static analysis can reveal thousands of violations in a typical codebase, and though you might be able to identify hundreds of defects to address, you won't be able to fix everything in the time you have. With AI and ML finding violation hotspots, you can fix multiple defects simultaneously by identifying the single piece of code that's causing them all.
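The hotspot idea can be illustrated with a deliberately simple stand-in: group violations by the file they trace back to and rank locations by how many findings they account for. Real ML-based clustering considers far more dimensions; the file names and rule IDs here are made up for the sketch.

```python
# Toy stand-in for hotspot detection: group violations by the file they
# trace back to, then rank locations by violation count. Real ML clustering
# is far more sophisticated; this only shows the shape of the idea.
from collections import Counter

violations = [
    {"rule": "NULL_DEREF",     "file": "parser.c"},
    {"rule": "BUFFER_OVERRUN", "file": "parser.c"},
    {"rule": "SQL_INJECTION",  "file": "db.c"},
    {"rule": "UNINIT_VAR",     "file": "parser.c"},
]

hotspots = Counter(v["file"] for v in violations).most_common()
print(hotspots)  # [('parser.c', 3), ('db.c', 1)]
```

In this miniature example, one file accounts for three of the four findings, so fixing the flawed code there resolves most of the report in one pass.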
Training people to use static analysis tools is often seen as a hurdle. Getting the most benefit takes a deep understanding of a particular programming language. That's why Parasoft static analysis solutions are built with integrated training, education, and certification programs to quickly get your developers up to speed so they can minimize reported false positives and focus their efforts on the work that matters: writing code rather than sifting through warnings.
Reducing the number of irrelevant results increases adoption of the tool. It's simply a matter of showing your team that they'll get the information they need without having to dig for it. If you give developers three things to fix that are clearly high priority and real, you get much better adoption than if you give them 300 violations with only 30 worthwhile defects to address. And if their hands come out of it clean, they'll be happy to use the tool again and again. Instead of an annoying process imposed from above, it becomes a trusted advisor.
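The "three clearly worthwhile fixes instead of 300 raw violations" idea is essentially a triage filter, which a short sketch can make concrete. The field names (`priority`, `likely_real`) and the cutoff of three are assumptions for illustration only.

```python
# Illustrative triage filter: out of a pile of findings, surface only those
# that are both high priority and judged real (not a likely false positive).
# Field names and the cutoff are assumptions for this sketch.

def worth_fixing(findings, max_items=3):
    real = [f for f in findings if f["priority"] == "high" and f["likely_real"]]
    return real[:max_items]

# 297 low-priority noise findings plus 3 genuine high-priority defects.
findings = (
    [{"id": i, "priority": "low", "likely_real": False} for i in range(297)]
    + [{"id": 900 + i, "priority": "high", "likely_real": True} for i in range(3)]
)
print([f["id"] for f in worth_fixing(findings)])  # [900, 901, 902]
```

The developer's view shrinks from 300 items to 3, which is exactly the experience that builds trust in the tool.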
No matter how much automation there is in your static analysis, there's always going to be an element of manual triage. The question is how deep you have to dig before you find anything of value. But with tools that include risk metadata and are equipped with AI and ML to make finding and fixing defects far more effective, you can quickly address violations at the beginning of your software development lifecycle to build safe, secure software.
Arthur has been involved in software security and test automation at Parasoft for over 25 years, helping research new methods and techniques (including 5 patents) while helping clients improve their software practices.