The potential for real synergy between SAST and DAST comes from the two tools supporting each other in a way that drives to the heart of the Secure-by-Design application security methodology. So it’s not really SAST vs DAST, but rather DAST-informed SAST. How does this work?
Let’s start with the basics of these two application security testing methodologies.
SAST is Static Application Security Testing, i.e., analyzing an application without running it. There are a variety of ways to do this, from human review to metrics analysis to pattern analysis to data flow analysis. This is considered white-box testing. Most commonly, SAST users are concerned with data flow analysis because it enables them to look for security flaws like tainted data before the application is complete.
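To make the idea of tainted data concrete, here’s a minimal Python sketch (the function names and the web-form scenario are my own illustration, not taken from any particular tool) showing the kind of source-to-sink path a data flow checker traces without ever running the program:

```python
import subprocess


def build_ping_command(host: str) -> str:
    # SOURCE: "host" is tainted -- it arrives from an untrusted caller,
    # e.g. an HTTP form field.
    # PROPAGATION: the tainted value flows into the command string unmodified.
    return "ping -c 1 " + host


def ping(host: str) -> int:
    # SINK: the tainted string reaches the shell (CWE-78, OS command
    # injection). A data flow checker reports this source-to-sink path
    # statically, before the application ever runs.
    return subprocess.call(build_ping_command(host), shell=True)
```

A value like `"example.com; rm -rf /"` survives intact all the way to the shell, which is exactly the flow the analysis flags.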
In the early days of static analysis, there was a lot of emphasis on not just finding bugs, but finding suspicious or risky code constructs (see Scott Meyers’s Effective C++) as well as enforcing software engineering standards. In the security world, SAST has come to mostly mean flow analysis. Essentially, SAST is being used to find vulnerabilities, similar to DAST, but earlier in the SDLC.
Earlier testing is better because it costs much less, so the main advantage of SAST is that it can be done early – long before the complete application or system is ready. With SAST, you have deep internal knowledge of the code, so you know exactly what code is involved in a problem.
On the downside, with SAST, your tests are not run against a real system, and tools must synthesize data in order to try to drive coverage of a function or data path. Because of this, SAST tools risk returning a false-positive report, meaning they can tell you that a piece of code has a vulnerability when it is actually safe. Additionally, SAST tools are usually specific to a particular language. This makes them expensive for organizations to build and maintain (primarily a problem for tool vendors) and means you need tools for each language used in your applications.
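As an illustration of how synthesized data leads to false positives, consider this hypothetical Python sketch (the function and scenario are my own, for illustration only). Both uses of `escaped` are guarded by the same condition, so the dangerous path a tool might report can never actually execute:

```python
import html


def render(user_input: str, is_trusted: bool) -> str:
    # Both branches below are guarded by the same condition, so "escaped"
    # can never be None when it is used. A path-insensitive analysis that
    # synthesizes the infeasible combination "escaped is None AND the
    # second branch is taken" would report a flaw here: a false positive.
    escaped = None
    if not is_trusted:
        escaped = html.escape(user_input)
    if not is_trusted:
        return "<p>" + escaped + "</p>"
    return "<p>" + user_input + "</p>"
```

At runtime the two conditions always agree, which is exactly the correlation some static tools fail to model.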
DAST is Dynamic Application Security Testing. This means testing a working application (or device), usually through its inputs and interfaces. Often this is black-box testing, in the sense that you’re using the application without looking deeply into its internals (source code).
The biggest advantage of DAST is that these are obviously going to be realistic tests that take into account the complete application and/or system from end-to-end. Secondly, DAST testing doesn’t depend on deep knowledge of code and the tools don’t require specific support for each language.
The big downside of DAST, on the other hand, is that it doesn’t include deep knowledge of your code. This means that when you find a problem, it can take some real time and effort to narrow down exactly which underlying code is causing the problem.
In addition, DAST is a “test” technology, meaning it happens after design and coding. So it’s a very good way to verify that an application is secure, but if it’s the primary way being used to secure the software, then you’re really trying to test security into your application, which is a Sisyphean task. You can no more test security into an application than you can test quality into an application — that’s why new regulations like GDPR and upcoming FDA guidelines are relying on the Secure-by-Design methodology.
Now that you know the difference between DAST vs SAST, and a few of their pros and cons, it’s time to see how you can make them work together. But why would you want to do this in the first place?
Having a strong SAST strategy that incorporates early-detection checkers for weaknesses catalogued in CWE, as well as secure coding standards like CERT, is the most complete way to secure an application and stop having the same security problems over and over again. But to make SAST complement DAST, we can connect the two, informing our SAST activities with information gained from DAST.
To better understand how it works, I like to think of software like an assembly line and start at the end of the line, using a 3-step improvement process for security. Phase 1 is better than nothing, but it’s nowhere near as good as Phase 3.
The first phase of application security is all DAST. We take the final application build before release and pound away at it, trying to break into it any way we can – this is DAST. If we find something, we evaluate how nasty it is and fix it when we can, releasing when we must. There’s a huge topic in itself around this issue (releasing software with known weaknesses and vulnerabilities), but I’ll leave that for another day.
Because this testing comes at the end, there is always time pressure, as well as extra difficulty in finding and remediating the underlying issue, but it’s definitely better to be doing this testing than not doing it at all, so it’s a good start.
The second phase on the application security journey to improvement adds SAST, to address that late-in-the-cycle problem. How can we start security testing before the application is ready? SAST is our obvious answer. SAST checkers can run as soon as we have code. Data flow checkers in SAST can usually be directly correlated with the kinds of issues that DAST finds, so it’s easy to know what to look for and what it means when SAST finds a weakness.
This is a good next step, because we not only have more time to remediate, but also testing is closer to the source, so the time it takes to figure out what went wrong is much shorter. Our SAST is now taking the work of our DAST and doing it earlier.
Data flow analysis is still just testing security in, only earlier, so how do we get to the next level and combine SAST with DAST so that they complement each other? The third phase is where we actually realize the value of using both tools together.
To move past SAST vs DAST into a fully complementary situation, we can take the results from DAST to inform our SAST, adjusting our static analysis rule configurations and telling us what kinds of security weaknesses we need to be looking for. Used this way, DAST enables SAST to tell us everything we need to know about where the security vulnerabilities are coming from, how we can mitigate them, and how we can code in such a way that they don’t happen.
So how does this work? First, we need to perform root-cause analysis using the results from DAST. For example, with SQL injection, we need to make sure that data is sanitized as it comes in, so we don’t have to rely on chasing data through myriad paths to see if it can escape the cleansing. We also need to look at SAST standards like those in CERT so that we can both avoid constructs that might work but aren’t secure and enforce good behaviors that will harden our application, even though they might not be necessary in normal (insecure) programming. Proper SAST rules prevent the problems found with DAST, and we keep learning from DAST about how to configure and tune our SAST.
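As a sketch of what fixing the root cause can look like in code (using Python’s standard sqlite3 module purely for illustration; the table and function names are my own), a parameterized query neutralizes SQL injection at the input boundary, so there are no data paths left to chase:

```python
import sqlite3


def find_user(conn: sqlite3.Connection, username: str) -> list:
    # Parameterized query: the driver binds "username" as data, so tainted
    # input can never change the structure of the SQL statement. There is
    # no sanitization logic for a data flow checker to have to verify.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()


# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
```

A legitimate lookup succeeds, while a classic injection payload such as `"alice' OR '1'='1"` is treated as an ordinary (non-matching) name rather than as SQL.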
By using SAST and DAST together, you end up with what I like to think of as a fail-safe mentality. So for example, without secure-by-design, before GDPR, we stored all user data without encryption, then had discussions about what particular data was worthy of extra protection, like passwords or social security numbers. In a secure-by-design, fail-safe environment, we take the opposite approach and encrypt everything, then have a discussion about what is safe to not encrypt. That way, the default behavior is safe or “fail-safe,” and you are successfully maximizing your SAST and DAST.
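Here’s a minimal sketch of that fail-safe default in Python (the field names and allow-list are hypothetical, and base64 merely stands in for a real cipher so the example stays self-contained; real code would use a vetted cryptography library). Everything is encrypted unless it appears on an explicit, reviewed allow-list:

```python
import base64

# Fail-safe default: fields are encrypted unless explicitly allow-listed.
# A field nobody thought about gets the safe treatment automatically.
PLAINTEXT_ALLOWED = {"display_name"}  # deliberate, reviewed exceptions only


def _encrypt(value: str) -> str:
    # Placeholder transformation, NOT real cryptography: it only keeps the
    # sketch runnable. Substitute an authenticated cipher in real code.
    return base64.b64encode(value.encode()).decode()


def store_record(record: dict) -> dict:
    # The default branch is the safe one; plaintext is the exception.
    return {
        key: value if key in PLAINTEXT_ALLOWED else _encrypt(value)
        for key, value in record.items()
    }
```

With this shape, the discussion becomes “what may be stored in the clear?” instead of “what deserves protection?” – the inverse of the pre-GDPR default described above.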
So be careful! SAST checkers that do DAST-like testing are the ones that catch the interest of users and analysts, but the bigger value is from the boring standards-based checkers that enforce proper secure behavior. These checkers move you from late testing to early detection, and all the way to actual preventative coding standards that harden your application. SAST can complement DAST by providing early mitigation and allowing DAST to be used primarily for verifying that the application is secure, rather than trying to break the application.
Arthur has been involved in software security and test automation at Parasoft for over 25 years, helping research new methods and techniques (including 5 patents) while helping clients improve their software practices.