Is your development team overwhelmed by the mounting number of violations from your static analysis tool? Has the high level of noise being generated by your current static analysis configuration desensitized the team to all alerts — including those for issues that you consider critical?
Here are 10 ways to freshen up your existing static analysis implementation—no matter what static analysis tool you’re using.
Checking a lot of rules is not the secret to achieving the best ROI with static analysis. In fact, in many cases, the reverse is true. Static analysis actually delivers better results if you focus on a minimal-but-meaningful set of rules.
When you perform static analysis, it’s like you’re having an experienced developer stand over the shoulder of an inexperienced developer and give him tips as he writes code. If the experienced developer is constantly harping on nitpicky issues in every few lines of code, the less experienced developer will soon become overwhelmed and start filtering out all advice, both good and bad. However, if the experienced developer focuses on one or two issues that he knows are likely to cause serious problems, the less experienced developer is much more likely to remember what advice was given, start writing better code, and actually appreciate receiving this kind of feedback.
It’s the same for static analysis. If you keep it manageable and meaningful, you’ll end up teaching your developers more and having them resent the process much less. Would you rather have a small set of rules that are followed or a large set that is not? If you don’t truly expect the developers to clean violations of a rule as soon as they are reported, you might want to seriously consider disabling that rule.
If a particular rule is being violated repeatedly, now is a good opportunity to re-evaluate whether you really want to continue checking that rule. An excessive number of violations indicates that the developers are not writing code in the way that the rule requires. Convincing them to change their coding habits could meet a fair amount of resistance.
How can you determine if pressing the issue will be worth the effort? First, try to remember why you started checking for that problem in the first place. Did you select it because it seemed like a good way to address issues you’re experiencing in the field? As part of your regulatory compliance efforts? Or simply because it was enabled by default by the vendor? Vendors typically provide the reference for each rule in their rule descriptions. Reading these descriptions can help you determine if the rule is really a good match for your projects and goals.
Next, see if there’s an alternative way to achieve the desired result. Is there an alternative rule that’s more specific? Is there a way to fine-tune the rule parameters so that it’s not firing so often? (More on this in tip #6.) You might even consider writing your own rule that will be a better fit, or have the vendor create a custom rule for you.
If you’re still interested in checking this rule after re-examining its benefits and exploring its alternatives, get some development feedback on what would be involved in following this rule. You can then use this feedback to determine if it’s truly worth it to require developers to follow this rule. If it looks like a lot of work for little benefit, go ahead and disable the rule.
In some cases, you might be committed to following a rule, but want to allow exemptions under certain circumstances. For example, maybe you have a rule that requires some extra level of validation to be performed in the code. Assume you have a certain method with performance-critical code that is called hundreds of times a minute—and you’ve already verified that an appropriate level of validation is performed before this particular method is called. Or, assume that flow-based analysis is warning you about a serious problem in a path that you are 100% certain cannot be taken in the integrated application. This is where suppressions come in handy.
Suppressions are ideal for situations when you want to check for something, but you don’t care about reported problems under exceptional situations. With suppressions, you can continue checking for a critical issue without receiving repeated messages about your intentional rule violations. If you do not want to receive error messages for any violations of a specific rule, we recommend that you disable the rule altogether (see point 1).
You can typically define suppressions from the static analysis tool GUI, a configuration file, or the source code itself. Defining suppressions in the source code keeps the exemption, and the reason for it, visible right next to the code it applies to, and lets the suppression travel with the code through version control.
You can typically suppress violations of a specific rule, a number of rules, or all violations in a specific category. You could also exempt certain sections of code from all static analysis (more on this in the following point).
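The exact mechanics vary by tool, but a sketch in Java might look like this. The @SuppressWarnings annotation is standard Java; the suppression comment syntax and the rule ID PERF.VALIDATE are hypothetical stand-ins for whatever your tool actually uses:

```java
// Sketch of in-source suppressions. @SuppressWarnings is standard Java;
// the comment-based suppression and its rule ID are hypothetical examples
// of the tool-specific syntax most analyzers provide.
public class WidgetCache {

    // Suppress a single warning category for this method only,
    // rather than disabling the check project-wide.
    @SuppressWarnings("unchecked")
    <T> T fromCache(Object raw) {
        return (T) raw; // intentional unchecked cast, reviewed and accepted
    }

    void hotPath(int[] data) {
        // Hypothetical tool-specific comment: exempts this line from rule
        // PERF.VALIDATE because validation already happened in the caller.
        process(data); // suppress PERF.VALIDATE: validated by caller
    }

    private void process(int[] data) { /* ... */ }
}
```

Scoping the suppression to a single method or line, with a reason recorded beside it, keeps the rest of the rule's coverage intact.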
Sometimes it just doesn’t make sense to run static analysis on certain files, such as automatically generated files or legacy files that you don’t plan on touching. In these cases, you should prevent these files from being analyzed. This is yet another way to ensure that your results aren’t cluttered with violations you’re not planning on fixing.
There are a few ways to do this. You could set up path filters to exclude files you don’t want to check, or to include only the ones you do want to check. Or, you could configure your tool to skip files that contain a certain comment, such as a comment indicating automatically generated code.
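Both approaches amount to a simple predicate over candidate files. The sketch below illustrates the idea; the "generated" directory name and the "@generated" marker comment are assumptions—real tools let you configure these patterns:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch: decide whether a file should be excluded from analysis, either
// because its path contains an excluded directory or because its first few
// lines carry a generator marker comment. The "generated" directory name
// and "@generated" marker are illustrative assumptions.
public class AnalysisFilter {

    static boolean shouldSkip(Path file) throws IOException {
        // Path filter: exclude anything under a generated-sources directory.
        for (Path part : file) {
            if (part.toString().equals("generated")) {
                return true;
            }
        }
        // Comment filter: skip files whose first lines contain a marker.
        List<String> head = Files.readAllLines(file);
        for (int i = 0; i < Math.min(5, head.size()); i++) {
            if (head.get(i).contains("@generated")) {
                return true;
            }
        }
        return false;
    }
}
```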
You can focus your checking in other ways as well, such as limiting analysis to code that was added or modified after a given cutoff date.
With pattern-based static analysis, false positives are rule violations that are erroneously reported when the code actually follows the rule. For example, if the rule says you have an unclosed resource (such as a JDBC connection), when in reality the connection is closed, then this is a false positive. If you encounter an issue like this on a rule that you really want to follow, spring cleaning is a great time to finally report it to your vendor.
Note that if you’re going down this path, you need to be certain that you’re looking at an actual false positive, rather than simply a rule that you don’t like. Developers frequently call a message a “false positive” because they don’t like the rule, or don’t feel it applies in a particular instance. Such messages are not false positives, and your vendor will not be able to help you in these cases.
However, if you can reduce the issue to a simple test case that shows the rule reporting a false message, you should find most vendors are very helpful in remediating the problem.
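For instance, a minimal reproduction of the unclosed-resource scenario might look like the sketch below. It uses a stand-in AutoCloseable rather than a real JDBC connection so the example stays self-contained and runnable; a real vendor report would reference the actual rule ID and the real API:

```java
// Minimal-reproduction sketch of a false positive: the resource IS closed,
// but through a helper method, which a purely pattern-based checker may not
// follow. The Resource class is a stand-in for something like a JDBC
// connection; the scenario and names are illustrative.
public class FalsePositiveRepro {

    static class Resource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Helper that performs the close; some checkers only recognize a
    // direct resource.close() call in the allocating method.
    static void closeQuietly(Resource r) {
        if (r != null) {
            r.close();
        }
    }

    static Resource useResource() {
        Resource r = new Resource();
        try {
            // ... work with the resource ...
        } finally {
            closeQuietly(r); // closed here, yet the rule may still fire
        }
        return r;
    }
}
```

Reducing the report to a self-contained case like this makes it easy for the vendor to confirm and fix the analyzer's blind spot.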
Another way to reduce the noise factor is to customize rule parameters. For example, assume that your team is doing Android development and you’re checking an Android rule that says “Make sure that widgets aren’t updated too often.”
With the default settings, this rule identifies code that sets a widget to update more than 4 times per day. It does this by flagging code that sets the android:updatePeriodMillis attribute in the appwidget-provider tag to a number smaller than 21,600,000.
Assume that getting updated information is critical to your application, so you’re willing to sacrifice some battery consumption for more frequent updates. In this case, you might want to be warned only if updates occur more than 8 times per day. To achieve this, you could simply change the rule’s “update time maximum in milliseconds” parameter from 21,600,000 ms (6 hours) to 10,800,000 ms (3 hours).
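Concretely, the setting in question lives in the widget's provider metadata. A sketch, with the file name and the sizing attributes purely illustrative:

```xml
<!-- res/xml/widget_info.xml (illustrative): with the tuned rule parameter,
     an update period of 10,800,000 ms (3 hours, i.e. 8 updates per day)
     no longer triggers a violation, while any smaller value still would. -->
<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:updatePeriodMillis="10800000"
    android:minWidth="110dp"
    android:minHeight="40dp" />
```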
As tip #2 mentions, if the rule doesn’t have the parameters that you need, you can write a new one that does—or have your vendor (or a consultant) write a custom rule for you.
Are you tired of remembering that Security 123 is equivalent to your ACME 3.1 guideline? That both Performance 987 and Performance 567 are related to your ACME 5.6 guideline? That even though your tool says Threads 123 is severity 4, your organization considers violations of that rule to be a very severe defect?
If so, spring cleaning is a great time to map your vendor’s static analysis rule set to the distinct policies defined by your team and/or organization. Most static analysis tools let you customize rule severities, IDs, and names, as well as create new rule categories, so that the deployed rules precisely match the contents of your own coding policy document.
If your organization performs static analysis as part of a compliance effort, this will make your reporting a lot easier.
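If your tool doesn't support this directly, or you want the mapping kept under version control, you can make it explicit in a small table. This sketch reuses the hypothetical rule and guideline IDs from the example above (the Threads 123 target guideline and the severity scale are likewise made up for illustration):

```java
import java.util.Map;

// Sketch: an explicit mapping from vendor rule IDs to internal policy
// guideline IDs and severities. All IDs mirror the hypothetical examples
// in the text; most tools let you configure this in the tool itself.
public class PolicyMapping {

    static final class Guideline {
        final String internalId;
        final int severity; // organization's own scale: 1 = most severe
        Guideline(String internalId, int severity) {
            this.internalId = internalId;
            this.severity = severity;
        }
    }

    static final Map<String, Guideline> VENDOR_TO_POLICY = Map.of(
        "Security 123",    new Guideline("ACME 3.1", 1),
        "Performance 987", new Guideline("ACME 5.6", 2),
        "Performance 567", new Guideline("ACME 5.6", 2),
        // Vendor rates this severity 4, but the organization treats it
        // as a very severe defect (guideline ID hypothetical).
        "Threads 123",     new Guideline("ACME 7.2", 1)
    );
}
```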
Every team has a policy, whether or not it’s formally defined. You might as well codify the process and make it official. After all, it’s a lot easier to identify and diagnose problems with a formalized policy than an unwritten one.
Ideally, you want your policy to have a direct correlation to the problems you’re currently experiencing (and/or committed to preventing). This way, there’s a good rationale behind both the general policy and the specific ways that it’s implemented.
With these goals in mind, the policy should spell out exactly what is checked, when it is checked, and who is responsible for cleaning up any violations that are reported.
Once you’ve cleaned out the clutter and you’re at the point where the team is used to performing static analysis, few issues are being reported, and those reported issues are being cleaned up promptly, you can take the next step and expand the scope of checking.
One way to expand the scope of checking is to add more rules that are critical to your projects and goals. To zero in on which rules to add, consider the problems you’re still experiencing in the field, any standards you need to comply with, and the areas where the team most wants to improve.
Another way to increase the scope of checking is to check additional code. If you initially set your static analysis tool to skip legacy files (e.g., skip any files that were not added or modified after the “cutoff” date when you began static analysis), you might want to consider moving back that cutoff date—or eliminating it altogether.
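The cutoff-date idea amounts to a filter that compares each file's last-modified time against the cutoff; setting the cutoff to null models eliminating it altogether. A sketch (the class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;

// Sketch: the "cutoff date" as a file filter. Files last modified before
// the cutoff are treated as legacy and skipped; moving the cutoff back,
// or removing it, pulls more legacy code into the analysis scope.
public class CutoffFilter {

    private final Instant cutoff; // null means "analyze everything"

    CutoffFilter(Instant cutoff) {
        this.cutoff = cutoff;
    }

    boolean shouldAnalyze(Path file) throws IOException {
        if (cutoff == null) {
            return true; // cutoff eliminated: full scope
        }
        Instant modified = Files.getLastModifiedTime(file).toInstant();
        return !modified.isBefore(cutoff);
    }
}
```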
The more you can automate the static analysis process, the less it will burden developers and distract them from the more challenging tasks they truly enjoy. Plus, the added automation will help you achieve consistent results across the team and organization.
Many organizations follow a multi-level automated process. Each day, as the developer works on code in the IDE, he or she can run analysis on demand — or configure an automated analysis to run continuously in the background (like spell check does). Developers clean these violations before adding new or modified code to source control.
Then, a server-based process double-checks that the checked-in code base is clean. This analysis can run as part of continuous integration, on a nightly basis, etc., to make sure nothing slipped through the cracks. With such a server-based process, it’s important to avoid the old paradigm of sending email to developers. Part of an effective workflow is distributing the error messages to the same UI where the developer writes code. Email forces extra steps, leading to missed violations, time wasted finding the proper line in the file, and more resentment from coders who feel like they’re doing something extra outside of their regular process.
To further optimize the workflow through automation, consider running the analysis continuously in the background as developers work, and surfacing server-side results directly in the IDE rather than in email or standalone reports.
Arthur has been involved in software security and test automation at Parasoft for over 25 years, helping research new methods and techniques (including 5 patents) while helping clients improve their software practices.