As part of my series on the 7 habits of highly successful programmers, today I’m going to discuss a few ways to make sure that static analysis is effective and sustainable.
To begin at the beginning, it’s best to start with your static analysis policy. Some people wonder why this matters, but it’s actually crucial to a successful static analysis initiative. Your policy will cover things like which rules you should run, when you can ignore violations vs. when you must fix them, how suppressions are done, etc. You also need to decide what code needs to be analyzed vis-à-vis all that legacy code you have just sitting around, which we’ll discuss shortly.
Without a consistent policy, static analysis quickly devolves into an occasionally-executed bug finder, shrinking the value you get from it. If you’re in a compliance-driven industry like medical or automotive, you must do static analysis anyway, so you might as well do it right and get some value.
Getting the right tool matters. The tool needs to work inside your code editor (IDE), with the languages you’re using, on the operating system you’re using. It should support both server-based and in-IDE execution, with in-IDE reporting for developers and web-based reporting for managers. You need to be able to configure the tool to run just the rules you want (not just the ones the tool vendor wants you to run). Mature tools come with out-of-the-box starting configurations to get you going with industry initiatives like MISRA, OWASP, CWE, and PCI.
Being able to extend the tool with your own rules, ideas, and best practices will be important to your long-term success as well. And don’t forget about technical support – can someone look at issues around noisy rules and help you fix them? Are you going to use an open-source tool and make the fixes yourself? Do you normally use vendor services to help get your project jump-started and going in the right direction, fast?
These and more are all things to consider when selecting a tool. Don’t fall into the bake-off trap – it provides little useful information about whether a tool will really do what you need it to, and static analysis tools simply work differently.
As mentioned above, it’s easy to fall into the trap of just turning on whatever rules are in the static analysis tool you have. Some vendors actively encourage this, others even require it! Everyone’s software testing needs are different. A default config can get you started, but to be successful you have to make it your own.
A few things to do include turning off noisy rules and getting severity levels to match your own actual practices (you don’t want a tool to say something is critical when you think it’s low severity). One non-obvious hint is to turn off rules that you like but don’t plan to fix today. When you have the time to address the violations, then turn the rule on, not before. Ignoring them when they are being reported will only lead to frustration and bad habits by teaching developers that not all rules matter.
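To make the idea concrete, here is a minimal sketch of how that kind of configuration could be applied to a tool’s raw output. The rule IDs, severity names, and finding fields are invented for illustration – real tools each have their own configuration format, usually a file rather than code:

```python
# Hypothetical sketch: re-map a tool's default severities to the levels
# your team actually uses, and drop rules you've decided not to run yet.
# All rule IDs and severity names here are made up for illustration.

TEAM_SEVERITY = {           # tool default severity -> your team's severity
    "NULL.DEREF": "critical",
    "FORMAT.STYLE": "low",
}

DISABLED_RULES = {
    "NAMING.CONVENTION",    # too noisy for this codebase
    "MAGIC.NUMBER",         # a good rule, but we won't fix violations today
}

def effective_findings(findings):
    """Drop disabled rules and re-map severities before reporting."""
    result = []
    for f in findings:
        if f["rule"] in DISABLED_RULES:
            continue                      # don't report what we won't fix
        result.append(dict(f, severity=TEAM_SEVERITY.get(f["rule"], f["severity"])))
    return result
```

The point of the sketch is the policy, not the code: everything developers see should be something the team has agreed matters and intends to fix.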
The rules you are running should relate to problems you’re having in the field, problems you might reasonably expect, security issues, things found during code review, and any compliance items – rules you MUST run.
Even when you have exactly the right rules, sometimes context makes a particular static analysis finding unimportant in one part of the code while it’s still critical in another. You have a few choices here; the least valuable is to get frustrated and turn the rule off. That’s a bad choice if you know the rule can provide value in some areas. If your tool won’t let you configure and suppress properly, you chose the wrong tool – go back to step #2 above.
Another trap some fall into is manually triaging the results and handing out the ones deemed important. This is labor-intensive, not scalable, and not maintainable, and you will end up fixing things that are less important while missing ones that matter more. You need a data-driven process that helps you assess which violations need to be fixed and which can be safely ignored, and then a way to flag them so they don’t come back. Ideally this can be done by developers directly in the IDE where the code is, as well as by managers, architects, and the like in a web-based UI.
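As one sketch of what “data-driven” could mean, you might rank findings by severity weighted by how often the containing file changes, so hot files get attention first. The weights and finding fields below are assumptions for illustration, not any particular tool’s scheme:

```python
# Hypothetical sketch of data-driven triage: rank findings by severity,
# weighted by file churn (how often the file is modified). The weights
# and field names are invented for illustration.

SEVERITY_WEIGHT = {"critical": 100, "high": 50, "medium": 20, "low": 5}

def triage(findings, churn_by_file):
    """Sort findings so the most important ones land on top of the queue."""
    def score(f):
        # Findings in frequently-changed files score higher.
        return SEVERITY_WEIGHT.get(f["severity"], 1) * (1 + churn_by_file.get(f["file"], 0))
    return sorted(findings, key=score, reverse=True)
```

Whatever the exact scoring, the goal is the same: the order of the work queue comes from data the whole team can see, not from whoever happened to read the report.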
Some put these suppressions in a separate system. This has the benefit of never changing the code, but carries the risk of requiring rework when you branch or refactor, or if there is some synchronization problem. Others make a change in the code in the form of a special comment. While this does require a code change, it is persistent and accurate, and also documents suppressions – which is great if you ever get audited. Think about your needs and pick the method that best suits your environment. If your tool doesn’t support both, go back to step 2.
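A minimal sketch of the in-code-comment approach might look like the following. The `suppress:` marker syntax is invented here – every real tool defines its own – but the shape is typical: the comment names the rule being suppressed and records a reason that then lives in version control, which is exactly what an auditor wants to see:

```python
# Hypothetical sketch of comment-based suppression: a finding is suppressed
# when the flagged source line carries a marker naming the rule, plus a
# reason. The "suppress:" syntax is made up for illustration.

import re

SUPPRESS = re.compile(r"#\s*suppress:\s*(?P<rule>[\w.]+)\s+reason=(?P<reason>.+)")

def is_suppressed(source_line: str, rule: str) -> bool:
    """True if this line suppresses the given rule with a recorded reason."""
    m = SUPPRESS.search(source_line)
    return bool(m and m.group("rule") == rule)
```

For example, a line like `x = parse(s)  # suppress: SEC.INPUT reason=value comes from a trusted config` would be skipped for that one rule while the rule stays on everywhere else.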
Closely related to suppressions is legacy code. It’s important to decide how you want to handle old code. Again it depends on your needs, but I’ve described a few methods below.
Some organizations choose to only fix static analysis violations in legacy code when there is an outstanding field bug report that touches those lines of code – nothing else in the file gets touched. If you have really old code, or the risk of change is high, this is an entirely reasonable policy that minimizes risk.
Another idea is to fix everything in a file when you’re in there anyway. This brings about some risk of change, but if you have a good unit test suite (we’ll talk about that next time) then you don’t have to worry so much. On the positive side, any code you touch will be up to date with the latest required code standards and best practices. Stale code will be less stale than it was.
You could do a hybrid method as well, or simply ignore all old code before a certain date and just use static analysis going forward. Figure out what you can achieve and do it that way. Just think first about the risks and benefits of your proposed strategy.
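The cutoff-date approach can be sketched in a few lines. Here I assume each finding carries the date its line was last changed (something you could pull from version control); the field name and date format are assumptions for illustration:

```python
# Hypothetical sketch of a "baseline date" policy: only report findings on
# lines changed on or after the date the team adopted static analysis.
# The finding field and ISO date format are assumptions for illustration.

from datetime import date

BASELINE = date(2021, 1, 1)   # when static analysis was adopted

def should_report(finding: dict) -> bool:
    """Report only findings on lines last changed on or after the baseline."""
    return date.fromisoformat(finding["last_changed"]) >= BASELINE
```

Everything older than the baseline is simply out of scope, so developers only ever see findings they are expected to act on.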
Static analysis is one of the most valuable tools in your software quality/security arsenal. Getting it right brings great value, but getting it wrong can doom a generation of coders at your organization to using old outmoded quality and security processes. If you have questions, let me know in the comments below – I’m always happy to chat about how to make static analysis work.
“MISRA”, “MISRA C” and the triangle logo are registered trademarks of The MISRA Consortium Limited. ©The MISRA Consortium Limited, 2021. All rights reserved.
Arthur has been involved in software security and test automation at Parasoft for over 25 years, helping research new methods and techniques (including 5 patents) while helping clients improve their software practices.