Guest View: Use static analysis to secure open source


Traditional security tools can help here, but open source is public, transparent, cloud-based, and collaborative, and that lends itself to a new way of certifying software: Continuous Assurance. In this approach, automated tools and processes ensure that, as code changes, it continually satisfies compliance, quality, and security requirements. It’s the GitHub-era, agile-development approach to security and code quality. Continuous Assurance integrates directly into development and benefits from the always-up-to-date nature of cloud services, making it a perfect match for open source.

Google and Facebook pioneered the first scaled implementations of Continuous Assurance, have extensively shared their learnings, and have open sourced several tools from those initiatives, such as Infer and Error Prone. Their findings boil down to three key principles.

1. Developers First. Developers are the only ones who can fix bugs, so bug reports need to be targeted at developers, not security or compliance experts. Also, as Facebook learned, focusing on new or changed code, rather than generating long lists of pre-existing errors, makes the best use of developer attention and gets bug reports fixed rather than ignored (they report a tool that went from a 0% to a 70% fix rate just by focusing on diffs during code review).

2. Use Many Tools. Unfortunately, there is no one tool to rule them all. Every project’s code base is different, whether because of its language make-up, the bugs it cares about, or a million other reasons. Fortunately, the open-source community has created lots of analyzers for different languages, problem domains, and resource constraints. But these tools have limited uptake. Why is open source not using more open-source analysis tools? Ease of use is one factor, though cloud-based analysis services address that blocker. Just as open source relies on community code contributions, it should rely on those same contributors to suggest and implement static analysis tools that would improve code security and quality. We need better feedback loops between analysis authors and developers, and that starts with increasing use of analysis tools.
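One lightweight way to make "many tools" manageable is a thin harness that runs each registered analyzer and normalizes its output into a single shared finding format, so adopting another community tool is a one-line registration. A hypothetical Python sketch; the tool names and finding shape are illustrative, not any real platform's API:

```python
def run_all(tools, files):
    """Run every registered analyzer over the given files and merge findings,
    de-duplicating identical (path, line, message) reports across tools."""
    seen = set()
    merged = []
    for name, tool in tools.items():
        for f in tool(files):
            key = (f["path"], f["line"], f["message"])
            if key not in seen:
                seen.add(key)
                merged.append({**f, "tool": name})
    return merged

# Illustrative stand-ins for real analyzers; in practice each would shell out
# to a tool such as flake8, Infer, or Error Prone and normalize its output.
def fake_lint(files):
    return [{"path": "app.py", "line": 3, "message": "unused variable"}]

def fake_security(files):
    return [{"path": "app.py", "line": 3, "message": "unused variable"},
            {"path": "app.py", "line": 9, "message": "weak hash algorithm"}]

tools = {"lint": fake_lint, "security": fake_security}
findings = run_all(tools, ["app.py"])
```

Because every tool reports into the same format, downstream steps (diff filtering, code review comments, noise tracking) only have to be written once.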

3. Revisit and Improve Results. Static analysis tools have a well-earned reputation for being noisy and annoying, and as Google learned in its experiments, noisy analyzers make developers ignore tools and their results. Google treats any reported bug that developers do not fix as a false positive, and any analyzer with a false-positive rate of 10% or more is pulled and reworked. That discipline made it safe to let any developer at Google write an analyzer and share it with the rest of the company. Open-source maintainers should likewise build a strong feedback loop with their code analysis partners, so that tools that prove too noisy can be tuned or removed. A properly implemented static analysis solution should be fairly quiet, so that when it raises issues as code review comments, developers listen and engage.
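Google's 10% bar translates into a simple metric any project can track: the share of a tool's reports that developers never fix. A minimal Python sketch of that bookkeeping; the shape of the per-tool statistics is an assumption for illustration:

```python
NOISE_THRESHOLD = 0.10  # Google's reported bar: 10% or more unfixed reports

def noisy_analyzers(reports):
    """Given {tool: {"reported": n, "fixed": m}}, return the tools whose
    effective false-positive rate (unfixed / reported) meets the threshold,
    i.e. the candidates to tune or remove."""
    noisy = []
    for tool, stats in reports.items():
        if stats["reported"] == 0:
            continue  # no signal yet; keep watching
        unfixed_rate = 1 - stats["fixed"] / stats["reported"]
        if unfixed_rate >= NOISE_THRESHOLD:
            noisy.append((tool, round(unfixed_rate, 2)))
    return noisy
```

Reviewing this list periodically is the feedback loop in miniature: quiet, high-fix-rate tools stay in the pipeline, and anything above the threshold goes back to its authors for tuning.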

To get started, learn about the tools available, think about what’s important to your project, and put together a prioritized plan for the tools you want to use. Focus on open-source tools that can be integrated into CI/CD pipelines, either directly or through a commercial platform that is ideally free for open source and integrates a broad range of open-source tools. Whichever path you take, focus on the developer experience, implement a broad range of tools, and continually monitor your tools and tune out the noise. Doing so will keep your contributors happy and actively writing great code for your project.
