Open Source Security: Chaos, Collaboration, and the Cost of “Free”

Open source powers almost everything we build, but when something goes wrong, the fallout is never contained. In our latest episode of Security Rulez, Dr Katie Paxton-Fear and former OWASP board chair Grant Ongers unpack why open source security is less about finding vulnerabilities and more about who actually has the time, context, and incentives to fix them. From supply chain incidents and compromised maintainers to the growing pressure of AI-driven vulnerability discovery, the conversation highlights a stark reality: developer time is now the scarcest security resource.

February 12th, 2026

Open source runs your stack. That’s not a metaphor: it’s your CI, your frontend build, your container base images, your crypto libraries, and the transitive dependencies you forgot you had. In our latest Security Rulez episode, Security Advocate Dr Katie Paxton-Fear sat down with security expert and former OWASP board chair Grant Ongers to discuss the challenges of open source security and the opportunities for teams to leverage an entire community to keep their software secure.


Open source security incidents don’t tend to stay “over there.” When maintainers get compromised, when a popular ecosystem gets hit with malware, or when a vulnerability disclosure lands on an understaffed project, you inherit the blast radius. In 2025, supply chain incidents and AI-driven vulnerability hunting made one thing painfully clear: heading into 2026, the limiting factor in open source security is no longer finding issues; it’s fixing them. And at the heart of that issue is a simple question: who will do the remediation work?

Open source isn’t “more secure” or “less secure”

There’s a tired debate that pops up every time open source gets breached:

  • “Open source is less secure because attackers can read the code.”

  • “Open source is more secure because more people can review the code.”

Both arguments miss the point. Open source is auditable. That’s different from “audited.” The source being available doesn’t mean anyone is actually reviewing it, threat modeling it, fuzzing it, or maintaining a secure release process. In practice, security depends on governance:

  • Who can publish releases?

  • Are maintainer accounts protected?

  • Are commits reviewed?

  • Are builds reproducible?

  • Is there bandwidth to handle reports?

If the answers are “one person” and “not really,” the software can be open and still be brittle. But if the answer is a dedicated team of volunteers who care deeply about security and back that up with controls like reviewed, signed commits and a protected release process, the open source software is likely to be significantly better than anything you could build and maintain yourself. A minimal check for one of those controls, commit signing, is sketched below.
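As a concrete example of auditing that governance yourself, here is a minimal sketch in Python that flags recent commits in a local clone of a dependency that git cannot verify as signed. It assumes git is on your PATH and that the project has adopted GPG or SSH commit signing; the helper name, repo path, and commit count are placeholders, and verification only succeeds if the maintainers’ keys are trusted locally.

```python
import subprocess

def unverified_commits(repo_path: str, count: int = 20) -> list[str]:
    """Flag recent commits whose signature git cannot verify.

    %G? prints git's signature status for each commit:
    'G' = good signature, 'N' = no signature, 'B' = bad, etc.
    """
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-n", str(count), "--format=%H %G?"],
        capture_output=True, text=True, check=True,
    ).stdout
    flagged = []
    for line in log.splitlines():
        commit, status = line.split()
        if status != "G":  # anything other than a verified-good signature
            flagged.append(f"{commit} ({status})")
    return flagged

if __name__ == "__main__":
    # Point this at a local clone of the dependency you're evaluating.
    for entry in unverified_commits("."):
        print("unverified:", entry)
```

A status other than G doesn’t automatically mean trouble, but a project that claims signed commits and shows a wall of N statuses is worth a closer look.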

The real bottleneck: developer time

Modern open source incidents unfold quickly, and when they do, the scarcest resource isn’t tooling; it’s people with the time and context to fix the problem fast. The typical sequence:

  • A maintainer is compromised.

  • A malicious release ships.

  • Hundreds of downstream projects inherit it automatically.

  • Everyone scrambles to identify, patch, and redeploy (a sketch of that “identify” step follows this list).
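To make the “identify” step concrete, here’s a minimal sketch that asks the public OSV.dev advisory database whether any package installed in the current Python environment has a known advisory against its exact version. It’s illustrative only; dedicated tools (pip-audit, or SCA products like Semgrep Supply Chain) batch these lookups and add context such as reachability.

```python
import json
import urllib.request
from importlib import metadata

OSV_URL = "https://api.osv.dev/v1/query"  # public advisory database

def osv_advisories(name: str, version: str) -> list[str]:
    """Return OSV advisory IDs affecting this exact PyPI package version."""
    body = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": "PyPI"},
    }).encode()
    req = urllib.request.Request(
        OSV_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return [v["id"] for v in json.load(resp).get("vulns", [])]

if __name__ == "__main__":
    # One request per installed package: fine for a sketch, slow at scale.
    for dist in metadata.distributions():
        name, version = dist.metadata["Name"], dist.version
        ids = osv_advisories(name, version)
        if ids:
            print(f"{name}=={version}: {', '.join(ids)}")
```

Identification is the easy half; the fix still needs a human with context.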

That’s why “we funded the project” doesn’t always solve the problem. Money helps, but it doesn’t create extra hours in the day, and it doesn’t magically create new maintainers who know the codebase well enough to land safe fixes fast.

And just in case open source volunteers felt too relaxed, AI is now changing open source security: finding issues is getting cheaper and faster for security researchers and attackers alike. That means maintainers get flooded with more reports than they can triage, while often lacking the confidence to use AI themselves due to worries about code quality and security.

There’s also a social dimension: if a large company points powerful AI research tooling at a volunteer-maintained project and drops a pile of findings without patches, the message is effectively: “Here’s more work for you. Good luck.” And while that’s not in the spirit of open source, the findings can’t just be ignored: attackers are leveraging AI too, and can find the same vulnerabilities with the same efficiency.

Even when researchers’ intent is good, the impact on contributors is significant: a remediation burden that the project simply can’t absorb on the same timeline as a vendor. We talked about this in detail in our recap of the discussions between FFmpeg and Project Zero.

“Free like a puppy”: you own what you ship

One of the simplest frames from the conversation is still the best one:

Open source is “free” the way a puppy is free. If the puppy pees on the floor, you’re cleaning it up. When you add an open source dependency, you’re making a bet:

  • On its maintainers

  • On its release hygiene

  • On its update cadence

  • On your ability to respond when it goes sideways

  • And on your capacity to give back to the project when it does

That’s not an argument against open source. It’s a reminder that “we didn’t write it” doesn’t mean “we don’t own the risk.”

The dev vs security conflict is the same problem in miniature

A lot of teams treat the open source ecosystem like an external “other.” But the exact same dynamics show up inside companies:

  • Security teams buy tools without developer input.

  • Developers get buried in findings without context or prioritization.

  • Engineers are evaluated on delivery; security is treated like interruption.

  • Security assumes “more findings = more security,” while engineering assumes “more findings = less shipping.”

If you’ve ever heard “security is blocking us,” you’ve seen this pattern. The solution usually isn’t more policy. It’s better alignment: on incentives, on ownership, and on what “acceptable risk” actually means for this product.

If you’re on a security team: stop “throwing findings over the wall”

Do these three things and you’ll see the relationship change fast:

  1. Co-own remediation with product owners. Every security fix should be a real unit of work: scoped, prioritized, and scheduled. If it matters, it belongs in planning like any other product quality work.

  2. Deliver fewer findings, but with better context. Triage, dedupe, and prioritize; if everything is urgent, nothing is. (A toy sketch of this triage step follows the list.)

  3. Offer help that reduces workload, not just instructions. The fastest way to earn trust is to take work off someone’s plate: draft the patch, write the test, validate the exploitability, or pair on the fix.
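To illustrate point 2, here’s a minimal, hypothetical sketch of the triage step: dedupe raw findings by rule and file, rank by severity, and cap what gets handed over. The finding structure and severity scale are invented for illustration; a real SAST tool’s output would need mapping into this shape first.

```python
# Hypothetical scanner output; a real tool's findings would be mapped
# into this shape before triage.
RAW_FINDINGS = [
    {"rule": "sql-injection", "file": "app/db.py", "line": 42, "severity": 3},
    {"rule": "sql-injection", "file": "app/db.py", "line": 97, "severity": 3},
    {"rule": "weak-hash", "file": "app/auth.py", "line": 10, "severity": 2},
    {"rule": "debug-enabled", "file": "settings.py", "line": 5, "severity": 1},
]

def triage(findings: list[dict], max_tickets: int = 2) -> list[dict]:
    """Dedupe by (rule, file), rank by severity, and cap the hand-off."""
    grouped: dict[tuple[str, str], dict] = {}
    for f in findings:
        key = (f["rule"], f["file"])
        if key in grouped:
            grouped[key]["occurrences"] += 1
        else:
            grouped[key] = {**f, "occurrences": 1}
    ranked = sorted(grouped.values(), key=lambda f: f["severity"], reverse=True)
    return ranked[:max_tickets]

for ticket in triage(RAW_FINDINGS):
    print(f"{ticket['rule']} in {ticket['file']} "
          f"(severity {ticket['severity']}, {ticket['occurrences']} occurrence(s))")
```

The cap is the point: handing engineering two well-scoped tickets with occurrence counts beats forwarding four hundred raw alerts.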

If you’re a developer: choose dependencies like you’ll have to support them (because you will)

  1. Know what you depend on and who maintains it. Popularity isn’t the same as sustainability. Look for active maintenance, clear release processes, and signs of security hygiene (a minimal version of this check is sketched after this list).

  2. Patch velocity beats perfect security. You don’t need a flawless dependency graph. You need the ability to update quickly when something breaks.

  3. Treat security as product quality, not a separate team’s problem. You don’t need to be a security expert—but you do need to own the quality of what you ship.
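As a starting point for that sustainability check, here’s a minimal sketch that pulls two basic maintenance signals from PyPI’s public JSON API: the latest released version and how long ago it shipped. Real due diligence would go further (maintainer count, responsiveness to issues, release signing), and the helper name and package name here are just examples.

```python
import json
import urllib.request
from datetime import datetime, timezone

def release_health(package: str) -> dict:
    """Fetch basic maintenance signals from PyPI's public JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    latest = data["info"]["version"]
    # "urls" lists the files uploaded for the latest release.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for f in data["urls"]
    ]
    age_days = (datetime.now(timezone.utc) - max(uploads)).days
    return {"latest": latest, "days_since_last_release": age_days}

if __name__ == "__main__":
    # "requests" is just an example package name.
    print(release_health("requests"))
```

A long gap since the last release isn’t proof of abandonment, but paired with an unanswered issue tracker it’s a signal you’ll be the one maintaining this dependency.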

If your company benefits from open source: contribute time, not just money

If your business depends on open source, the best way to improve security isn’t just writing checks. It’s paying people to do the work that maintainers don’t have time to do: release hardening, documentation, reproducible builds, CI improvements, and triage support for vulnerability reports. To quote the FFmpeg team, what open source needs is actually really simple: send patches.

But for many businesses, the intensity of open source communities can be a bit intimidating. To help your teams navigate them, consider the Linux Foundation's Open Source Program Office book, which flags best practices both for using open source projects in your organization and for giving back to the community through sponsorship or commits.

We’re not going to “solve” open source risk by banning open source, by flooding maintainers with AI-generated reports, or by treating dev teams like they’re failing a test. The practical goal is fewer supply chain surprises, faster patch cycles, and security work that doesn’t break collaboration.


About


Semgrep enables teams to use industry-leading AI-assisted static application security testing (SAST), supply chain dependency scanning (SCA), and secrets detection. The Semgrep AppSec Platform is built for teams that struggle with noise, helping development teams apply secure coding practices.