An analysis of SAST and SCA vulnerability remediation patterns across 50,000+ actively developed repositories and 400+ organizations.
Every organization in this dataset uses security scanning tools. The gap between top performers and everyone else isn't detection. It's what happens after a finding surfaces. We analyzed anonymized remediation data from 400+ Semgrep-enabled organizations across the full 2025 calendar year, segmenting them into two cohorts: Leaders (top 15% by fix rate) and the Field (the remaining 85%). The performance gap holds across every severity level, vulnerability class, and package ecosystem.
This report breaks down what Leaders do differently: which capabilities they configure, when fixes happen, what ages into permanent backlog risk, and where most remediation programs break down.
Fix rate and mean time to remediate (MTTR) benchmarks across Leaders and the Field, broken down by SAST, SCA, and severity level. Use these numbers to assess where your vulnerability program stands.
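To make the two headline metrics concrete, here is a minimal sketch of computing fix rate and MTTR from a list of findings. The record schema and sample numbers are hypothetical, not Semgrep's data model; fix rate counts all findings, while MTTR is conventionally averaged over fixed findings only.

```python
from datetime import date

# Hypothetical finding records; field names and dates are illustrative only.
findings = [
    {"severity": "critical", "opened": date(2025, 1, 10), "fixed": date(2025, 1, 15)},
    {"severity": "critical", "opened": date(2025, 2, 1),  "fixed": None},
    {"severity": "high",     "opened": date(2025, 3, 5),  "fixed": date(2025, 4, 20)},
    {"severity": "high",     "opened": date(2025, 6, 1),  "fixed": None},
]

def fix_rate(items):
    """Share of all findings that were ever resolved."""
    fixed = [f for f in items if f["fixed"] is not None]
    return len(fixed) / len(items)

def mttr_days(items):
    """Mean time to remediate, computed over fixed findings only."""
    fixed = [f for f in items if f["fixed"] is not None]
    return sum((f["fixed"] - f["opened"]).days for f in fixed) / len(fixed)

print(f"fix rate: {fix_rate(findings):.1%}")    # 50.0%
print(f"MTTR: {mttr_days(findings):.1f} days")  # (5 + 46) / 2 = 25.5
```

Note that the two metrics can move in opposite directions: fixing only quick wins raises fix rate while lowering MTTR, which is why the benchmarks below report both.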
PR-detected findings resolve 9x faster than findings caught in full scans. This section quantifies the shift-left effect, identifies where the developer context advantage is largest, and measures fix rate lift across blocking rules, cross-file taint analysis, SCA reachability data, and custom rules. Leaders and the Field have access to the same features. How they use them is not the same.
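Measuring the shift-left effect amounts to segmenting MTTR by where a finding was detected. A minimal sketch, assuming a hypothetical `surface` tag on each fixed finding (the field names and sample values are illustrative, chosen to echo the PR-versus-full-scan gap described above):

```python
from collections import defaultdict
from datetime import date

# Hypothetical fixed findings tagged by detection surface; schema is assumed.
fixed = [
    {"surface": "pr",        "opened": date(2025, 5, 1), "fixed": date(2025, 5, 6)},
    {"surface": "pr",        "opened": date(2025, 5, 3), "fixed": date(2025, 5, 7)},
    {"surface": "full_scan", "opened": date(2025, 2, 1), "fixed": date(2025, 3, 16)},
]

def mttr_by_surface(items):
    """Mean days-to-fix for each detection surface."""
    ages = defaultdict(list)
    for f in items:
        ages[f["surface"]].append((f["fixed"] - f["opened"]).days)
    return {surface: sum(v) / len(v) for surface, v in ages.items()}

print(mttr_by_surface(fixed))  # {'pr': 4.5, 'full_scan': 43.0}
```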
OWASP Top 10 and CWE-level fix rate breakdowns show where remediation stalls and why. Authentication failures and cryptographic failures show the largest performance gaps, not because teams ignore them, but because they're genuinely harder to fix.
Findings that age past 90 days are significantly less likely to ever be resolved through normal workflow. This section identifies which OWASP categories and CWEs are most likely to sit in backlogs indefinitely, and what high-performing teams do before they reach that threshold.
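The 90-day threshold can be applied mechanically to an open-findings backlog. A sketch under an assumed record schema (field names are illustrative; the category values echo this report):

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # the aging threshold described above

# Illustrative open findings; schema is hypothetical.
open_findings = [
    {"category": "Injection",      "opened": date(2025, 3, 1)},
    {"category": "Authentication", "opened": date(2025, 11, 1)},
]

def stale(findings, today):
    """Return open findings past the 90-day threshold, oldest first."""
    old = [f for f in findings if today - f["opened"] > STALE_AFTER]
    return sorted(old, key=lambda f: f["opened"])

for f in stale(open_findings, today=date(2025, 12, 31)):
    age = (date(2025, 12, 31) - f["opened"]).days
    print(f"{f['category']}: {age} days open")  # Injection: 305 days open
```

Teams that act before this threshold typically do so via a recurring triage pass; a report like the one above is the usual input to that pass.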
Frequently asked questions about SAST and SCA vulnerability remediation
In our dataset, the median organization fixes 16.8% of SAST findings. Top-performing teams, the top 15% by fix rate, reach 39.6%. If your critical SAST fix rate is below 50%, the data suggests a workflow problem between triage and developer action, not a detection problem.
Leaders in this dataset resolve PR-detected SAST findings in 4.8 days on average. The same class of finding caught in a full scan takes 43 days, a 9x gap. For SCA findings, Leaders average 32.9 days MTTR versus 41.7 days for the Field.
SAST (Static Application Security Testing) findings are vulnerabilities in code your team writes, and someone has to change the code to fix them. SCA (Software Composition Analysis) findings are vulnerabilities in third-party dependencies, where the fix is typically a version upgrade, but the path can be blocked by breaking changes, missing patches, or deeply embedded dependencies. Remediation dynamics differ significantly between the two.
The most common reasons findings go unfixed: they aren't surfaced in the developer workflow, developers lack enough context to act on them, or there's no clear owner. After 90 days, findings tend to represent either accepted risk, external blockers (no patch available, breaking change risk too high), or quiet neglect. Authentication and Injection findings have the highest average age among stale backlog items, at 262 and 218 days respectively.
In this dataset, Composer (PHP) leads at a 47.3% fix rate with a 101-day MTTR, driven partly by high-volume packages with clear upgrade paths. PyPI (Python) and Maven (Java) follow. NuGet (C#) has the lowest fix rate at 22.8%. NPM (JavaScript) sits at 26.5% with consistent rates across severity levels, suggesting ecosystem-wide friction rather than prioritization issues.
This report is written for AppSec engineers, security leads, and engineering managers responsible for vulnerability remediation programs. It contains specific, data-backed benchmarks for SAST fix rates, SCA fix rates, mean time to remediate, and feature effectiveness, organized so you can identify your own program's gaps and prioritize changes with the highest measurable impact.
The dataset covers organizations with 480+ actionable findings and repositories with 12+ pull requests across the 2025 calendar year, focusing the analysis on actively developed codebases under real remediation pressure.