
Semgrep Assistant metrics and methodology

Metrics for evaluating Semgrep Assistant's performance are derived from two sources:

  • User feedback on Assistant recommendations within the product
  • Internal triage and benchmarking conducted by Semgrep's security research team

This methodology ensures that Assistant is evaluated from both a user's and an expert's perspective. This gives Semgrep's product and engineering teams a holistic view into Assistant's real-world performance.¹

User feedback

User feedback shows the aggregated and anonymized performance of Assistant across more than 1,000 customers, providing a comprehensive real-world dataset.

Users are prompted in-line to give a "thumbs up" or "thumbs down" on Assistant suggestions as the suggestions appear in their PR or MR. Because both developers and AppSec engineers can respond, this reduces sampling bias.

Results as of Jan 10, 2024:

Customers in dataset: 1,000+
Findings analyzed: 250,000+
Average reduction in findings²: 20%
Human-agree rate: 92%
Median time to resolution: 22% faster than baseline
Average time saved per finding: 30 minutes

Internal benchmarks

Internal benchmarks for Assistant use a process in which a rotating team of security engineers conducts periodic reviews of findings and their Assistant-generated triage recommendations or remediation guidance. This is the same process used to evaluate Semgrep's SAST engine and rule performance.

Internal benchmarks for Assistant run on the same dataset used by Semgrep's security research team to analyze Semgrep rule performance. This means the dataset is not prone to cherry-picked findings that are easier for AI to analyze, and accurately represents real-world performance across a variety of contexts.

Findings analyzed: 2,000+
False positive confidence rate³: 96%
Remediation guidance confidence rate⁴: 80%

Footnotes

  1. Learn more about how Semgrep achieved these numbers in How we built an AppSec AI that security researchers agree with 96% of the time.

  2. The average % of SAST findings that Assistant filters out as noise.

  3. False positive confidence rate measures how often Assistant is correct when it identifies a false positive. A high confidence rate means users can trust Assistant when it identifies a false positive; it does not mean that Assistant catches all false positives.

  4. Remediation guidance is rated on a binary scale of "helpful" / "not helpful".
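The false positive confidence rate described in footnote 3 is a precision-style metric: of the findings Assistant labels as false positives, how many do expert reviewers confirm? A minimal sketch of that calculation, using made-up finding IDs rather than any real benchmark data:

```python
def confidence_rate(flagged_as_fp, confirmed_fps):
    """Fraction of findings labeled 'false positive' by Assistant that
    reviewers confirmed were false positives (a precision measure).

    Note: this says nothing about false positives Assistant *missed*;
    that would be a separate recall-style metric.
    """
    confirmed = sum(1 for f in flagged_as_fp if f in confirmed_fps)
    return confirmed / len(flagged_as_fp)

# Hypothetical review round: Assistant flags 4 findings as false
# positives; expert reviewers confirm 3 of them.
flagged = {"finding-1", "finding-2", "finding-3", "finding-4"}
confirmed = {"finding-1", "finding-2", "finding-3", "finding-7"}

print(confidence_rate(flagged, confirmed))  # 0.75
```

"finding-7" being confirmed but never flagged illustrates why a high confidence rate does not imply Assistant catches every false positive.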
