Rebeca Moen
Mar 03, 2026 18:33
OpenAI reveals major contamination issues in the SWE-bench Verified benchmark, showing that frontier AI models memorized solutions and that tests rejected correct code.
OpenAI has stopped reporting scores on SWE-bench Verified, the widely used AI coding benchmark, after discovering that almost 60% of the problems its models failed contained fundamentally broken tests. The company's February 23, 2026 analysis also found evidence that all major frontier models, including GPT-5.2, Claude Opus 4.5, and Gemini 3 Flash, had been trained on benchmark solutions, rendering scores meaningless.
“Improvements on SWE-bench Verified no longer reflect meaningful improvements in models’ real-world software development abilities,” OpenAI stated. “Instead, they increasingly reflect how much the model was exposed to the benchmark at training time.”
The Numbers Tell the Story
OpenAI audited 138 problems (27.6% of the 500-problem dataset) that its o3 model could not consistently solve across 64 independent runs. The findings were damning: 59.4% of these problems had material issues in test design or problem descriptions that made them “extremely difficult or impossible even for the most capable model or human to solve.”
Breaking down the failures: 35.5% of audited tasks had overly strict tests that rejected functionally correct solutions by demanding specific implementation details never mentioned in the problem descriptions. Another 18.8% tested for functionality that wasn't even specified in the task.
One example involved a pylint PR whose tests required importing a function called “get_annotation,” a name never mentioned in the problem statement. Models that solved the underlying issue correctly still failed because they couldn't psychically guess the expected function name.
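The failure mode is easy to see in miniature. The sketch below is hypothetical (the module and function names are invented, not taken from the actual pylint PR), but it shows how a test that pins an exact identifier rejects a behaviorally correct fix:

```python
"""Minimal sketch of an overly strict benchmark test. All names here are
hypothetical, not the real pylint code: the point is that the test pins an
exact helper name the problem statement never mentions."""
import types

# A functionally correct patch whose author chose a different helper name,
# since the task description never specified one.
patched = types.ModuleType("patched_module")

def extract_annotation(source: str) -> str:
    """Return the annotation part of a 'name: annotation' string."""
    return source.partition(":")[2].strip()

patched.extract_annotation = extract_annotation

# The benchmark test demands the exact identifier 'get_annotation'.
try:
    get_annotation = patched.get_annotation  # AttributeError: unguessable name
    assert get_annotation("x: int") == "int"
    print("PASS")
except AttributeError:
    print("FAIL: correct behavior rejected over an unspecified helper name")
```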
Every Major Model Is Contaminated
The contamination evidence proved more troubling. OpenAI built an automated red-teaming system using GPT-5 to probe competing models for benchmark knowledge. The results showed that all tested frontier models could reproduce original human-written solutions or quote verbatim problem details they should never have seen.
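OpenAI has not released the harness itself, but the core check is straightforward to sketch. In the hypothetical probe below, the prompt wording, the 0.5 threshold, and the `generate` callable are all assumptions; the idea is simply that a minimal-hint prompt should not recover long verbatim chunks of a hidden gold patch:

```python
"""Hedged sketch of a contamination probe, not OpenAI's actual harness."""
from difflib import SequenceMatcher
from typing import Callable


def verbatim_overlap(candidate: str, gold_patch: str) -> float:
    """Length of the longest shared block, as a fraction of the gold patch."""
    match = SequenceMatcher(None, candidate, gold_patch).find_longest_match()
    return match.size / max(len(gold_patch), 1)


def probe(generate: Callable[[str], str], task_id: str, gold_patch: str,
          threshold: float = 0.5) -> bool:
    """Flag likely memorization: the prompt contains only a task ID, no
    problem details, so long verbatim overlap suggests training exposure."""
    prompt = f"Write the patch for SWE-bench task {task_id}."
    return verbatim_overlap(generate(prompt), gold_patch) >= threshold
```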
GPT-5.2, when given minimal hints, reproduced the exact code patch for a Django authentication fix, including the specific conditional statement “if username is None or password is None.” Claude Opus 4.5 quoted word-for-word an inline comment from a gold patch it supposedly never encountered. Gemini 3 Flash, given only a task ID, output the entire unified diff with correct line numbers.
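For context, that quoted guard matches the early-return pattern from Django's ModelBackend.authenticate, the fix that skips a doomed database lookup when a credential is missing. The paraphrase below is illustrative only; everything around the quoted conditional is simplified:

```python
# Simplified paraphrase of the guard (the real change lives in
# django.contrib.auth.backends.ModelBackend.authenticate).
def authenticate(request, username=None, password=None):
    # The memorized line: bail out before touching the database when a
    # credential is missing, since the lookup could never succeed.
    if username is None or password is None:
        return None
    return lookup_user(username, password)


def lookup_user(username: str, password: str):
    """Illustrative stand-in for the real credential check."""
    return None
```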
The contamination creates an unfair advantage. Models that have seen solutions during training can pass underspecified tests by “remembering” implementation details that weren't in the problem description, essentially having the answer key before the exam.
From 80% to 23%
The benchmark's decay showed up as stalled progress. State-of-the-art scores improved only from 74.9% to 80.9% over six months, not because models hit capability ceilings, but because the remaining problems were either impossible or required memorized knowledge.
SWE-bench Pro, the recommended replacement, paints a different picture. According to recent data from February 26, 2026, models scoring 80% on Verified dropped to roughly 23% on Pro, a benchmark designed to resist contamination. Claude Opus 4.6 currently leads Pro at 79.20%, though that figure measures a different, cleaner test set.
What Comes Next
OpenAI recommends the industry shift to SWE-bench Pro's public split while acknowledging it is imperfect. The company is also investing in privately authored benchmarks like GDPval, where domain experts create original tasks and trained reviewers grade solutions holistically.
The broader lesson matters for anyone tracking AI capabilities: benchmarks sourced from public repositories carry inherent contamination risk. When the training data includes the test, scores become theater. For researchers, investors, and developers betting on AI coding progress, the real frontier is harder to measure than leaderboards suggest.
Image source: Shutterstock