Tony Kim
Feb 24, 2026 20:48
Anthropic releases third version of its Responsible Scaling Policy, separating firm commitments from industry-wide recommendations after 2.5 years of testing.
Anthropic has released the third iteration of its Responsible Scaling Policy, marking a major restructuring of how the AI company approaches catastrophic risk mitigation after two and a half years of real-world implementation.
The update, unveiled February 24, 2026, introduces three main changes: a clear separation between what Anthropic can achieve alone and what requires industry-wide action, a new Frontier Safety Roadmap with public accountability metrics, and mandatory external review of Risk Reports under certain conditions.
What Actually Changed
The most notable shift? Anthropic is now openly admitting that some safety measures simply cannot be implemented by a single company. The previous RSP’s higher-tier safeguards (ASL-4 and beyond) were left intentionally vague; it turns out that wasn’t just caution, it was because achieving them unilaterally may be impossible.
A RAND report cited by Anthropic states that “SL5” security standards aimed at stopping top-tier cyber threats are “currently not possible” and “will likely require help from the national security community.”
Rather than water down these requirements to make compliance easy, Anthropic chose to restructure entirely. The new RSP now explicitly maps out two tracks: commitments the company will meet regardless of external factors, and recommendations it believes the entire AI industry needs to adopt.
The Honest Assessment
Anthropic’s post-mortem on RSP versions 1 and 2 is refreshingly candid. What worked: the policy forced internal teams to treat safety as a launch requirement, and competitors like OpenAI and Google DeepMind adopted similar frameworks within months. ASL-3 safeguards were successfully activated in May 2025.
What didn’t work: capability thresholds proved far more ambiguous than anticipated. Biological risk assessment provides a telling example: models now pass most rapid tests, making it hard to argue risks are low, but results aren’t definitive enough to prove risks are high either. And by the time wet-lab trials are complete, more powerful models have already shipped.
The political environment hasn’t helped. Federal safety-oriented discussions have stalled as policy focus shifted toward AI competitiveness and economic growth.
New Accountability Mechanisms
The Frontier Safety Roadmap introduces specific, publicly graded goals, including “moonshot R&D” initiatives for information security, automated red-teaming systems that exceed current bug bounty contributions, and comprehensive records of all significant AI development activities, analyzed by AI for insider threats.
Risk Reports will be published every 3-6 months, explaining how capabilities, threat models, and mitigations fit together. External reviewers with “unredacted or minimally-redacted access” will publicly critique Anthropic’s reasoning.
The company is already running pilots even though current models have not yet triggered the external review requirement.
Industry Implications
This restructuring arrives as AI governance frameworks face increasing scrutiny. California’s SB 53, New York’s RAISE Act, and the EU AI Act’s Codes of Practice have all begun requiring frontier developers to publish catastrophic risk frameworks, requirements Anthropic addresses through its existing Frontier Compliance Framework.
Whether competitors follow Anthropic’s lead in separating unilateral commitments from industry recommendations remains to be seen. The approach essentially acknowledges that voluntary self-regulation has limits, while positioning the company to advocate for coordinated government action without appearing to demand rules it can’t follow itself.
For the broader AI sector, Anthropic’s frank acknowledgment of what single companies cannot achieve alone may prove more influential than the technical policy details themselves.
Image source: Shutterstock