Artificial intelligence company Anthropic released a framework earlier this week aimed at mitigating “catastrophic risks” from advanced AI technologies, specifically man-made perils tied to chemical, biological, radiological, and nuclear (CBRN) threats that have vexed markets and regulators in recent years.
Anthropic’s Proposed Frontier Model Transparency Framework outlines standards for “frontier” AI models, which are cutting-edge, large-scale, general-purpose AI systems.
Central to the proposal is a “Secure Development Framework” (SDF) requirement, under which companies would document, test, and publish how their AI models mitigate CBRN and autonomous-harm risks before public release.
Key features of Anthropic’s proposals include:
- Compliance: Companies must identify responsible officers, retain SDF records for five years, and publish public-facing certifications.
- Transparency: System cards documenting model testing, evaluation results, and mitigations must accompany each new deployment.
- Enforcement: False claims around SDF compliance could trigger civil penalties.
The framework arrives as private-sector and government stakeholders alike sharpen their focus on AI’s dual-use risks.
A report issued by the Department of Homeland Security and defense research organization RAND earlier this year flagged generative AI’s potential to lower the barriers to creating chemical and biological agents.
According to RAND, “AI has made significant contributions to synthetic biology and synthetic chemistry... as models and data become more accessible... they may become useful for actors intending to use science and technology to harm humankind.”
Additionally, the focus on AI’s impact on CBRN risks comes as insurers and regulators have been debating the role of private markets in mitigating the threat.
The US Treasury Department said last year that there was growing concern over man-made perils beyond conventional terrorism, including NBCR (nuclear, biological, chemical, or radiological) threats and catastrophic cyber incidents.
Treasury officials noted that private market solutions have not kept pace with these evolving risks: “The insurance sector has increasingly adopted exclusions regarding catastrophic loss events, which likely reflects the industry’s increased awareness of potential systemic risk related to actions by state and state-supported actors.”
Anthropic’s announcement reflects a broader technology industry effort to get ahead of regulatory action. However, as RAND’s report notes, most U.S. AI regulations currently lag behind private-sector technological progress, especially at the intersection of AI and WMD threats.
The report also stresses that “communication, coordination, and collaboration among the U.S. government, the private sector, and international stakeholders will be vital to develop formal AI regulations.”
Insurance industry and risk professionals may see echoes of this approach in the terrorism insurance sector. Under the federal Terrorism Risk Insurance Program (TRIP), coverage for NBCR-related losses remains one of the most complex and least consistently insured categories, even two decades after the program’s creation. According to the Treasury report, “total per loss reinsurance limits purchased for NBCR losses subject to TRIP remain materially lower than other terrorism or natural catastrophe categories.”