US Opens “Security Review” of AI Agents as Risk Moves Beyond Models
The federal government is asking industry to weigh in on the security risks of AI agents that act autonomously in real-world systems, signaling that agentic AI may soon be treated as a material risk.