Anthropic proposes transparency framework for frontier AI development

Anthropic is calling for the creation of an AI transparency framework that would apply to the largest frontier AI developers to ensure accountability and safety.

“As models advance, we have an unprecedented opportunity to accelerate scientific discovery, healthcare, and economic growth. Without safe and responsible development, a single catastrophic failure could halt progress for decades. Our proposed transparency framework offers a practical first step: public visibility into safety practices while preserving private sector agility to deliver AI’s transformative potential,” Anthropic wrote in a post.

As such, Anthropic is proposing the framework in the hope that it could be adopted at the federal, state, or international level. The initial version of the framework lays out six core tenets.

First, AI transparency requirements would apply only to the largest frontier model developers, exempting smaller startups whose models pose low risk. The framework doesn’t specify a particular company size, and Anthropic welcomes input from the startup community, but says that in internal discussions it has considered example cutoffs such as exempting companies with annual revenue of $100 million or less, or R&D and capital expenditures of $1 billion or less.

Second, frontier model developers should create a Secure Development Framework detailing how they assess and mitigate unreasonable risks, including the creation of chemical, biological, radiological, and nuclear harms, as well as harms caused by misalignment.

Third, this Secure Development Framework should be disclosed to the public so that researchers, governments, and the public can stay informed about the models currently deployed. Sensitive information could be redacted.

Fourth, system cards and documentation should summarize testing and evaluation procedures, results, and mitigations. The system card should be published alongside the model and updated whenever the model is updated. As with the Secure Development Framework, sensitive information could be redacted from system cards.