OpenAI and Anthropic Sign Agreements with the US Government for AI Research and Testing


In a groundbreaking move, AI developers OpenAI and Anthropic have entered into first-of-their-kind agreements with the U.S. government to advance the research, testing, and evaluation of their artificial intelligence models. The collaboration, announced by the U.S. Artificial Intelligence Safety Institute on Thursday, underscores the growing emphasis on the safe and ethical deployment of AI technologies.

These pioneering agreements come as regulatory bodies intensify their scrutiny of AI's potential risks and benefits. California legislators are on the brink of voting on a pivotal bill that could significantly shape the development and deployment of AI within the state.

“Ensuring that AI is safe and trustworthy is essential for its positive impact on society. By collaborating with the U.S. AI Safety Institute, we are leveraging their vast expertise to rigorously test our models before they are widely deployed,” stated Jack Clark, Co-Founder and Head of Policy at Anthropic, a company backed by tech giants Amazon (AMZN.O) and Alphabet (GOOGL.O).

As part of these agreements, the U.S. AI Safety Institute will gain access to major new models from both OpenAI and Anthropic, both before and after their public release. This access will facilitate collaborative research aimed at evaluating the capabilities of these AI models, as well as identifying and mitigating associated risks.

Jason Kwon, Chief Strategy Officer at OpenAI, highlighted the importance of the partnership, stating, “We believe the institute plays a vital role in advancing U.S. leadership in the responsible development of artificial intelligence. Our collaboration aims to set a global standard for AI safety.”

Elizabeth Kelly, Director of the U.S. AI Safety Institute, echoed this sentiment, highlighting the agreements as a key milestone in the ongoing efforts to responsibly guide the future of AI. “These agreements mark the beginning of what we believe will be a long-term commitment to AI safety,” she said.

The U.S. AI Safety Institute, which operates under the National Institute of Standards and Technology (NIST) within the U.S. Department of Commerce, will also work closely with the U.K. AI Safety Institute to provide both companies with feedback and recommendations on potential safety improvements to their models. The agreements follow an executive order issued by President Joe Biden's administration last year, aimed at assessing both known and emerging risks associated with AI technologies.