OpenAI and Anthropic to Collaborate with US AI Safety Institute

The US AI Safety Institute has reached agreements with AI startups OpenAI and Anthropic to assess their technologies for safety. The agreements aim to address AI risks and raise standards for responsible AI development in response to growing regulatory pressure.

The US government has announced deals with prominent artificial intelligence startups OpenAI and Anthropic under which it will help test and assess the safety of their upcoming technologies.

As part of the agreements announced on Thursday, the US AI Safety Institute will be granted early access to the companies’ major new AI models to evaluate their capabilities and potential risks and collaborate on methods for mitigating any issues.

Operating under the Department of Commerce’s National Institute of Standards and Technology (NIST), the AI Safety Institute is also positioned to receive input from the UK’s AI Safety Institute on potential safety enhancements.

Previously, Anthropic had tested its Claude 3.5 Sonnet model in cooperation with the UK AI Safety Institute before the technology’s release. The US and UK organisations have already expressed their intention to collaborate on standardised testing.

Elizabeth Kelly, the director of the AI Safety Institute, emphasised the importance of safety in driving innovative technological breakthroughs. She views these agreements as just the beginning, marking an important milestone as they strive to guide the future of AI responsibly.

OpenAI’s Chief Strategy Officer, Jason Kwon, strongly supports the US AI Safety Institute’s mission and looks forward to collaborating to establish best practices and standards for AI models’ safety. He believes the institute is pivotal in defining US leadership and ensuring the responsible development of artificial intelligence.

Jack Clark, co-founder and head of policy at Anthropic, highlighted the significance of expanding the capacity to test AI models effectively. He stressed the importance of safe and reliable AI in maximising technology’s positive impact and sees this as a way to identify and address risks, furthering responsible AI development. 

He said he was proud to contribute to this crucial work and to help set new benchmarks for safe and dependable AI.

The US AI Safety Institute was established in 2023 under the Biden-Harris administration’s Executive Order on AI. It is responsible for developing the testing methods, evaluations, and guidelines needed for responsible AI innovation.
