Leading AI Firms Band Together in Safety Consortium to Tackle Risks, US Government Announces

On Thursday, the Biden administration announced the formation of the US AI Safety Institute Consortium (AISIC), a new initiative bringing together more than 200 members, including tech giants and key industry players, to foster the safe development and use of generative AI.

Commerce Secretary Gina Raimondo revealed that the consortium encompasses OpenAI, Google, Anthropic, Microsoft, Meta Platforms, Apple, Amazon.com, Nvidia, Palantir, Intel, JPMorgan Chase, Bank of America, and other leading corporations like BP, Cisco Systems, IBM, Hewlett Packard, Northrop Grumman, Mastercard, Qualcomm, Visa, along with prominent academic and government bodies.

“The US government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” Raimondo said in a statement.

Housed under the US AI Safety Institute (USAISI), the group aims to implement the directives of President Biden’s October AI executive order, focusing on establishing guidelines for AI safety measures such as red-teaming, risk management, and watermarking synthetic content. The initiative, which also addresses cybersecurity and other major risks, marks a significant stride toward governing AI’s development amid growing concerns over its potential societal impacts. Despite the administration’s proactive stance, congressional efforts to regulate AI have yet to materialize.
