Biden Administration's Plans for AI Safety Meeting
After the U.S. elections, the Biden administration is set to host a significant international AI safety meeting in San Francisco on November 20 and 21. The meeting will bring together government scientists and AI experts from at least nine countries and the European Union to focus on the safe development of AI technology.
Previous Collaborations and Future Goals
The meeting follows an earlier AI Safety Summit in the United Kingdom, where delegates committed to working together to reduce the risks posed by advances in AI. According to U.S. Commerce Secretary Gina Raimondo, this will be the first working session since those summits, intended to advance the research and testing of AI technologies.
Key Discussion Topics
Major topics on the agenda will be the rise of AI-generated fake content and the difficulty of determining when an AI system becomes so capable or dangerous that it requires regulation. Raimondo stressed the critical need to set standards for managing the risks of synthetic content and harmful AI use, highlighting the substantial benefits if those risks are effectively addressed.
Technical Collaboration and Broader Summits
The San Francisco event is intended as a technical collaboration on safety measures, building toward a broader AI summit planned for February in Paris. It will take place shortly after the U.S. presidential election between Vice President Kamala Harris and former President Donald Trump. The Commerce Department and the State Department are among the agencies co-hosting the meeting.
Global Participation and Absence of China
A network of national AI safety institutes from the U.S., the UK, Australia, Canada, France, Japan, Kenya, South Korea, and Singapore, along with the European Union, will participate. China, however, is notably absent. Raimondo pointed to the universal nature of AI risks, such as the technology's potential use in nuclear weapons systems or bioterrorism, suggesting that global consensus on these issues should be achievable.
Diverse Government Approaches to AI Regulation
Although many governments agree that AI needs safeguards, their strategies differ. The EU has adopted a comprehensive AI law that imposes strict regulations on high-risk applications. In the U.S., President Biden issued an executive order requiring developers of the most advanced AI systems to share safety test results with the government, and he tasked the Commerce Department with establishing safety standards.
Tech Companies and AI Regulation
San Francisco-based OpenAI gave the U.S. and UK national AI safety institutes early access to its latest AI model, named o1. While the model is noted for its complex reasoning abilities, it has been rated a medium risk with respect to weapons of mass destruction. The Biden administration has urged AI firms to voluntarily test their most advanced models before public release, but Raimondo argued that voluntary measures may not suffice and that legislation may ultimately be required.
California Legislation on AI and Deepfakes
Tech companies broadly acknowledge the need for AI regulation but are wary of proposals they argue could stifle innovation. In California, new laws have been enacted to combat political deepfakes ahead of the 2024 election, though a more controversial bill aimed at regulating extremely powerful AI models has yet to be approved.