**Promoting AI Safety and Security: Biden Administration’s Voluntary Commitments**
On July 21, 2023, the White House announced that seven prominent technology companies, including Amazon, Google, Meta, and Microsoft, had made voluntary commitments in the field of artificial intelligence (AI). These commitments focus on ensuring safety, security, and trust in AI systems. While they are non-binding, they signal the Biden Administration’s dedication to managing the risks associated with AI and promoting its responsible use.
**Voluntary Commitments by Prominent Tech Companies**
The seven signatories include both established tech giants and innovative startups leading the AI industry. Notable absentees, however, include other major tech companies such as Apple and startups such as Elon Musk’s X.AI.
The commitments made by these companies are grouped into three categories: safety, security, and trust. Each category addresses specific aspects of AI-related challenges.
**Commitment to Safety**
One of the safety commitments is internal and external security testing of AI systems before their release. Companies likely already perform internal testing, but the use of external parties raises open questions: who will conduct the testing, and how will testers be certified? The duration of testing is also a crucial consideration, since companies not bound by these commitments could gain a competitive advantage by racing their products to market without waiting for third-party results.
**Commitment to Security**
The voluntary commitments also emphasize the importance of security in AI systems. While the exact details of this commitment are not provided, it is reasonable to assume that companies are already taking steps to secure their AI technologies. However, explicit emphasis on security helps ensure that robust technical mechanisms are developed to protect users and their data.
**Commitment to Trust**
The commitment to trust revolves around making users aware of AI-generated content. Companies are committed to developing mechanisms, such as watermarking systems, to visibly indicate when content is created by AI. This addresses concerns related to deceptive advertising and aligns with proposed legislation that aims to disclose the use of AI in political ads.
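To make the idea concrete, here is a toy sketch of one possible disclosure mechanism: hiding a provenance tag in zero-width Unicode characters appended to generated text. This is purely illustrative and not any signatory’s actual scheme; real provenance systems would likely rely on cryptographic or statistical watermarks and signed metadata, since invisible characters are trivially stripped by copy-paste or reformatting.

```python
# Toy illustration of text provenance tagging via zero-width characters.
# Assumption: this is NOT how any of the committed companies watermark
# content; it only demonstrates the concept of machine-readable disclosure.

ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1

def embed(text: str, tag: str = "AI") -> str:
    """Append the tag's bits as invisible characters after the text."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract(text: str) -> str:
    """Recover the hidden tag from any zero-width characters present."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8  # drop any incomplete trailing byte
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

marked = embed("A generated paragraph.")
assert extract(marked) == "AI"            # tag survives the round trip
assert marked.startswith("A generated")   # visible text is unchanged
```

The fragility of this toy is exactly why the commitment’s details matter: a robust watermark must survive editing, screenshots, and format conversion, which is a much harder technical problem.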
**Aspirations for Societal Impact**
One of the commitments focuses on the development and deployment of advanced AI systems to address society’s greatest challenges, including cancer prevention and climate change mitigation. While these goals are noble and can generate immense value, companies are not precluded from pursuing other objectives such as AI for online ad targeting or rapid stock trading.
The White House’s announcement signifies its willingness to work together with companies on AI-related policy. This collaborative approach showcases the Biden Administration’s interest in addressing AI’s social harms while avoiding policy mistakes that could hinder innovation.
The voluntary commitments made by leading technology companies to promote AI safety, security, and trust are a step in the right direction. However, clarification is still needed on several fronts, including how external testing will be enforced, how testers will be certified, and on what timeline testing will occur. This collaborative effort sets the stage for further discussions and actions to manage the risks associated with AI while fostering innovation and societal progress.