**Advancing Responsible Practices in Artificial Intelligence: A Collective Commitment**
We are excited to announce that we are teaming up with other prominent AI companies to jointly pioneer responsible practices in the development of artificial intelligence. Today marks a significant milestone in bringing the industry together to ensure that AI is beneficial to everyone. These commitments will support the efforts of the G7, the OECD, and national governments to maximize the advantages of AI while mitigating its risks.
**Applying AI to Tackle Society’s Greatest Challenges**
For over a decade, we have been dedicated to advancing AI technology. In 2017, we shifted our focus to becoming an “AI-first company.” Today, AI powers various Google services that millions of people use daily, including Google Search, Translate, Maps, and more. Additionally, we are leveraging AI to address societal issues, such as forecasting floods, reducing stop-and-go traffic to cut carbon emissions, enhancing healthcare by accurately answering clinical questions, and contributing to the detection and screening of diseases like breast cancer.
**Promoting Safe and Secure AI Systems**
While improving services with AI is crucial, our priority is ensuring the safety and security of these systems. We design our products with a secure-by-default approach, and our approach to AI is no exception. To enhance AI system security across organizations, we recently introduced our Secure AI Framework (SAIF). Moreover, we have expanded our bug bounty programs, including the Vulnerability Rewards Program, to encourage research on AI safety and security. We subject our models to adversarial testing to minimize risks. In addition, our Google DeepMind team is at the forefront of developing AI models that communicate more safely, prevent misuse, and are designed with a focus on ethics and fairness. We are committed to sustaining these efforts and participating in further red teaming exercises, including one at DEF CON next month.
**Building Trust in AI Systems**
Recognizing that new AI tools can amplify societal challenges such as misinformation and unfair bias, we published a set of AI Principles in 2018. These principles serve as guidelines for our work. We established a dedicated governance team to ensure that ethical reviews are conducted on new AI systems, addressing bias and incorporating privacy, security, and safety. Additionally, our Responsible AI Toolkit helps developers pursue AI responsibly. We will continue striving to build trust in AI systems and share regular progress reports on our endeavors.
When it comes to content, we are taking proactive steps to promote trustworthy information. Soon, we will integrate watermarking, metadata, and other innovative techniques into our latest generative models. Additionally, Google Search will feature an “About this image” tool, providing context about where an image originally appeared online. Effectively addressing AI-generated content requires industry-wide solutions, and we look forward to collaborating with others, including the synthetic media working group of the Partnership on AI.
**Building Responsible AI Together**
Acknowledging the complexity of AI, we understand that no single entity can have a complete grasp of its development. Therefore, we are proud to endorse these commitments alongside other leading AI companies. We pledge to continue working together by sharing information and best practices. Collaborative groups such as the Partnership on AI and MLCommons are already spearheading critical initiatives, and we eagerly anticipate further efforts to foster responsible development of new generative AI tools.