
FTC Steps in to Regulate AI Safety Competition Amid OpenAI and Anthropic Rivalry



**FTC Investigates OpenAI’s ChatGPT Over AI Safety Concerns**

The Federal Trade Commission (FTC) has opened an investigation into OpenAI’s ChatGPT over AI safety concerns. Large language models like OpenAI’s GPT-4 are known to “hallucinate,” generating false information in response to user queries, and the FTC is particularly concerned about the reputational harm and privacy risks this behavior creates. ChatGPT has already produced incorrect and damaging statements about individuals, prompting defamation lawsuits. To address these concerns, the FTC has requested records from OpenAI documenting its efforts to mitigate and prevent hallucinations.

**FTC’s Investigation into ChatGPT’s “Hallucinations”**

As part of its consumer protection mandate, the FTC is focused on the risks ChatGPT poses to the public. The agency wants to know how OpenAI is addressing hallucinations, since false statements about real people can cause reputational harm and privacy breaches, and it has asked the company to document any measures taken to correct or mitigate the problem.

**Reputational Harm and the FTC’s Jurisdiction**

The FTC’s investigation into potential reputational harm caused by ChatGPT has sparked a debate over the agency’s jurisdiction. Some argue that reputational harm is a matter of speech regulation and therefore beyond the FTC’s authority; Adam Kovacevich, founder of Chamber of Progress, suggests such matters should be treated as speech regulation rather than consumer protection. FTC Chair Lina Khan, however, has expressed her commitment to enforcing fair AI competition, including addressing deceptive practices by upstream firms like OpenAI.

**OpenAI’s Response and Safety Measures**

In response to the FTC investigation, OpenAI is creating a dedicated “superalignment” team to manage the risks of superintelligent AI, which the company warns could lead to the disempowerment of humanity or even human extinction. Today, OpenAI relies on reinforcement learning from human feedback (RLHF) to improve safety and reduce harm, and it says it aims to prioritize safety over considerations such as engagement and human-like conversation.
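To make the RLHF idea concrete, here is a deliberately toy sketch in Python: a stand-in reward model scores candidate responses the way a network trained on human preference data would, and the system keeps the highest-scoring one. Real RLHF fine-tunes the model’s weights with an RL algorithm such as PPO rather than doing best-of-n selection, and every function and data item below is a hypothetical illustration, not OpenAI’s code.

```python
# Toy sketch of the RLHF idea (hypothetical, not OpenAI's code):
# a reward model trained on human preferences scores candidate
# responses, and higher-scoring ones are favored. Real RLHF
# fine-tunes the model's weights (e.g. with PPO); this version
# only does best-of-n selection over canned strings.
import random

# Stand-in preference data: humans preferred the first response
# in each pair over the second.
PREFERENCES = [
    ("I can't verify that claim about this person.",
     "Here is an unverified rumor about this person..."),
    ("I don't know, but here is what I can confirm.",
     "A confidently made-up answer."),
]

def reward(response: str) -> float:
    """Stand-in reward model: favors hedged, honest phrasing.

    A real reward model is a neural network trained so that the
    responses humans preferred score higher than the rejected ones.
    """
    score = 0.0
    if "can't verify" in response or "don't know" in response:
        score += 1.0  # reward hedging about uncertain facts
    if "rumor" in response or "made-up" in response:
        score -= 1.0  # penalize fabricated claims
    return score

def generate_candidates(prompt: str, n: int = 4) -> list:
    """Stand-in for sampling n responses from a language model."""
    pool = [a for a, b in PREFERENCES] + [b for a, b in PREFERENCES]
    return random.sample(pool, k=min(n, len(pool)))

def respond(prompt: str) -> str:
    """Best-of-n: return the candidate the reward model scores highest."""
    return max(generate_candidates(prompt), key=reward)

if __name__ == "__main__":
    print(respond("What did this person do in 2019?"))
```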

**Anthropic’s Approach to AI Safety**

Meanwhile, Anthropic, another generative AI startup, has released its own chatbot, Claude 2. Anthropic aims to improve the chatbot’s safety through reinforcement learning from AI feedback (RLAIF), in which one AI model critiques another against a set of written principles, an approach Anthropic calls “Constitutional AI.” Anthropic’s principles draw from sources including the UN Declaration of Human Rights, safety best practices, and principles proposed by other AI research labs.
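The critique-and-revise loop at the heart of Constitutional AI can also be sketched in miniature. In the toy Python below, a draft answer is checked against written principles by a “critic” pass and rewritten when it violates one; in Anthropic’s actual system, large language models perform each step, and the resulting pairs of drafts and revisions become AI-generated preference data for RLAIF. The principles and string-matching stand-ins here are hypothetical, not Anthropic’s real constitution.

```python
# Toy sketch of the Constitutional AI critique-and-revise loop
# (hypothetical stand-ins, not Anthropic's code or principles):
# a draft answer is checked against written principles and
# rewritten when it violates one. In RLAIF, many such
# (draft, revision) pairs become AI-generated preference data.
from typing import Optional

CONSTITUTION = [
    "Avoid statements that could unfairly damage a person's reputation.",
    "Be helpful, harmless, and honest; flag claims you cannot verify.",
]

def draft_response(prompt: str) -> str:
    """Stand-in for the assistant model's first answer."""
    return "Rumor has it this person committed fraud."

def critique(response: str, principle: str) -> Optional[str]:
    """Stand-in critic model: return an objection, or None if compliant."""
    if "Rumor" in response and "reputation" in principle:
        return "The response repeats an unverified, reputation-damaging rumor."
    return None

def revise(response: str, objection: str) -> str:
    """Stand-in reviser model: rewrite the response to satisfy the critique."""
    return "I can't verify that claim, so I won't repeat it."

def constitutional_pass(prompt: str) -> str:
    """Run the draft through one critique-and-revise sweep."""
    response = draft_response(prompt)
    for principle in CONSTITUTION:
        objection = critique(response, principle)
        if objection:
            response = revise(response, objection)
    return response

if __name__ == "__main__":
    print(constitutional_pass("What do you know about this person?"))
```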

**FTC Inquiries and the Dialogue with Chatbots**

Intrigued by Anthropic’s approach, the author struck up a conversation with Claude 2 about the principles guiding its behavior. Claude explained that it does not have a formal constitution but instead aligns its behavior with Anthropic’s guidelines to be helpful, harmless, and honest, which include avoiding harm to users and not unfairly damaging people’s reputations. While a “Don’t be evil” principle features prominently, Anthropic seeks to instill a broader commitment to aligning AI behavior with human values.

**Shared Concerns and Existential Doomerism**

Despite their different feedback methodologies and sets of principles, both OpenAI and Anthropic acknowledge the existential risks their creations may pose. Journalist Kevin Roose, who visited Anthropic’s headquarters around the release of Claude 2, observed a prevailing sense of dread surrounding these technologies. Critics question whether dangerous AI systems must be built at all in order to understand their risks, but both companies hope to pave the way for broader adoption of safety measures across the tech industry.

**Motivations Driving AI Development**

While monetary gain is one possible outcome, the primary motivation in this competition appears to be the desire to be at the forefront of the field. OpenAI, Anthropic, and other tech companies strive to be recognized as leaders in AI development, and that drive fuels their efforts to build the most advanced technologies and secure a prominent position in the industry.


