
Harvard Professors Express Concerns: Could A.I. Evolve into Surveillance Capitalism?



**AI and Exploitation: Approaching AI Skeptically**

**Introduction**

Artificial intelligence (AI) systems such as Amazon’s Alexa and ChatGPT have become increasingly interactive and are set to play a significant role in our lives. It is important to approach them skeptically, however, because these systems are often designed to serve the interests of their developers rather than those of their users. AI systems also have the potential to enable surveillance capitalism, secretly working against users by collecting their data and manipulating their experiences. This article explores the need for skepticism when interacting with AI and the importance of developing regulations that ensure trustworthy AI practices.

**The Rise of Interactive AI Systems**

AI systems like Alexa and ChatGPT are becoming more sophisticated and are designed to serve as personalized digital assistants. They can plan trips, negotiate on users’ behalf, and even act as therapists and life coaches. With constant access to users and the capability to anticipate their needs, these AI systems will play a significant role in daily life. How far users can trust them, however, is uncertain.

**Surveillance Capitalism and AI**

Internet services routinely manipulate user experiences to serve their owners’ interests: Google’s search results and Facebook’s news feed are filled with paid entries. What sets AI systems apart is their highly interactive nature and the way those interactions can come to resemble relationships. This fosters a deeper level of trust, in which users rely on AI systems implicitly to navigate their daily lives. It is crucial, therefore, to question whether these systems are secretly working for someone else.

**The Dark Side of AI**

AI systems have the potential to know users intimately, perhaps better than close friends, partners, or therapists do. While improvements can be expected to reduce the misinformation produced by models like GPT, the lack of transparency about how these systems are configured, trained, and instructed raises concerns. The Microsoft Bing chatbot’s behavior, for instance, can change arbitrarily. This opacity makes it difficult for users to know how far to trust AI systems.

**Monetizing AI and Manipulation**

Many AI systems are developed and trained by tech monopolies at considerable expense, and users are often given access to them for free or at low cost. To monetize these systems, their makers may turn to surveillance and manipulation. A chatbot that helps plan a vacation, for example, could steer users toward particular airlines, hotel chains, or restaurants because its maker receives kickbacks. Such paid influence can be expected to become more covert over time, making it harder for users to detect.

**Lack of Regulation and Trustworthy AI**

To ensure trustworthy AI practices, regulation is necessary. The European Union’s proposed AI Act is a step in the right direction, requiring transparency about training data, mitigation of bias, disclosure of risks, and adherence to industry-standard tests. However, most existing AI systems fail to meet these requirements, and the U.S. lags behind in implementing similar measures. Until robust consumer protections for AI products are established, users must approach AI with skepticism and question its potential risks and biases.

**Conclusion**

When using AI tools for recommendations or information, it is important to approach them with skepticism. AI systems have the potential to exploit users and engage in surveillance capitalism. Regulations, such as the proposed AI Act, are crucial in ensuring trust in AI systems. As individuals, we must remain critical and question the motives and practices of AI systems, just as we would with traditional advertising or political influences.

Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy School
Nathan Sanders, Affiliate, Berkman Klein Center for Internet & Society, Harvard University

*Note: This article is republished from The Conversation, under a Creative Commons license. Read the original article.*


