
AI-Generated Election Content Exposes How Unprepared Platforms Are



**Platforms Struggle to Address AI-Generated Misinformation**

During Chicago’s 2023 mayoral race earlier this year, candidate Paul Vallas was targeted by a deepfaked video that misrepresented his political positions. The clip, which appeared authentic, was widely shared before the Vallas campaign denounced it as an AI-generated fake. The incident raises a pressing question: how will social media platforms like Facebook and Twitter curb the spread of deepfakes and other AI-generated misinformation in the upcoming presidential election?

Currently, the biggest social media platforms lack specific policies to address AI-generated content, whether political or otherwise. Facebook and Instagram, owned by Meta, rely on third-party fact-checkers to review flagged content potentially containing misinformation. These fact-checkers debunk content that includes faked, manipulated, or transformed audio, video, or photos. Reddit and YouTube also have policies against content manipulation and disinformation campaigns, with YouTube specifically removing election-related content that violates its misinformation policies.

As for Twitter, the company recently updated its synthetic and manipulated media policy. Under the policy, tweets containing misleading media may be labeled, and content deemed harmful to individuals or communities will be removed. Media fabricated using AI algorithms is subject to closer scrutiny.

**TikTok’s Comprehensive AI-Generated Content Moderation Policy**

TikTok stands out among social media platforms with a comprehensive policy aimed at moderating AI-generated content. In March 2023, TikTok introduced a new “synthetic media policy” requiring creators to disclose when realistic-looking scenes have been generated or modified by AI. The platform explicitly bans AI-generated images that impersonate public figures for political endorsement. Since April, TikTok has been removing content that violates the policy and warning the accounts that share it.

However, simply disclosing the use of AI may not be enough to stop misinformation from spreading. Bad actors will inevitably try to evade whatever policies or standards are put in place, which makes well-staffed integrity teams crucial. Twitter, in particular, may struggle in the next election after laying off much of the staff that tackled misinformation.

**The Growing Problem of AI-Generated Misinformation**

The spread of misinformation has long plagued social media platforms, as the January 6 assault on the U.S. Capitol demonstrated. The 2024 presidential election will be the first in which campaigns and their supporters have access to powerful AI tools that can rapidly generate realistic-looking fake content. Even so, experts warn against exaggerating the potential damage of AI-generated content in the next election: overstating the success of deepfake-driven disinformation campaigns can itself undermine trust in democratic systems, because voters who come to believe fakes are everywhere may dismiss authentic content as well.

Beyond social media, search engines such as Google also need to guard against AI-generated content. Google has taken steps to exclude manipulated content from highlighted results in knowledge panels and featured snippets. Google Ads prohibits manipulated media and requires advertisers to undergo an identity verification process and include in-ad disclosures of who is paying for the content. Google’s CEO, Sundar Pichai, announced new features such as “About This Image” to disclose when images found via search were AI-generated. Google will also introduce automatic watermarks on images and videos created with its generative models to help users identify synthetic content.

Watermarks are among the core demands of the Content Authenticity Initiative, a coalition of over 200 organizations promoting the adoption of an industry standard for content authenticity. Going further, Hany Farid, an electrical engineering and computer science professor at UC Berkeley, suggests making it mandatory for content creators to include a “nutrition label” disclosing how their content was created.
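To make the “nutrition label” idea concrete, here is a minimal, hypothetical Python sketch of what such a record could look like: a signed provenance manifest bound to a file’s cryptographic hash, so that tampering with either the content or the label is detectable. The field names and the shared-key signature are illustrative assumptions only; real provenance standards, such as the C2PA specification backed by the Content Authenticity Initiative, define far richer, certificate-based manifests.

```python
import hashlib
import hmac
import json

# Illustrative only: real provenance systems use certificate-based
# signatures (PKI), not a shared secret like this demo key.
SIGNING_KEY = b"demo-signing-key"


def make_nutrition_label(content: bytes, tool: str, ai_generated: bool) -> dict:
    """Build a signed provenance record ("nutrition label") for some content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds label to content
        "created_with": tool,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_nutrition_label(content: bytes, record: dict) -> bool:
    """Check that the label is untampered and actually describes this content."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


image = b"...raw image bytes..."
label = make_nutrition_label(image, tool="SomeImageGenerator v1", ai_generated=True)
assert verify_nutrition_label(image, label)                    # intact: passes
assert not verify_nutrition_label(image + b"edited", label)    # tampered: fails
```

Binding the label to the content hash is the important design point: a disclosure that merely accompanies a file could be copied onto a different, manipulated one, whereas a hash-bound, signed label cannot.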

**Can Tech Companies Self-Regulate or Will Governments Need to Intervene?**

The key question remains whether tech companies can effectively self-regulate or whether government intervention will be necessary. Farid says he would prefer not to rely on governments, but acknowledges that their help will be needed to address the issue. Striking the right balance between technological advancement and responsible regulation is crucial for safeguarding democratic systems against the harmful consequences of AI-generated misinformation.

With the upcoming presidential election drawing near, social media platforms and search engines must adopt robust policies and strategies to combat the spread of AI-generated misinformation. Platforms need to prioritize the detection and removal of deepfakes and other manipulated content while promoting transparency and accountability among content creators. The collective effort of tech companies, researchers, and policymakers is vital in preserving the integrity of democratic processes in the face of evolving and increasingly sophisticated AI technologies.



