
“Breaking News: Microsoft Unveils Cutting-Edge Media Literacy Initiative”



Microsoft Launches Media Program to Combat Misinformation

With the rise of artificial intelligence (A.I.) and the proliferation of fake content on social media, it is becoming increasingly difficult to distinguish real news from misinformation. In response, Microsoft has partnered with the Trust Project, a nonprofit group of news outlets seeking to protect the integrity of the media, to launch a new media program aimed at combating the spread of misinformation. The program places ads on online news articles that link readers to a credibility-check page.

Trust Indicators

Under the program, Microsoft will place ads on some online news articles that direct readers to the Trust Project’s “8 Trust Indicators.” The ads will appear on readers’ devices if they use Microsoft products and services, including Outlook email. The program’s page explains the nonprofit’s eight tenets of credible journalism and tells readers what questions to ask about the media they consume before trusting it. For example, people reading an article should ask whether enough detail is given for sources to be checked, and whether sources are provided for each claim in investigative, in-depth, or controversial stories.

The Threat of Disinformation

Disinformation, which differs from misinformation in that it is intended to deceive, has inflamed national crises in recent years, fueling the anti-vaccination movement during the COVID-19 pandemic and election denialism following the 2020 U.S. election. Conspiracy theories have also sparked radicalism and political violence.

Microsoft’s A.I. Ventures

The media literacy program comes after Microsoft invested $10 billion in buzzy startup OpenAI, maker of the A.I. chatbot ChatGPT. The chatbot is used extensively for plagiarism, especially by students, and is known for producing “hallucinations,” or false information. Microsoft has also integrated A.I. into its search engine Bing by adding ChatGPT-like features to the service.

Controlling Misinformation

Microsoft’s A.I. ventures may be enabling the spread of misinformation even as its media trust program attempts to combat it. On June 5, European regulators pushed several tech companies, including Microsoft, Google, and Facebook parent Meta, to label A.I.-generated content so that people who see it are not confused about its source. Chatbots and A.I. image-generation services that produce realistic but fake visuals have made the fight against disinformation increasingly challenging. Some experts predict that 90% of all online content could be A.I.-generated within the next few years.

Voluntary A.I. Code of Conduct

According to EU officials, the U.S. and the EU are collaborating on a voluntary A.I. code of conduct for industry players to sign. The code will set standards for transparency, privacy, and data quality.

Conclusion

As the threat of disinformation continues to increase, Microsoft’s media program may help consumers differentiate between real news and false information. However, with the increasing use of A.I. and the proliferation of fake content on social media, further measures may be needed to combat the spread of misinformation.


