Google, Meta, TikTok, OpenAI to Sign Agreement to Combat AI-Generated ‘Election Misinformation’

Major technology companies including Mark Zuckerberg’s Meta, OpenAI, Google and China’s TikTok are signing an agreement aimed at curbing the malicious use of artificial intelligence to meddle in elections.

AP News reports that at least six prominent technology firms intend to finalize an accord on AI election interference at the Munich Security Conference this week. The deal comes as over 50 countries prepare for significant national elections in 2024, with AI disinformation threats already emerging. For example, AI voice cloning robocalls sought to deter voting in New Hampshire’s primary election by impersonating President Joe Biden.

The companies — which reportedly include Adobe, Google, Meta, Microsoft, OpenAI and TikTok — hope the agreement will guide joint efforts to halt the deceptive use of AI targeting voters. Details remain undisclosed, and it is unclear why the rest of the world would trust TikTok, accused of being a Chinese psyop on western nations, to make any good faith effort to preserve election integrity.

Elections worldwide face rising threats from deepfake media — falsely attributed images and recordings produced using AI generative models. Deepfakes could be weaponized to undermine candidates and mislead voters via propaganda. The companies aim to counter these risks through a unified stance against AI disinformation campaigns.

Meta spokesperson Lauren Dickerson stated the firms are “working jointly toward progress on this shared objective.” Critics argue self-regulation alone may prove insufficient to tackle AI fakery, and tougher legal frameworks and greater oversight could be required.

Breitbart News reported earlier this month that Mark Zuckerberg’s Meta will soon begin detecting and labeling AI-generated images on Facebook, Instagram and Threads, as well as requiring users to note when they post realistic AI video or audio as part of its efforts to prevent potential misinformation ahead of upcoming elections.

Meta President Nick Clegg announced the moves stating: “For those who are worried about video, audio content being designed to materially deceive the public on a matter of political importance in the run-up to the election, we’re going to be pretty vigilant. Do I think that there is a possibility that something may happen where, however quickly it’s detected or quickly labeled, nonetheless we’re somehow accused of having dropped the ball? Yeah, I think that is possible, if not likely.”

The new agreement notably excludes X/Twitter, which did not respond to media inquiries. With national elections in the U.S., UK, France, and elsewhere this year, the stakes are high. The companies hope their accord will help safeguard electoral integrity in the face of accelerating technological change, but its success remains uncertain given the pace of malicious innovation.

Read more at AP News here.

The AP contributed to this report. 

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
