How your favorite social media apps are preparing for election conspiracies

Voters wait in line to check in ahead of casting their ballots during the last day of early voting in Gwinnett County, Georgia on November 1, 2024. 

Nathan Posner | Anadolu | Getty Images

With Americans heading to the polls on Election Day, social media companies like Meta, TikTok, X and YouTube are under intense pressure to handle what’s expected to be a flood of disinformation, heightened by the rise of artificial intelligence.

It’s been a huge issue since the 2016 presidential election cycle, when foreign adversaries abused social platforms in an effort to sway the outcome. Most notably, Russian operatives flooded Facebook with posts promoting false information about Democratic nominee Hillary Clinton.

Meta says it’s invested more than $20 billion in safety and security for global elections since 2016, and has more recently deprioritized political content on Instagram and Threads. The company has also been working with fact-checkers, amplifying verified voting resources and labeling AI-generated content ahead of Election Day. 

There’s only so much the companies can do. Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, told reporters in an October briefing that foreign actors from Russia, Iran and China have managed to launch viral disinformation campaigns.

Russia was behind a fake video that showed a person ripping up ballots in Pennsylvania last month, according to a joint statement from CISA, the FBI and the Office of the Director of National Intelligence. The video amassed hundreds of thousands of views within hours after it was posted on Elon Musk‘s social media platform X. 

“This Russian activity is part of Moscow’s broader effort to raise unfounded questions about the integrity of the U.S. election and stoke divisions among Americans,” the statement said.

Foreign actors aren’t the only perpetrators. In late September, CNBC informed Meta about a series of Facebook posts containing misinformation on voting in North Carolina. On X, a beta feature in the “explore” section was spreading voter fraud conspiracy theories through the platform’s AI software last month, a report from NBC News found. And TikTok failed to catch ads containing false election information despite its ban on political advertising, according to an October report from Global Witness.

“There’s a lot of information out there, and frankly, a firehose of disinformation,” Easterly said during the briefing.   

Here’s how social media companies have been preparing for Election Day. 

Meta

Meta CEO Mark Zuckerberg testifies before a Senate Judiciary Committee hearing on online child sexual exploitation in the Dirksen Senate Office Building on January 31, 2024, in Washington, D.C.

Celal Gunes | Anadolu | Getty Images

More than 40,000 people are working on Meta’s election safety and security efforts, and the company says it works with 11 independent fact-checking partners in the U.S. They include PolitiFact, Reuters and USA Today. Notably, Meta is not working with The Associated Press this year, as an AP spokesperson previously told CNBC the news agency’s “fact-checking agreement with Meta ended back in January.” 

The company also shut down a fact-checking tool that would have allowed news services like Reuters, as well as credible experts, to add comments at the top of questionable articles as a way to verify their trustworthiness, a change CNBC reported last year. The tool was discontinued as part of broad cost-cutting efforts beginning in 2022.  

On Facebook and Instagram, Meta said it’s adding fact-check labels to election content that’s been debunked. The reach of posts that are deemed false, altered or partly false by fact-checkers will be reduced. 

The company is also using in-app notifications to connect users with information about voter registration. If users search for something election related, they’ll see links to official information about how, when and where to vote, Meta said. The company has launched an official Voting Information Center that’s live on Facebook. 

In May, Instagram Chief Adam Mosseri announced third-party fact-checkers could officially review and rate content on the company’s Threads service. 

“Previously, we matched near-identical false content on Threads based on what was fact-checked on Facebook and Instagram,”  Mosseri said in a post on Threads. “Now fact-checkers can rate Threads content on its own.”

Content that violates Meta’s policies, including posts promoting electoral violence, voter interference and misinformation about how to vote, will be removed, the company said.

WhatsApp users can message fact-checking organizations directly, and Meta will flag content that’s been forwarded through five or more chats. Meta said this will help users understand that the information didn’t come from a close contact and will slow down the spread of rumors.

The company said it’s placing visible watermarks on content generated with its Meta AI feature, as well as invisible watermarks that will be embedded within the image files. If Meta determines that AI-generated content presents a “particularly high risk of materially deceiving the public,” it could add a more prominent label. 

“Our integrity efforts continue to lead the industry, and with each election we incorporate the lessons we’ve learned to help stay ahead of emerging threats,” a Meta spokesperson told CNBC in a statement.


Meta allows political advertising on its platforms, but it prohibits ads that contain early claims of victory, ads that undermine the legitimacy of the election or ads that discourage Americans from voting. The company also blocks new electoral, political and social issue ads during the final week ahead of Election Day. Meta said Monday that the restriction period will be extended “until later this week.”

TikTok 

China could use social media app TikTok to influence the 2024 U.S. elections, U.S. Director of National Intelligence Avril Haines told a House of Representatives intelligence committee hearing on Tuesday.

Anadolu | Anadolu | Getty Images

TikTok said it expects to invest more than $2 billion in trust and safety this year, which includes election integrity, according to a September blog post. 

The company launched its U.S. Election Center in partnership with the nonprofit Democracy Works in January. The Election Center includes voting FAQs from official sources, and users are directed there when they engage with election content and searches, TikTok said. As of Sept. 4, TikTok’s Election Center had been viewed more than 7 million times. 

The company partners with fact-checking organizations so that it can label unsubstantiated content. It also utilizes “specialized misinformation moderators” with specific training and teams, TikTok said. The company is working with AP to make the real-time results of the election available within the app.

TikTok says it doesn’t permit the spread of misleading AI-generated content, which includes depictions of public figures endorsing specific political views. TikTok creators are also required to label realistic AI-generated content, and the company launched a tool to help.

The company doesn’t allow political advertising, and politicians and political parties can’t monetize their TikTok accounts. 

Additionally, TikTok said it’s been working to identify covert influence attempts on the platform. In a May blog post, for instance, the company said it had identified 15 influence operations in the first four months of the year, and it removed more than 3,000 accounts associated with those operations. 

“We found that a majority of these networks were attempting to influence political discourse among their target audience, including in relation to elections,” TikTok said.

American lawmakers have long argued that TikTok’s foreign ownership poses a national security risk. Former President Donald Trump, the Republican nominee, attempted to ban the platform through an executive order in 2020. That effort failed, but the issue gained resonance in recent years as concerns intensified surrounding China’s heightened power.

President Joe Biden signed legislation in April that gave TikTok parent ByteDance nine months to find a buyer for the popular short-form video app, and a three-month extension if a deal is in progress. TikTok sued the U.S. government over the law in May. Litigation is ongoing, and the app’s future in the U.S. is still not clear.

TikTok declined to comment. 

X

Tesla CEO and X owner Elon Musk speaks during a rally for Republican presidential nominee and former U.S. President Donald Trump at Madison Square Garden, in New York, U.S., October 27, 2024.

Carlos Barria | Reuters

X has been actively working with election officials, campaigns, law enforcement and security agencies ahead of the U.S. elections, the company’s global government affairs team said in an article posted to the platform in September. 

The company’s safety team proactively monitors activity across X to detect spam and deceptive accounts, the article said, and it will continue to lean on Community Notes submissions to correct or add context to posts that contain misinformation.  

“Whether they are state-affiliated entities engaged in covert influence operations or generic spam networks, we actively work to thwart and disrupt campaigns that threaten to degrade the integrity of the platform,” the company said.

Musk acquired the company, then called Twitter, for $44 billion at the end of 2022. He has since slashed more than 80% of its headcount, including steep cuts to the trust and safety team. By January 2023, that team was down to fewer than 20 full-time employees. 

Musk has been a staunch supporter of Trump ahead of the election, donating millions of dollars to PACs supporting the Republican nominee and appearing alongside him on the campaign trail. In sharing dozens of posts a day on X, Musk regularly amplifies false election information to his more than 200 million followers. He falsely claimed that Democrats are trying to “import” voters into the U.S., for instance.

According to X’s civic integrity policy, people can’t use the platform to manipulate or interfere in elections or other civic processes. That includes sharing content that may lead to violence, suppress participation or mislead people about how to participate. 

X does allow political advertising across the platform. 

X told CNBC in a statement that the company is “actively enforcing” its content policies, which are “constantly being revised to address evolving threats, adversarial practices, and malicious actors.”


YouTube

Omar Marques | Lightrocket | Getty Images

In a blog post late last month, YouTube said it’s been combating AI-generated election misinformation, highlighting trustworthy content and providing resources about registering to vote throughout the year. YouTube also rolled out new features ahead of Election Day to “connect voters with the information and context they need to stay informed,” the company said.

When users search for information about a federal election candidate, they’ll see a panel above the results that shows details like the candidate’s political party, a link to their channel and a click-through to Google Search. YouTube is owned by Google parent Alphabet. 

“Over the last few years, we’ve heavily invested in the policies and systems that allow us to support elections, not just in the U.S. but around the world,” a YouTube spokesperson told CNBC in a statement.

YouTube’s homepage will highlight information about how and where to vote, and users may see another panel that directs them to Google if they search “how to vote” or “how to register to vote.” Additionally, users will see a shelf of authoritative news channels on the homepage during Election Day, which will be available in both English and Spanish, the company said. 

As polls start to close, YouTube will add context about election results, including links to follow along in real time, under videos and at the top of election-related searches. The company will also temporarily pause ads relating to the election once the last polls close, according to the blog post.  


YouTube will remove election-related content that violates its established guidelines, including videos that mislead voters, encourage interference, incite violence, promote conspiracy theories or threaten election workers, the company said. 

YouTube adds a label to AI-generated content on the platform, and adds “a more prominent label” to synthetic content about sensitive subjects like elections. Videos with AI-generated content can also be removed if they violate YouTube’s guidelines.  

“2024 has been a busy year for elections, and YouTube remains steadfast in our commitment to ensuring that our platform empowers and informs voters while protecting them from harmful misinformation or disinformation campaigns,” the company said.

YouTube is also working with Google’s Threat Analysis Group to identify and combat interference from foreign adversaries. 

Snap

Evan Spiegel, CEO of Snap Inc., speaks onstage during the Snap Partner Summit 2023 at Barker Hangar on April 19, 2023 in Santa Monica, California. 

Joe Scarnici | Getty Images Entertainment | Getty Images

Snap has been offering users in-app resources to learn about the election and local issues this year, according to an April blog post.

The company partnered with Vote.org to allow users to register to vote, check their registration status and sign up for election reminders within the app. Snap said it helped 1.2 million users register to vote during 2020. 

Snap has been covering the election through its flagship news show, “Good Luck America,” and it’s working with “a range of trusted media partners,” including NBC News’ “Stay Tuned,” that have also been providing coverage on the platform, the blog post said. 

Political ads are permitted on Snapchat, and are vetted through a human review process, the company said. Snap partners with the nonprofit Poynter Institute to fact-check the statements that appear in the ads. The company also vets the purchasers of political ads through a registration and certification process. 

Snap didn’t respond to a request for comment. 

Reddit 

Traders work as Reddit’s logo is displayed, at the New York Stock Exchange (NYSE) in New York City, U.S., March 21, 2024. 

Brendan Mcdermid | Reuters

Reddit has been sharing information about early voting, voter registration, poll worker recruitment and other election-related resources through its “u/UpTheVote” account and with on-platform notification channels, according to a February blog post. 

The platform’s search function surfaces official voting resources as well, the company said. Reddit prohibits content that’s intended to prevent people from voting, like posts with inaccurate polling locations and times.

The company has also coordinated a series of Ask Me Anything, or AMA, sessions with experts, nonprofit organizations and other nonpartisan groups to bring users accurate election information. To run political ads, Reddit said, advertisers must have their candidate or an official campaign representative participate in an AMA.

Reddit allows users to opt out of viewing political ads. The company prohibits political attack ads and said political ads that contain AI-generated content must be clearly labeled, according to the blog post. The company also prohibits AI-generated content that’s intended to mislead users.

Reddit didn’t respond to a request for comment. 

— CNBC’s Jonathan Vanian contributed to this report.
