
Viral celebrity deepfake ad warns that AI could be used to trick you into not voting.

Efforts to influence elections increasingly rely on artificial intelligence and deepfakes. That is why they are front and center as a warning in a viral public service announcement.

“In this election, bad actors will use AI to trick you into not voting,” the ad says. “Don’t fall for it. This threat is very real.”

The video “Don’t Let AI Steal Your Vote” features Hollywood stars such as Rosario Dawson, Amy Schumer, Chris Rock and Michael Douglas. But many of them are not real. Douglas, Rock and Schumer, for example, are deepfakes.

“The artists involved were very enthusiastic about it,” Joshua Graham Lynn, CEO and co-founder of RepresentUs, the national, nonpartisan anti-corruption organization behind the video, told Scripps News.

“Everyone you see there either gave us their likeness or appeared in person. They were all really excited to help because they know this is a really important election,” Lynn added.

RELATED STORY | Scripps News underwent a deepfake to see how AI could impact elections

The video, which has been viewed over 6 million times on YouTube, warns voters to pay closer attention to what they see and hear online.

“If something seems wrong, it probably is,” the real Rosario Dawson says in the video.

“Right now it's so hard to tell what's real and what's fake on the internet,” Lynn said. “You just look at every new video and sometimes you can’t tell if it was created entirely by AI.”

“Technology is advancing rapidly, and more importantly, malicious actors will always be at the forefront,” he added.

Disinformation experts and community leaders have suggested that AI-generated content is being used to sow chaos and confusion around the election. As ABC News previously reported, the Department of Homeland Security warned state election officials that AI tools could be used to “create fake ballots, impersonate election workers to gain access to sensitive information, generate fake voter calls to overwhelm call centers, and more convincingly spread false information online.”

“So we want voters to use their brains,” Lynn said. “Be skeptical if you see something that tells you not to participate. If you see something about a candidate that you support, question it. Check it again.”

While deepfakes could be used to spread election disinformation, experts warn they could also be used to destroy the public's trust in official sources, facts or their own instincts.

“We have situations where we all start to doubt the information we find, especially information that relates to politics,” Kaylyn Jackson Schiff, a professor at Purdue University, told Scripps News. “And then in the election environment we’re in, we’ve seen examples of claims that real images are deepfakes.”

Schiff said this phenomenon, this widespread uncertainty, is part of a concept called “The Liar's Dividend.”

“Because of widespread knowledge of deepfakes and manipulated media, one can credibly claim that real images or videos are fake,” she said.

RELATED STORY | San Francisco is suing websites that create deepfake nude photos of women and girls

Schiff, who is also co-director of Purdue's Governance and Responsible AI Lab, and Purdue University graduate student Christina Walker have been tracking political deepfakes since June 2023, recording over 500 cases in their political deepfakes incident database.

“For a lot of the things we capture in the database, the communication goal is actually satire, so it's almost more like a political cartoon,” Walker told Scripps News. “It’s not always because everything is very malicious and intended to cause harm.”

Still, Walker and Schiff say some of the deepfakes cause “reputational damage” and even parody videos intended for entertainment can take on new meaning when shared out of context.

“There is still concern that some of these deepfakes, initially shared for fun, could deceive people who do not know the original context when the post is later reshared,” Schiff said.

While the deepfakes in the “Don't Let AI Steal Your Vote” video are difficult to spot, Scripps News took a closer look and found visual artifacts and disappearing shadows. Deepfake technology has improved, but Walker said there are still telltale signs at this point.

“It could be extra or missing fingers, blurry faces, or text in the image that isn’t quite right or isn’t aligned correctly. All of these things can indicate that something is a deepfake,” Walker said. “The better these models get, the harder it becomes to tell. But there are still ways to check the facts.”

Fact-checking a deepfake or any other video that triggers an emotional response, particularly one related to an election, should start with official sources such as secretaries of state or vote.gov.

“We encourage people to seek additional sources of information, especially when it comes to politics and an election is approaching,” Schiff said. “Also, thinking about who the source of the information is and what motivations they might have for sharing that information.”

If anything tells you as a voter, “Don’t vote, things have changed at the last minute,” be skeptical and double-check it, Lynn said.