Experts caution about the spread of AI and ‘deepfake’ images ahead of election

NASHVILLE, Tenn. (WKRN) — The old adage says you can’t trust everything you read online, but increasingly, you can’t trust everything you see, either.

Artificial intelligence and so-called “deepfake” videos have spread across social media platforms. With the election looming, calls for regulation have grown.


With a simple click online, you could see a post meant to tug on your heartstrings. Perhaps it shows a photo of a 3-year-old boy, and the woman posting it is asking for help. In reality, however, the whole post is fake.

Lisa Fazio, an associate professor of psychology and human development at Vanderbilt University, said these posts are designed to spread.

“People tend to make them [deepfake posts] things people want to share and want to spread,” Fazio said. “Information about a lost kid — you want to make sure the kid gets found.”

Fazio explained that AI’s capabilities can be daunting, especially since the technology can manipulate images without being noticed.

That’s why several police departments across Middle Tennessee, including the Mt. Juliet Police Department, have warned people to watch out for fake posts.


“People have a hard time spotting misinformation in the wild,” Fazio added. “It’s difficult for us to notice when something contradicts our prior knowledge or what we should think about the world, and images are a really powerful form of information. Deepfakes, or edited images, or images just used out of context can be a really powerful way of altering our beliefs.”

It’s not happening only on social media, though. AI has now entered politics. With the presidential election in November, some are worried about the lack of federal regulations on deceptive deepfakes in political ads.


For months, Public Citizen has pushed the federal government to ban political deepfake images, videos, and voiceovers, which are increasingly difficult to detect.

“It’s entirely possible that we are going to have late-breaking deepfakes that show a candidate drunk or saying something racist or behaving in outrageous ways when they’ve never said any of those things,” Public Citizen co-president Robert Weissman explained.

The Federal Communications Commission is racing to finalize a rule mandating that all TV and radio ads made with AI include a disclaimer.
