Election disrupters using ChatGPT; OpenAI says it’s not working

(NewsNation) — Bad actors seeking to disrupt elections around the world have attempted to use ChatGPT’s artificial intelligence technology to their advantage, according to a report from the chatbot’s creator, OpenAI.

The artificial intelligence powerhouse’s 54-page report showed that “more than 20 operations and deceptive networks from around the world … attempted to use our models.”

Activity using OpenAI’s technology included debugging malware, writing articles and generating content for fake social media accounts.

The report found that AI typically served as a tool within these operations, not generating the idea for the deception itself but instead helping “perform tasks in a specific, intermediate phase of activity” prior to publishing.

With America’s presidential election less than a month away and a consequential election year worldwide 10 months in, the findings speak to fears many hold about evolving technology and political contests.

In total, the report estimates that 2024’s elections worldwide will affect more than 4 billion people in over 40 countries. Misinformation surrounding elections has run rampant on the internet for roughly a decade, but generative AI products such as ChatGPT pose a new kind of challenge.

But OpenAI said the attempts to use its technology haven’t been all that successful.

Citing specific instances of attempted election meddling in Rwanda, Azerbaijan, Iran, the United States and other countries, the organization said no election-related operations attracted “viral engagement” or developed “sustained audiences.”

For example, a covert Iranian operation that generated social media comments and long-form articles about the upcoming United States election — along with other topics, such as the war in Gaza and Israel’s relationship with the West — regularly received few or no likes at all.

That same story of low engagement and minimal sharing was present across most cases identified by OpenAI.

“It is noteworthy that, of the case studies in this report, the deceptive activity that achieved the greatest social media reach and media interest was a hoax about the use of AI, not the use of AI itself,” the report reads in part.
