OpenAI Blocks Iranian Accounts Targeting US Election

Mark Eisenberg

OpenAI's Action Against Election Interference

OpenAI announced it has terminated accounts associated with Storm-2035, an Iranian group that used its ChatGPT platform to create content aimed at swaying the U.S. presidential election. The group reportedly employed the AI to draft both long-form articles and brief social media posts on sensitive topics, including commentary on the candidates, the Gaza conflict, and Israel's participation in the Olympic Games.

Limited Impact of the Disinformation Campaign

Despite these efforts, OpenAI's investigation found that the operation gained little traction. Most of the social media posts from these accounts drew minimal likes, shares, or comments, and there was no substantial evidence that the web articles were widely shared. As a result, the implicated accounts have been permanently barred from OpenAI's services.

Continuous Monitoring and Policy Enforcement

OpenAI remains vigilant, continuing to monitor its platforms for further violations. This proactive approach is part of an ongoing effort to prevent misuse of AI technologies in misleading or polarizing campaigns.

Background on Storm-2035

This move follows a Microsoft intelligence report from August, which identified Storm-2035 as an Iranian network operating four websites disguised as news outlets. These sites aimed to engage U.S. voter groups with divisive messaging on polarizing issues, including the presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.

Current U.S. Presidential Race

The backdrop to these events is a highly competitive U.S. presidential race between Democratic candidate Kamala Harris and Republican contender Donald Trump, with the election date set for November 5. OpenAI's intervention highlights the broader challenge of safeguarding electoral processes against covert influence operations.

Previous Disruptions by OpenAI

In a related context, OpenAI disclosed in May that it had already disrupted five covert operations that attempted to exploit its AI models for deceptive activities across the internet. These actions underscore the firm's commitment to maintaining the integrity of its platforms and services.

Mark Eisenberg is a financial analyst and writer with over 15 years of experience in the finance industry. A graduate of the Wharton School of the University of Pennsylvania, Mark specializes in investment strategies, market analysis, and personal finance. His work has been featured in prominent publications like The Wall Street Journal, Bloomberg, and Forbes. Mark's articles are known for their in-depth research, clear presentation, and actionable insights, making them highly valuable to readers seeking reliable financial advice. He stays updated on the latest trends and developments in the financial sector, regularly attending industry conferences and seminars. With a reputation for expertise, authoritativeness, and trustworthiness, Mark Eisenberg continues to contribute high-quality content that helps individuals and businesses make informed financial decisions.