
OpenAI Uses Its Own Models To Combat Election Interference

OpenAI, the company behind the widely used ChatGPT generative AI tool, recently issued a report revealing that it has blocked more than 20 deceptive operations and dishonest networks worldwide so far in 2024. These operations varied significantly in their goals, scale, and focus, and were used to generate malware and to fabricate fake media stories, biographies, and website articles.

The report shows that OpenAI thoroughly analyzed the activities it disrupted and drew critical insights from its investigation. According to the report, “While threat actors adapt and experiment with our models, there is currently no indication of significant advances in their capabilities to produce entirely new malware or to cultivate viral audiences.”

This is particularly significant because 2024 is an election year in several countries, including the United States, Rwanda, India, and members of the European Union. For instance, in early July, OpenAI took action against numerous accounts that were generating comments related to the elections in Rwanda, which were then disseminated by various accounts on X (formerly Twitter). It is reassuring that, according to OpenAI, these malicious actors have struggled to make substantial progress with their campaigns.

Another notable achievement for OpenAI was disrupting a China-based threat actor known as “SweetSpecter,” which attempted spear-phishing attacks against the corporate and personal email accounts of OpenAI staff. The report further details that in August, Microsoft disclosed a collection of domains linked to an Iranian covert influence operation dubbed “STORM-2035.” “Following their report, we investigated, disrupted, and reported an associated set of activities on ChatGPT,” the report noted.

Furthermore, OpenAI indicated that social media posts created with its models garnered minimal engagement, receiving few or no likes, comments, or shares. The organization says it remains vigilant in anticipating how malicious actors might exploit advanced models for harmful purposes, and it intends to take appropriate measures to thwart such efforts.

  • Rukhsar Rehman

    A University of California alumna with a background in mass communication, she now resides in Singapore and covers tech with a global perspective.