OpenAI has reported rejecting more than 250,000 requests to generate deepfake images of US election candidates through its AI tools, including its DALL-E image-generation model. The rejections were part of proactive safety measures designed to prevent misuse of the company's technology in the run-up to the election.
In a blog update released on Friday, OpenAI said it had blocked attempts to create AI-generated images of major political figures, including President-elect Donald Trump, President Joe Biden, Vice President Kamala Harris, and their respective running mates. The aim was to ensure its AI models could not be used to spread misleading or harmful content during a highly sensitive election period.
“These measures are particularly crucial in the context of elections, where the risk of deceptive or malicious uses of our technology is higher,” OpenAI stated in the blog post. The company added that it had seen no evidence of successful influence campaigns or viral election-related misinformation spreading through its platforms.
This is not the first time OpenAI has had to intervene to prevent misuse of its tools. Earlier this year, it thwarted an Iranian influence operation known as Storm-2035, which had been attempting to generate fake political content under the guise of both conservative and liberal news sources. In response, OpenAI banned the accounts linked to the operation. Additionally, in October, the company revealed it had disrupted more than 20 other deceptive campaigns from around the world that had tried to exploit its platforms.
Despite these attempted misuses, OpenAI emphasized that none of the election-related influence operations targeting its platforms managed to generate widespread engagement or viral content.