In this post: OpenAI blocked more than 250,000 attempts to generate images of 2024 U.S. presidential candidates on ChatGPT in order to prevent political misinformation created with AI.
This year, the volume of fake content has increased by 900%, and U.S. intelligence agencies are linking some of it to Russia's attempts to disrupt the election.
The New York Attorney General warned that artificial intelligence chatbots often provide inaccurate information about the election and advised people to rely on official sources. OpenAI confirmed that its popular artificial intelligence tool ChatGPT blocked more than 250,000 requests for images of the main candidates in the 2024 U.S. presidential election. Users repeatedly tried to get ChatGPT to create images of President-elect Donald Trump, Vice President Kamala Harris, incumbent President Joe Biden, Minnesota Governor Tim Walz, and Vice President-elect JD Vance. OpenAI denied all of these requests, presumably to keep ChatGPT from becoming a pawn in the high-stakes disinformation game.
In the run-up to the U.S. election, OpenAI wanted ChatGPT to stay out of politics. Political fakes, fake news created by artificial intelligence, and outright lies spread like wildfire across the internet. Clarity, a company specializing in machine learning, reports that fake news has increased 900% this year.
And according to U.S. intelligence agencies, some of that content is linked to Russian agents trying to influence U.S. politics.
In an October report, OpenAI showed just how bad things have gotten. The company tracked 20 shadowy operations around the world that use artificial intelligence tools to influence people's minds online. Some of them posted AI-generated articles on websites; others spread propaganda through fake social media accounts. According to the OpenAI team, however, these networks were shut down before they reached a wide audience.