OpenAI is increasingly becoming the platform of choice for online actors looking to influence democratic elections around the world.
In a 54-page report published on Wednesday, ChatGPT’s creator said it had disrupted “more than 20 operations and deceptive networks from around the world trying to use our model.” The threats ranged from AI-generated website articles to social media posts from fake accounts.
The company said its update on “influence and cyber operations” was intended to provide a “snapshot” of what it is seeing and to identify an initial set of trends that, in its words, “can inform the debate about how AI fits into the broader threat landscape.”
OpenAI’s report comes less than a month before the U.S. presidential election. Beyond the United States, this is a pivotal year for elections worldwide, with contests affecting more than 4 billion people in more than 40 countries. The rise of AI-generated content has heightened concerns about election-related misinformation: the number of deepfakes has increased 900% year over year, according to data from machine learning company Clarity.
Misinformation in elections is not a new phenomenon. It has been a major issue since at least the 2016 U.S. presidential campaign, when Russian actors found cheap and easy ways to spread false content on social platforms. In 2020, social networks were awash with misinformation about COVID-19 vaccines and election fraud.
Today, lawmakers’ concerns are more focused on the rise of generative artificial intelligence, which took off in late 2022 with the launch of ChatGPT and is now being adopted by companies of all sizes.
OpenAI wrote in the report that election-related AI applications “range in complexity, from simple content generation requests to complex, multi-stage efforts to analyze and respond to social media posts.” OpenAI said social media content was mainly related to elections in the United States and Rwanda, followed by elections in India and the European Union.
In late August, an Iranian operation used OpenAI’s products to generate “long-form articles” and social media comments about the U.S. election and other topics, though the company said most of the identified posts received few or no likes, shares or comments. In July, the company banned a Rwanda-based ChatGPT account that was posting election-related comments on X. OpenAI wrote that it was able to resolve the case within 24 hours.
In June, OpenAI disrupted a covert operation that used its products to generate commentary about the European Parliament elections in France and about politics in the United States, Germany, Italy and Poland. The company said that while most of the social media posts it found received few likes or shares, some real people did reply to the AI-generated posts.
The company wrote that none of the election-related operations was able to attract “viral engagement” or build “sustained audiences” through the use of ChatGPT and OpenAI’s other tools.