WITH the US presidential elections coming up in less than a month, political campaigns and misinformation efforts are becoming more rampant. The use of generative AI technologies, specifically large language models (LLMs) and large image models, makes these campaigns even more potent and potentially dangerous.
Experiment-driven research by Ben Gelman and Adarsh Kyadige of Sophos X-Ops shows how, by applying generative AI, bad actors could tailor disinformation campaigns to affect the outcome of any election on a massive scale with relatively little effort.
Historically, misinformation campaigns have leveraged political and ideological views to sway individuals, inciting them to act or even pulling them into scams. These efforts, particularly when backed by well-funded groups, have significant societal consequences. But until recently, creating persuasive and targeted misinformation required substantial resources, time, and labor. Now, generative AI provides a shortcut to producing content that resonates deeply with individuals, raising the stakes for potential misuse.
AI-driven microtargeting differs from traditional identity-based or mass-targeted campaigns, which can often backfire when the content doesn’t align with a recipient’s views. By contrast, it enables the creation of highly personalized messages that match each individual’s political beliefs, thereby increasing the effectiveness and reach of disinformation efforts.
“We’ve already seen how generative AI tools can be used in ongoing fraud campaigns–sending generative text to scam victims, creating deceptive social media images, and using AI to produce deepfake videos and voices for social engineering attacks,” the authors of the report say.
The same tools that have fueled these fraudulent activities are now being applied to political misinformation and manipulation, particularly on social media platforms.
Data from recent studies show that in countries with high internet penetration rates, such as the United States, Canada, and European nations, upwards of 80 percent of the population use social media as a primary source of news and information. This reliance on digital platforms makes these populations particularly vulnerable to AI-generated political disinformation.
For example, during the 2020 U.S. elections, a report from the Stanford Internet Observatory highlighted that disinformation reached 65 percent of voters through social media, often via hyper-targeted political ads and fabricated news stories. As generative AI becomes more advanced, the potential for similar, even more personalized manipulation grows.
One of the more concerning aspects of AI-driven misinformation is how it amplifies the scale and reach of these campaigns. Previously, if a political message or piece of misinformation was distributed to a large audience, it ran the risk of alienating those who disagreed with it. But with AI, misinformation can be tailored to specific individuals who are more likely to accept and agree with it.
“If someone includes intentional misinformation in a bulk email, people who don’t agree with that misinformation will be turned away from the campaign. But in the method we explored in our research, misinformation is only added to the email when that specific individual is likely to agree with it. The ability to do this can completely change the scale at which misinformation can propagate,” the report outlines.
This hyper-targeting of individuals with misinformation is made possible by combining generative AI tools with data scraped from social media and other public sources. By analyzing user profiles, complete with details like political leanings, hobbies, and frequently visited locations, AI models can craft political messages specifically designed to resonate with each user. In one experiment, the researchers prompted OpenAI’s GPT-4 model to generate synthetic user profiles that simulated real-world data, including personal interests, demographic information, and political affiliations. These profiles were then used to create microtargeted political emails that tailored the content of a campaign message to fit each individual’s views.
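The experiment described amounts to a two-stage pipeline: generate a synthetic profile, then condition a campaign message on that profile. The sketch below is a rough illustration of that structure only. It assumes OpenAI’s Python client and the GPT-4 chat model, and the prompts and helper functions (generate_profile, generate_email) are hypothetical rather than the researchers’ actual code; it also deliberately produces only benign campaign material.

```python
# Rough sketch of the two-stage setup described in the research:
# (1) generate a synthetic voter profile, (2) condition a campaign email on it.
# Prompts, field choices, and function names here are hypothetical; only the
# general "profile -> tailored message" structure follows the report, and the
# output is limited to benign campaign material.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_profile() -> str:
    """Ask the model for a fictional persona with interests and leanings."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Invent a fictional voter profile: name, age, region, "
                "hobbies, and general political leaning, in a few sentences."
            ),
        }],
    )
    return resp.choices[0].message.content


def generate_email(profile: str, campaign_theme: str) -> str:
    """Draft campaign material whose framing is tailored to the profile."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Write a short campaign email about '{campaign_theme}' "
                "addressed to the fictional voter described below, framing "
                f"the argument around their interests and views:\n{profile}"
            ),
        }],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    persona = generate_profile()
    print(generate_email(persona, "local infrastructure funding"))
```

The report’s central warning is that this same structure scales: pointed at real scraped profiles rather than synthetic ones, and with misleading claims inserted only where a profile suggests the recipient will accept them, it yields the kind of individualized misinformation described below.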
The result was striking. The AI-generated emails presented unique arguments to each recipient, designed to persuade them to support a particular political campaign, even if they were initially opposed to the ideas being presented. This level of personalization is particularly dangerous because it allows misinformation to slip past individuals’ critical defenses.
“By fabricating that point specifically to people who support it, the lie is more insidious and effective,” the research notes. This method of disinformation makes it harder for individuals to recognize that they are being manipulated, as the message is crafted specifically to appeal to their pre-existing beliefs and biases.
The implications of these findings are far-reaching. In an age where information flows freely across digital platforms, the ability to generate large-scale, personalized disinformation threatens not only individual voters but the entire democratic process.
The report warns that “with minor reconfiguration, a user could generate anything from benign campaign material to intentional misinformation and malicious threats.” This means that even small organizations or individuals with access to AI tools could launch large-scale misinformation campaigns capable of influencing elections, undermining trust in political institutions, and destabilizing societies.