AI-Generated Propaganda Is Just as Persuasive as the Real Thing, Worrying Study Finds

Propaganda from popular AI tools “could blend into online information environments on par with…existing foreign covert propaganda campaigns.”

Researchers have found that AI-generated propaganda is just as effective as propaganda written by humans, and with a bit of tweaking can be even more persuasive. 

The worrying finding comes as nation-states are testing AI’s usefulness in hacking campaigns and influence operations. Last week, OpenAI and Microsoft jointly announced that the governments of China, Russia, Iran, and North Korea were using their AI tools for “malicious cyber activities.” This included translation, coding, research, and generating text for phishing attacks. The issue is especially pressing with the upcoming U.S. presidential election just months away. 

The study, published this week in the peer-reviewed journal PNAS Nexus by researchers from Georgetown University and Stanford, used OpenAI’s GPT-3 model (which is less capable than the latest model, GPT-4) to generate propaganda news articles. The AI was prompted with real propaganda articles originating from Russia and Iran, which had been identified by journalists and researchers. The articles covered six main propaganda theses, such as the conspiracy theory that the U.S. created fake reports claiming Syria’s government had used chemical weapons. Another was that Saudi Arabia had committed funding to the U.S.-Mexico border wall. The researchers did not exclude any of GPT-3’s propaganda outputs except those that were too short or too long.

Next, the researchers polled 8,221 U.S. adults about whether they agreed with the thesis statements of the original, human-authored propaganda pieces. Among respondents who had not read an article, only 24.4 percent agreed. Among those who read the human-written articles, agreement jumped to 47.4 percent.

When the study participants read AI-generated propaganda on the same topics, they were roughly equally persuaded: “43.5 percent of respondents who read a GPT-3-generated article agreed or strongly agreed with the thesis statement, compared to 24.4 percent in the control (a 19.1 percentage point increase),” the authors wrote.

“This suggests that propagandists could use GPT-3 to generate persuasive articles with minimal human effort, by using existing articles on unrelated topics to guide GPT-3 about the style and length of new articles,” they continued.

The authors note that it might not be realistic to expect a nation-state adversary to simply publish unsorted GPT outputs as propaganda, since human operators could weed out the least convincing articles. With even a little curation (the researchers excluded two articles that did not advance the propaganda thesis), agreement rose to 45.6 percent, which was not a “statistically significant” difference compared to the human-authored pieces.

When the researchers took even more steps to improve the outputs, such as by editing the original pieces for grammatical errors and prompting the AI with the condensed thesis statement, they found that the AI-generated propaganda outperformed the originals. Notably, the authors did not find significant differences in agreement between study participants when they broke down the responses by variables such as political leanings and news consumption.

“Our findings suggest that GPT-3-generated content could blend into online information environments on par with content we sourced from existing foreign covert propaganda campaigns,” the authors wrote.