Swarms of coordinated AI bots can flood online communities and threaten democracy, scientists warn
UBC Computer Science Professor Kevin Leyton-Brown and co-authors discuss the emerging threat of malicious AI swarms and propose defenses against them
In a new article in Science, researchers warn of a next generation of AI-driven personas that can coordinate and adapt in real time to infiltrate online groups and influence public opinion. The authors of the paper, including UBC Computer Science Professor Kevin Leyton-Brown, describe how the combination of large language models and coordinated AI agents can allow malicious AI swarms to imitate human social dynamics. By creating the illusion of public consensus, these bots can influence people’s opinions on key political issues, potentially threatening democracy.
According to the authors, malicious AI swarms will be harder to detect than the previous generation of “copy-paste” bots: they can generate consistent narratives, adapt in real time to feedback, and vary their tone and content. These agents can also map social network structures, track trending topics and infiltrate communities to gain followers and influence online discussions.
“The danger isn’t only false content — it’s synthetic consensus: the illusion that ‘everyone agrees,’ engineered at scale,” says Dr. Daniel Thilo Schroeder, Research Scientist at SINTEF and first author of the paper. “Instead of repeating a script, swarms iterate: they probe audiences with many variants, measure responses, and amplify the winners.”
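Mechanically, the probe-measure-amplify loop Schroeder describes resembles a simple selection process. The sketch below is one hypothetical way such a loop could work, framed as an epsilon-greedy bandit over message variants; the variant names, engagement numbers and exploration rate are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a probe-measure-amplify loop, framed as an
# epsilon-greedy bandit over message variants. All names and numbers
# here are illustrative assumptions, not details from the Science paper.
import random

variants = ["variant A", "variant B", "variant C"]  # candidate messages
engagement = {v: 0.0 for v in variants}             # running mean response
counts = {v: 0 for v in variants}
EPSILON = 0.2                                       # exploration rate

def measure_response(variant: str) -> float:
    """Stand-in for observed audience engagement (likes, replies, shares)."""
    base = {"variant A": 0.3, "variant B": 0.6, "variant C": 0.4}
    return base[variant] + random.gauss(0, 0.1)

for step in range(1000):
    # Probe: occasionally try a random variant; otherwise pick the leader.
    if random.random() < EPSILON:
        chosen = random.choice(variants)
    else:
        chosen = max(variants, key=lambda v: engagement[v])
    # Measure: observe the response and update the running average.
    reward = measure_response(chosen)
    counts[chosen] += 1
    engagement[chosen] += (reward - engagement[chosen]) / counts[chosen]

# Amplify: the variant with the highest measured engagement wins out.
print(max(variants, key=lambda v: engagement[v]), engagement)
```

The point of the sketch is that no single message needs to be persuasive on its own; the loop simply keeps whatever the audience rewards.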
Past political campaigns have already seen AI-fabricated images and videos, known as deepfakes, spread to influence elections. Moreover, by flooding the internet with generated content, AI swarms can contaminate the training data of large language models, eroding the reliability of those models.
“We shouldn’t imagine that society will remain unchanged as these systems emerge,” says Dr. Leyton-Brown. “A likely result is decreased trust of unknown voices on social media. This might sound like a good thing, but it could empower celebrities and make it harder for grassroots messages to break through.”
The paper outlines several ways to safeguard against AI swarms, such as monitoring coordination patterns in real time, implementing stronger account verification and publishing incident reports. The authors also propose the use of agent-based simulations: computational models that simulate how autonomous agents interact and give insight into the dynamics that emerge (a minimal sketch follows below). Lastly, the authors outline policy changes that may help, such as reducing the monetization of inauthentic engagement and increasing accountability.
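To make the agent-based-simulation idea concrete, here is a minimal, hypothetical sketch: a population of agents with opinions on a 0-to-1 scale, a small coordinated swarm that always broadcasts the same position, and a bounded-confidence update rule. The network, parameters and update rule are assumptions chosen for illustration, not the authors' actual model.

```python
# Minimal agent-based simulation sketch of swarm influence on opinions.
# The model (bounded-confidence updates), swarm fraction and parameters
# are illustrative assumptions, not the setup used by the paper's authors.
import random

N_HUMANS = 95    # ordinary agents with opinions in [0, 1]
N_SWARM = 5      # coordinated agents that always broadcast opinion 1.0
NEIGHBORS = 8    # how many voices each human samples per step
STEPS = 200

opinions = [random.random() for _ in range(N_HUMANS)]

def visible_voices():
    """All messages a human might see this step: humans plus swarm posts."""
    return opinions + [1.0] * N_SWARM

for _ in range(STEPS):
    voices = visible_voices()  # snapshot of this step's messages
    for i in range(N_HUMANS):
        sample = random.sample(voices, NEIGHBORS)
        # Bounded confidence: only nearby views are persuasive.
        close = [v for v in sample if abs(v - opinions[i]) < 0.3]
        if close:
            # Drift toward the mean of the persuasive voices.
            opinions[i] += 0.1 * (sum(close) / len(close) - opinions[i])

print(f"mean human opinion after {STEPS} steps: {sum(opinions)/N_HUMANS:.2f}")
```

In typical runs, the population mean drifts toward the swarm's fixed position, a miniature version of the synthetic consensus effect the authors describe: a small coordinated minority shifting what the majority perceives as agreement.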
“We’re not predicting outcomes, but the capability curve is clear: coordinated AI systems lower the cost of influence and raise the stakes for democracy,” says Dr. Jonas R. Kunst, Professor of Communication at BI Norwegian Business School and last author of the paper.