The researchers noted that the suspected bot campaign surrounding a recent Pierre Poilievre event shows how easily accessible generative artificial intelligence (AI) tools are to anyone looking to influence political messaging online.
In July, social media platform “X” was inundated with posts following the Conservative leader’s tour of northern Ontario.
The posts purported to be from people who attended Poilievre's event in Kirkland Lake, Ontario, but were actually made by accounts based in Russia, France and elsewhere, many of which contained similar messages.
Researchers from Concordia University and the University of Ottawa recently ran several tests using five generative AI tools to see if they could create political messages similar to those seen in the July bot campaign.
The researchers asked the freely available AI platforms to generate 50 different statements describing rallies held by Canadian political leaders: Poilievre, Prime Minister Justin Trudeau, New Democratic Party Leader Jagmeet Singh, Bloc Québécois Leader Yves-François Blanchet and Green Party Leader Elizabeth May.
All of the platforms except one produced the requested political messages when prompted.
“It was easy and it was quick. It just shows that there are flaws in our regulatory system right now,” said Elizabeth Dubois, a professor at the University of Ottawa and one of the project’s leaders.
“Platforms sometimes say, ‘We don’t want this to happen in elections. We don’t want our tools to be used for politics. That’s against our terms,’ but the reality is that most tools allow it anyway.”
Dubois said findings like these are concerning because the use of AI tools to generate political messages could undermine the fairness of political campaigns and elections.
The Conservative Party denied any involvement in the Kirkland Lake bot campaign, and a separate report from the Toronto Metropolitan University’s Social Media Lab concluded the campaign was likely the work of amateurs.
Dubois said it's likely some form of automation was used in the Kirkland Lake incident, but the specific tool that generated the messages is unclear. The report noted that some of the language ChatGPT used in responding to the test prompts, such as "electric," "buzzing" and "palpable," was similar to language seen in the July incident.
AI messages go unnoticed by detection tools
Dubois and her colleagues ran the generated messages through three AI text detection tools, and all three were unable to determine whether the messages were AI-generated, but Dubois said she wasn't surprised by the results.
“These types of tools aren’t particularly helpful,” she said.
When it comes to X specifically, Dubois said part of the problem is that the platform's 280-character limit restricts how much text the detection tools have to work with.
"That just doesn't give [the detection tools] enough data to determine whether something is the result of a generative AI tool. There isn't enough text in a single tweet or post on X to make that determination," she said.
Dubois said some technical work would be required to automatically post AI-generated messages to social media sites.
“To do this on a large scale, you would need to write a script to auto-post, which in itself is not that difficult, but it does require some technical knowledge,” she said.
Of the five AI platforms used, only Google’s Gemini refused to generate the requested message, and Microsoft Copilot initially rejected a request to generate a message about Trudeau, but then accepted after the prompt was slightly altered.
Dubois said governments should consider introducing regulations on the use of generative AI in politics, but that AI companies also needed to practice more self-governance.
“Frankly, government regulation takes time. AI will always be one step ahead, at least in the near future,” she said.
“These AI companies need to not just say, ‘You should not use our tools to craft political messages to influence elections,’ but actually build methods into their systems to stop this from happening.”
The NDP has asked the Commissioner of Canada Elections to investigate the July bot campaign, but the commissioner's office has not yet said whether it will look into the matter.
Dubois said research into how bots are being used in political contexts should have been done years ago.
“This is a conversation that’s long overdue,” she said.