CSE says it is ‘very unlikely’ that AI-enabled adversaries will ‘fundamentally undermine’ the 2025 federal election
OTTAWA — Coordinated social media influence campaigns by China rife with disinformation, personalized spam emails using stolen personal data, spoof media websites run by Russian propagandists and even fake porn of politicians.
Those are just some of the ways hostile countries are harnessing artificial intelligence to influence or undermine future Canadian elections, according to the Communications Security Establishment’s (CSE) latest Cyber Threats to Canada’s Democratic Process report.
The latest biennial report by Canada’s cybersecurity agency is a stark and sobering warning of how the recent boom in affordable, high-quality AI-powered software poses serious and growing threats to Canadians.
Those tools range from generative AI, such as large language model chatbots like ChatGPT and audio- and video-generation programs, to data scraping and processing software.
“Over the past two years, these tools have become more powerful and easier to use. They now play a pervasive role in political disinformation, as well as the harassment of political figures. They can also be used to enhance hostile actors’ capacity to carry out cyber espionage and malicious cyber activities,” reads the report.
The report says Canada’s main foreign state adversaries — China, Russia and Iran — use AI to orchestrate large online disinformation campaigns, process tremendous quantities of stolen or purchased data for “targeted influence and espionage campaigns” and generate fake images and videos (called “deepfakes”) of politicians to discredit them.
That includes creating deepfake pornography of politicians and public figures, almost always women.
CSE says the use of generative AI by hackers, propagandists and other malevolent state and non-state actors has exploded since its last report on threats to democratic processes in 2023.
The previous report identified just one case worldwide of generative AI targeting an election between 2021 and 2023. Fast-forward two years, and the agency says there were 102 reported cases of generative AI being used to interfere in or influence 41 elections around the world.
Despite that exponential growth, CSE assesses it is “very unlikely” that AI-enabled threat activity will “fundamentally undermine” this year’s federal election. But that doesn’t mean state actors like China, Russia and Iran won’t try to interfere before and during the writ period.
“When targeting Canadian elections, threat actors are most likely to use generative AI as a means of creating and spreading disinformation, designed to sow division among Canadians and push narratives conducive to the interests of foreign states,” reads the report.
“Canadian politicians and political parties are likely to be targeted by threat actors seeking to conduct hack-and-leak operations,” it continues.
The report says both commercial data brokers and Canadian political parties have tremendous amounts of “politically relevant data” about Canadian voters, and hostile actors want it.
Whether through purchase, open-source data mining or theft, China and Russia are conducting “massive data collection campaigns” that include political campaign data and individualized information such as people’s shopping habits, health records, and browsing and social media activity, CSE says.
“Foreign actors are almost certainly attempting to acquire this data, which they can then weaponize against Canadian democratic processes,” reads the report.
“Cyber actors can combine purchased or stolen data with public data about Canadians to create targeted propaganda campaigns, built on predictive analytics and using AI-generated content.”
More concerning is that hostile states like China, Russia and Iran already have the capabilities to do so, CSE notes, though their interest in deploying such techniques against Canadians varies.
China is likely to use them to push favourable narratives and spread disinformation to voters, whereas Russia and Iran are currently much less focused on Canada.
Fake AI-generated images and videos are also proliferating as the tools to create them get better and cheaper.
One example of a weaponized deepfake targeted U.S. vice-presidential candidate Tim Walz. Russian propaganda group Storm-1516 used AI to create a fake video of a purported former student at a high school where Walz once taught, claiming the politician had abused him.
The fake video was released one month before the 2024 U.S. election and gained “significant attention” when it was covered by influential right-wing American commentators.
“To create the deepfake, Storm-1516 had likely researched students at Walz’s former school, used AI to create a fake video based on their likeness, and then deployed it against Walz,” reads the report.
National Post
[email protected]