The federal government needs to act quickly to regulate artificial intelligence, a top AI pioneer said, warning that the technology’s current trajectory poses major societal risks.
Yoshua Bengio, known as a “godfather” of AI, told MPs on Monday that Ottawa should implement the legislation immediately, even if it isn’t perfect.
The scientific director of Mila, the Quebec AI institute, said a “superhuman” intelligence, one as smart as or smarter than humans, could be developed within the next 20 years, or possibly within just a few.
“We’re not ready yet,” Bengio said.
One short-term risk of AI is that deepfake videos will be used to spread disinformation, he said. Such videos use AI to make it appear as if celebrities are saying things they never actually said, or doing things that never happened.
The technology can also be used to interact with people through text or dialogue, and “can trick social media users into changing their minds about political issues,” Bengio said.
“There is great concern about AI being used in a political direction that goes against the principles of our democracy.”
There are concerns that more sophisticated systems could be used for cyberattacks in a year or two.
AI systems are getting better and better at programming.
“When these systems become powerful enough to defeat current cyber defenses and industrial digital infrastructure, we are in trouble,” Bengio said.
“Especially if these systems fall into the wrong hands.”
Bill proposes to regulate AI systems
The House of Commons industry committee, where Bengio testified Monday, is considering the Liberal government’s bill to amend privacy laws and begin regulating some artificial intelligence systems.
While the bill as drafted would give the government time to develop regulations, Bengio said some provisions should take effect immediately.
“With the current approach, it will take about two years before enforcement is possible,” he said.
One of the first rules he would like to see introduced is a registry requiring developers of systems above a certain capability level to register them with the government.
Bengio said that in that case, the responsibility and cost of demonstrating safety would fall on the big technology companies that develop these systems, rather than on taxpayers.
Bill C-27 was first drafted in 2022 to target what are called “high-impact” AI systems.
Bengio said the government should change the law’s definition of “high-impact” to include technologies that pose national security or societal threats.
This could include any AI system that bad actors could use to design dangerous cyber attacks or weapons, or systems that find ways to self-replicate despite programming instructions to the contrary.
Generative AI systems like ChatGPT, which can create text, images, and videos, have become widely available to the public since this bill was first introduced.
The government said it plans to amend the law to reflect this.
Liberals say they aim to require companies to take steps to ensure that the content they create can be reliably identified as being generated by AI.
“Covering general-purpose AI systems is very important because they are also the systems that can be the most dangerous if misused,” Bengio said.
Catherine Regis, a professor at the University of Montreal, also told the committee on Monday that governments need to act urgently, citing recent “rapid developments in AI that everyone is familiar with.”
Speaking in French, she noted that AI regulation is a global effort, and that if Canada wants to have a say, it needs to figure out what to do at the national level.
“Decisions are being made globally and will affect all countries,” she said.
Establishing a clear and firm vision at the Canadian level is “one of the prerequisites for building credible structures and playing an influential role in global governance,” she said.