Representatives of Big Tech companies say the Liberal government’s bill to begin regulating some artificial intelligence systems is too vague.
Amazon and Microsoft executives told MPs at a meeting of the House of Commons industry committee on Wednesday that Bill C-27 does not sufficiently differentiate between high-risk and low-risk AI systems.
The companies said complying with the law as drafted would be costly.
Nicole Foster, Amazon’s director of global artificial intelligence and Canadian public policy, said using the same approach for all applications is “highly impractical and can unintentionally stifle innovation.”
She said any use of AI by law enforcement would be treated as high-impact in all cases, even when officers use autocorrect to fill out traffic tickets.
“Laws and regulations must clearly distinguish between high-risk applications and those that pose little or no risk. This is a core principle we need to get right,” Foster said.
She cautioned against imposing regulatory burdens on low-risk AI applications that have the potential to deliver much-needed productivity gains for Canadian companies, large and small.
Microsoft offered its own example of how the law does not appear to differentiate based on the level of risk posed by particular AI systems.
An AI system used to approve mortgages and handle sensitive financial information would be treated the same as one that uses public data to optimize package delivery routes.
Industry Minister François-Philippe Champagne has offered some details about the amendments the government plans to introduce to bring the bill up to date.
But despite those additional details, companies said the bill’s definitions are still too vague.
Amanda Craig, senior director of public policy in Microsoft’s responsible AI division, said failing to distinguish between the two spreads Canadian companies’ time, money, talent and resources thin, which could mean they are not focused enough on the highest risks.
Bill C-27 was introduced in 2022 to target what are called “high-impact” AI systems.
However, it was only after this bill was first introduced that generative AI systems such as ChatGPT, which can create text, images, and videos, became widely available to the public.
The Liberals have since said they will amend the bill to add new rules for such systems, including a requirement that companies using them take measures to ensure content created by AI can be identified as AI-generated.
Earlier this week, AI’s “godfather” Yoshua Bengio told the same committee that Ottawa should immediately implement the law, even if it’s not perfect.
Bengio, scientific director of Mila, the Quebec AI institute, said machines as smart as humans, or even “superhuman” intelligence, could emerge within just a few years.
He said advanced systems could be used for cyber-attacks and the law needed to pre-empt that risk.
AI already poses risks, Bengio said. Deepfake videos, generated to make it look like a real person is doing something they’re not, could be used to spread disinformation.