Delayed airline passengers, disgruntled phone customers and even hungry people craving pizza are increasingly finding their calls to private companies answered by artificial intelligence.
Soon, Canadians who need to contact the federal government may also be speaking with an official aided by a non-human assistant.
Ottawa is working on a strategy to use more AI in federal government work, and while it’s too early to say exactly what that will look like, chatbots are one possibility.
The government’s chief data officer, Stephen Burt, said private sector call centres are using generative AI chatbots to search internal data and help employees find faster and better answers when customers call.
“I can imagine a lot of similar applications in the Canadian government context in terms of the services we provide to clients, from EI and superannuation to immigration,” he said in an interview.
Civil servants could also use AI to sort through vast amounts of government data, he said. At the Treasury Board of Canada alone, staff are responsible for rules covering government finances, employment and the technology civil servants use.
“There are a lot of words in a lot of pages of documents, and it’s difficult even for people inside government to figure out what applies in a given situation,” Burt said.
The federal government is developing its AI strategy over the next few months, with the aim of publishing it in March of next year. The plan would encourage departments and agencies to experiment openly so they can “see what works and what doesn’t.”
“We can’t do it all at once, and we don’t yet know what all the best-use cases are,” Burt said.
It’s too early to talk about red lines when it comes to what won’t be allowed, but “there will definitely be areas where we need to be more careful,” he said.
Generative AI applications produce text and images based on patterns learned from vast amounts of training data.
Legislation needs review: experts
Federal agencies are already using AI. Joanna Redden, an associate professor at Western University in London, Ont., has compiled a database of hundreds of examples of its use across the Canadian government.
The uses range widely, from predicting the outcomes of tax cases and sorting temporary visa applications to tracking invasive plants and detecting whales in aerial imagery.
In the European Union, AI legislation bans certain uses, including the indiscriminate scraping of images for facial recognition, emotion recognition systems in workplaces and schools, social scoring and some forms of predictive policing, she said.
Speaking at an event on the strategy in May, Treasury Board President Anita Anand said generative AI would “typically not be used” for sensitive matters, such as information available only to ministers.
University of Ottawa law professor Teresa Scassa said privacy laws covering government activities need to be updated.
Federal privacy law “is not adapted to the information society, much less the context of AI,” she said.
Questions also arise about the use of generative AI and the risk that personal or sensitive information could end up in these systems.
“Suddenly, somebody might start using generative AI to respond to emails. How do you handle that? What information is going into the system, and who is going to check it?” she said.
Scassa also questioned whether there would be any redress if the government’s chatbot provided false information.
As Canada’s largest employer, the federal government should consider adopting artificial intelligence, said Fenwick McKelvey, an assistant professor of information and communications technology policy at Montreal’s Concordia University.
McKelvey suggested governments could use chatbots to “help users understand and use complex services” as well as help make government documents more accessible and readable.
One example he cited is helping people complete complex tax forms.
Redden had to compile her database of government AI use from news reports, parliamentary documents and freedom of information requests.
She argues that governments should have better visibility into their own use of AI and be transparent about it, but Ottawa seems unlikely to change its approach under its new AI strategy.