The Canada Border Services Agency plans to introduce an app that uses facial recognition technology to track people who have received deportation orders.
The mobile reporting app would use biometric authentication to verify a person’s identity and record location data each time they use it to check in. The CBSA had proposed such an app as early as 2021, according to documents obtained through a freedom-of-information request.
A spokesperson confirmed that the app, called “ReportIn,” will be released this fall.
Experts have raised numerous concerns, questioning whether users can meaningfully consent and warning that the way the technology makes decisions may be kept secret.
The 2021 document states that each year, about 2,000 people served with deportation orders don’t leave the country, forcing the CBSA to “devote significant resources to investigating, locating and, in some cases, detaining these clients.”
The agency pitched the smartphone app as an “ideal solution.”
By receiving regular updates on an individual’s “place of residence, employment, family situation etc. through the app, the CBSA will have relevant information that can be used to contact and monitor clients for early indications of non-compliance,” it said.
“Furthermore, automation increases the likelihood that clients will feel involved and perceive the level of visibility the CBSA has into their case.”
The document further states that “if a client fails to appear for removal proceedings, the information collected through the app will provide a good investigative lead to locate the client.”
According to an algorithmic impact assessment for the project, which has not yet been posted on the federal government’s website, the voice-biometrics technology the CBSA had been using is being phased out due to “technical shortcomings,” and the ReportIn app has been developed to replace it.
According to the agency, an individual’s “facial biometric and location information is provided by sensors and GPS on their mobile device or smartphone, recorded through the ReportIn app, and then transmitted to CBSA’s back-end systems.”
Once a user submits a photo, a “face comparison algorithm” generates a similarity score with a reference photo.
If the system cannot confirm a facial match, it triggers an investigation by officers.
“An individual’s location will be collected each time they report and also if they fail to comply with the terms,” the document states, though individuals will not be “continuously tracked.”
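To make the mechanics concrete: the documents describe a per-check-in flow in which a facial image and a GPS fix are captured on the device, a similarity score against a reference photo is computed on the back end, and a failed match is handed to officers. Below is a minimal Python sketch of that flow using Amazon Rekognition’s CompareFaces API (Rekognition being the AWS technology identified below); the threshold value, function names, and response structure are illustrative assumptions, not the CBSA’s actual implementation.

```python
import boto3  # AWS SDK for Python; CompareFaces is a real Rekognition API

# Hypothetical threshold -- the CBSA's actual cut-off is not public.
MATCH_THRESHOLD = 99.0

rekognition = boto3.client("rekognition")

def process_check_in(selfie_bytes: bytes, reference_bytes: bytes,
                     latitude: float, longitude: float) -> dict:
    """Score a check-in selfie against the enrolment reference photo.

    Mirrors the reported flow: the app records a facial image and a GPS
    fix, the back end computes a similarity score, and a failed match is
    referred to officers rather than decided automatically.
    """
    response = rekognition.compare_faces(
        SourceImage={"Bytes": selfie_bytes},
        TargetImage={"Bytes": reference_bytes},
        SimilarityThreshold=MATCH_THRESHOLD,  # drops matches below this score
    )
    location = {"lat": latitude, "lon": longitude}  # recorded with each report
    matches = response.get("FaceMatches", [])
    if matches:
        return {"status": "verified",
                "similarity": matches[0]["Similarity"],
                "location": location}
    # No match above threshold: per the documents, this triggers an
    # investigation by officers, not an automated refusal.
    return {"status": "referred_for_review", "location": location}
```

Where that threshold is set, and how match rates behave across demographic groups at that setting, is precisely what the bias concerns discussed below turn on.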
The app uses technology from Amazon Web Services, a choice that caught the attention of Brenda McPhail, executive education director of McMaster University’s Public Policy in a Digital Society program.
She said that while many facial recognition companies voluntarily submit their algorithms to the National Institute of Standards and Technology for testing, Amazon never has.
An Amazon Web Services spokesperson said the company’s Amazon Rekognition technology has been “thoroughly tested, including by third parties such as Credo AI and iBeta Quality Assurance, which specialize in responsible AI.”
The spokesperson added that Amazon Rekognition is “a large-scale cloud-based system and therefore cannot be downloaded as described in the NIST participation guidance.”
“As such, our Rekognition Face Liveness has been submitted to iBeta Lab, an independent testing lab accredited by the Institute, for testing against industry standards,” the spokesperson said.
CBSA documents state the algorithms used are trade secrets. Given that the stakes could be life-changing, McPhail questioned whether it was appropriate to use a tool “protected by trade or proprietary secrets and that denies people the right to understand how decisions about them are actually being made.”
Kristen Thomason, associate professor and chair in law, robotics and society at the University of Windsor, said the reference to trade secrets signals that legal obstacles may block the release of information about the system.
She explained that there had been concerns for years that people affected by errors in the system could be legally barred from obtaining further information because of intellectual property protections.
CBSA spokesperson Maria Ladouceur said the agency developed the smartphone app “to allow foreign nationals and permanent residents who are subject to immigration enforcement conditions to report without having to come to a CBSA office in person.”
She said the agency had “worked in close consultation” with the Office of the Privacy Commissioner on the app. “Registration in ReportIn is voluntary, and users must consent to both the use of the app and the use of their likeness for identity verification.”
Petra Molnar, deputy director of the Refugee Law Lab at York University, said there is a power imbalance between the agency introducing the app and the people required to use it.
“Can people really, truly consent in situations where there is this huge power differential?” she asked.
Ladouceur said that people who do not consent can report in person as an alternative.
Thomason also warned that facial recognition technology is prone to error, and that the risk is higher for people of color and those with darker skin.
“It’s extremely worrying that there is essentially no discussion of the human rights impacts in the document,” Molnar said.
A CBSA spokesperson said Credo AI tested its software for bias against demographic groups and found it had a 99.9 percent facial match rate across six different demographic groups, adding that the app will be “continuously tested after launch to assess accuracy and performance.”
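The article does not describe Credo AI’s methodology, but a “99.9 percent facial match rate across six different demographic groups” implies disaggregated evaluation: computing the match rate separately within each group rather than as a single pooled figure. A hedged sketch of that kind of tally follows; the data layout is an assumption, and the six group labels used in the actual testing are not public.

```python
from collections import defaultdict

def match_rate_by_group(results):
    """Per-group true-match rate.

    results: iterable of (group, matched) pairs, e.g. ("group_3", True).
    The actual group definitions in the CBSA/Credo AI testing are not
    public; any labels here would be placeholders.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, matched in results:
        totals[group] += 1
        hits[group] += int(matched)
    # A single pooled average can mask a weak group; reporting each
    # group separately is the point of disaggregated bias testing.
    return {group: hits[group] / totals[group] for group in totals}
```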
While the CBSA says the final decision will be made by a human and a panel of reviewers will oversee all submissions, experts cautioned that people tend to trust decisions made by technology.
“There’s a fairly well-established psychological tendency for people to defer to the expertise of computer systems,” Thomason said, adding that computer systems are perceived to be less biased and more accurate.