The principal of Donich Law draws a parallel to how one’s digital movements online can be tracked through the individual’s IP address with the help of internet service providers. Widespread use of facial recognition technology, he fears, may enable the physical tracking of people, much as is already done through the vast networks of CCTV (closed-circuit television) cameras used for video surveillance.
“As data becomes cheaper and cheaper to store and as technology continues to get better, the question then is: How long does the data get stored for? Do we just have an endless supply of recorded human history, which can later be used to identify someone at a scene or be used as evidence to convict somebody?” he says.
Where that emerging technology intersects with individual rights is a topic of interest for the federal privacy commissioner.
Last summer, media reports said Cadillac Fairview had embedded facial recognition technology into its digital directories in some Canadian malls as a means of monitoring mall traffic and getting a handle on the gender and approximate age of its shoppers.
In August, federal Privacy Commissioner Daniel Therrien announced his office would launch an investigation into the use of the technology, along with the Information and Privacy Commissioner of Alberta. The commercial landlord then announced it was suspending the practice pending the outcome of the investigations.
“It’s one of these situations we see a lot where technology gets out in front of both the regulatory response and people’s understanding” of what could be a potential privacy problem, says Toronto privacy lawyer Mark Hayes, founding partner at Hayes eLaw LLP. “If you’re doing any analysis by using some kind of biometric information, which facial recognition is, you’re going to be potentially offside on the privacy side.”
Facial recognition systems involve the use of a camera to capture an image, an algorithm to create a faceprint, a database of stored images and an algorithm to compare the captured image to the database of images to classify the face into categories such as gender or age. It can also be used for verification or identification purposes, says Hayes.
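The pipeline Hayes describes — capture, faceprint, database comparison — can be sketched in miniature. This is an illustrative toy only: real systems derive faceprints from trained neural-network embeddings, and every name and threshold below is an assumption, not a description of any deployed product.

```python
import numpy as np

def faceprint(image: np.ndarray) -> np.ndarray:
    """Reduce an image to a fixed-length, unit-normalized feature vector
    (a 'faceprint'). Stand-in for a learned embedding model."""
    v = image.astype(float).flatten()
    v = v[:8] if v.size >= 8 else np.pad(v, (0, 8 - v.size))
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def identify(probe: np.ndarray, database: dict, threshold: float = 0.9):
    """Compare a captured faceprint against enrolled prints; return the
    best match above the similarity threshold, or None if nobody matches."""
    best_id, best_score = None, threshold
    for person_id, enrolled in database.items():
        # Cosine similarity: both vectors are unit-length, so a dot
        # product suffices.
        score = float(np.dot(probe, enrolled))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Enrollment stores faceprints, not raw images.
db = {"alice": faceprint(np.array([1, 2, 3, 4, 5, 6, 7, 8]))}

# Identification of a new capture against the enrolled database.
match = identify(faceprint(np.array([1, 2, 3, 4, 5, 6, 7, 8])), db)
```

The same comparison step serves both purposes Hayes mentions: verification (is this the claimed person?) checks one enrolled print, while identification searches the whole database — which is what raises the tracking concerns discussed below.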
Part of the problem, he says, is that the technology continues to be developed and perfected, meaning current iterations aren’t particularly reliable.
But when facial recognition software does get to the point that it can accurately identify and track people, Hayes says, he is concerned that it may reduce individual privacy in public and semi-public places, such as malls.
“[In] Europe, they’re far ahead of us in terms of recognizing that these kinds of systems could be potentially harmful. They have a lot of regulation around them,” says Hayes.
He says identifying issues involves understanding how facial recognition technology is used. He says people are often not aware of the existence of surveillance cameras or CCTVs, for instance, but they’ve become almost ubiquitous, often providing a video trail of an individual’s movements.
Add facial recognition software to those cameras, he says, and that can produce not just a trail of information about a person’s actions but also of that person’s identity.
“I think we have to take very seriously that there is a lot of this technology out there and we don’t necessarily know when it’s being used unless somebody inside the organization reports it [is being used] or it’s being used against someone [and there’s a complaint],” he says. He says “some additional regulation or guidelines coming from the provincial or federal governments [regarding] when facial recognition software can or can’t be used” would be helpful.
The public, he adds, needs to learn more about what the technology is and what it can do, as do companies and organizations considering deploying it, so that they understand the circumstances under which its use is appropriate.
The Personal Information Protection and Electronic Documents Act and the Privacy Act lay out how the technology might be used, and its limitations, in the private and public sectors respectively, says Chantal Bernier, who served as the interim privacy commissioner of Canada from 2013 to 2014 and is now counsel with Dentons Canada LLP in Ottawa.
But, she adds, federal guidelines specific to facial recognition technology could offer some clear direction.
Police and other organizations run by the state must demonstrate that using facial recognition software is necessary, such as at the border or in crowd management, she says. Its use must then be proportionate, with no more invasion of privacy than is necessary for public safety. The organization must also prove it has no alternative but to employ the technology, she adds.
Bernier points to Passport Canada’s facial recognition project, which relies upon the technology to ensure that applicants do indeed correspond with the identification they put forward, as a means of protecting against fraud.
She says a privacy impact assessment determined there was a basis for legitimate use of facial recognition technology but that the organization would have to ensure it minimized the risk that certain individuals could be disproportionately affected. It also recommended that the database be encrypted.
But she points to the 2011 Vancouver riots after the Canucks lost the Stanley Cup as an example of where the use of the technology was deemed inappropriate. British Columbia’s insurance corporation offered its facial recognition database to help the RCMP identify rioters. But the province’s privacy commissioner found that use inconsistent with the original purpose of the technology, which was to secure B.C. driver’s licences as a fraud-protection tool, not to help the RCMP track down rioters after the Stanley Cup final.
“The pivotal notion is the one of necessity and minimization of use to serve that demonstrated necessity,” says Bernier.
In the private sector, facial recognition technology has been used where consent has been provided and for security. In casinos, for instance, she says, it’s been used for individuals with gambling problems who have opted to be excluded from the facility.
It is considered a legitimate use because it is based on consent, and privacy is protected through biometric encryption, which can be unlocked only if the individual is present.
“This is a construct where the individuals consent to facial recognition being used to help, in this case, curb gambling addiction,” she says.
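The "unlocked only if the individual is present" property Bernier describes can be illustrated with a toy sketch: a record is encrypted under a key derived from the person's own biometric, so the stored data is opaque without a matching live sample. This is an assumption-laden simplification — real biometric encryption schemes use fuzzy extractors to tolerate sensor noise, whereas this sketch assumes the quantized faceprint reproduces exactly.

```python
import hashlib

def key_from_biometric(features):
    """Derive a key from quantized biometric features.
    Toy quantization: keep only the sign bit of each feature."""
    bits = bytes(1 if f > 0 else 0 for f in features)
    return hashlib.sha256(bits).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Encrypt/decrypt by XOR with a hash-derived key stream (toy cipher;
    the same call both locks and unlocks)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Enrollment: the operator stores only the ciphertext, never the key.
record = b"self-excluded patron"
stored = xor_stream(record, key_from_biometric([0.3, -1.2, 0.8, -0.1]))

# The record unlocks only when the same individual's features are presented;
# a different person's features derive a different key and yield garbage.
unlocked = xor_stream(stored, key_from_biometric([0.3, -1.2, 0.8, -0.1]))
```

The design point this illustrates is the one Bernier makes: the database holds no usable personal information on its own, because decryption is gated on the enrolled individual's presence.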
It can also be used as a security tool at entry and exit points at highly secure premises.
But, she says, the technology is loaded with privacy risks because of its high capacity to collect personal information, and it carries dangers of accidental disclosure, inaccuracy, misuse and unlawful disclosure. Those privacy risks need to be mitigated for its use to be lawful, she says.