Urgent action needed on facial recognition, ICLMG tells Privacy and Ethics committee

On March 24, 2022, ICLMG’s national coordinator Tim McSorley appeared before the House of Commons Standing Committee on Access to Information, Privacy and Ethics for the committee’s study on the use of facial recognition technology.


Below is the text of our opening statement.

To find out more and to call on the federal government to take action to protect us from facial recognition surveillance, click here.


Thank you for inviting me to speak today on behalf of the International Civil Liberties Monitoring Group, a coalition of 45 Canadian civil society organizations dedicated to protecting civil liberties in Canada and internationally in the context of Canada’s anti-terrorism and national security activities.

Given our mandate, our particular interest in facial recognition technology is its use by law enforcement and intelligence agencies, particularly at the federal level.

We have documented the rapid and ongoing increase of state surveillance in Canada and internationally over the past two decades. These surveillance activities pose significant risks to the rights of people in Canada and around the world, and have already violated them. Facial recognition technology is of particular concern, given the serious privacy risks it poses and its combination of biometric and algorithmic surveillance.

Our coalition has identified three areas of particular concern.

First, studies have shown that some of the most widely used facial recognition technologies are based on algorithms that are biased and inaccurate. This is especially true for facial images of people of colour, who already face heightened levels of surveillance and profiling by law enforcement and intelligence agencies in Canada.

This is particularly concerning in regard to national security and anti-terrorism, where there is already a documented history of systemic racism and racial profiling. Inaccurate or biased technology only serves to reinforce and worsen this problem, running the risk of individuals being falsely associated with terrorism and national security risks. As many of you are aware, the stigma of even an allegation in this area can have deep and life-long impacts on the person accused.

Second, facial recognition allows for mass, indiscriminate and warrantless surveillance.

Even if the significant problems of bias and accuracy were somehow resolved, facial recognition surveillance systems would continue to subject members of the public to intrusive and indiscriminate surveillance. This is true whether it is used to monitor travellers at an airport, individuals walking through a public square, or activists at a protest.

While law enforcement must obtain judicial authorization to surveil individuals either online or in public places, there are gaps in current legislation as to whether this requirement extends to surveillance or de-anonymization via facial recognition technology. These gaps can subject all passersby to unjustified mass surveillance in the hope of identifying a single person of interest, either in real time or after the fact.

Third, there is a lack of regulation of the technology, and a lack of transparency and accountability from law enforcement and intelligence agencies.

The current legal framework governing facial recognition technology is wholly inadequate. The patchwork of privacy rules at the provincial, territorial and federal levels does not ensure that law enforcement uses facial recognition technology in a way that respects fundamental rights. Further, a lack of transparency and accountability means that such technology is being adopted without public knowledge, let alone public debate or independent oversight.

Clear examples of this have been revealed over the past two years.

First, the lack of regulation allowed the RCMP to use Clearview AI facial recognition technology for months without the public’s knowledge, and to then lie about it before being forced to admit the truth. Moreover, we now know that the RCMP has used one form of facial recognition or another for the past 20 years, without any public acknowledgement, debate, or clear oversight. The Privacy Commissioner of Canada found that the RCMP’s use of Clearview AI was unlawful, but the RCMP has rejected that finding, arguing they cannot be held responsible for the lawfulness of services provided by third parties. This essentially allows them to continue contracting with other services that violate Canadian law.

Less well known is that the RCMP also contracted the use of a US-based private "terrorist facial recognition system" known as "IntelCentre." This company claims to offer access to facial recognition tools and a database of more than 700,000 images of people associated with "terrorism." According to the company, these images are acquired from various sources online, including social media, just like Clearview AI. Also like Clearview AI, it is unclear how these images are verified, or what legal justification exists for collecting them. Worse than Clearview AI, though, is that this system carries the added stigma of alleged links to terrorism. Compounding the problem, we have no idea how the RCMP used this tool, let alone the force's legal basis for using it. It is clear, though, that it could have devastating impacts on someone who is falsely accused.

In another example, the CBSA ran a pilot project using real-time facial recognition surveillance at Toronto’s Pearson Airport for six months in 2016, with little to no public warning beyond a vague notice posted to their website. In all, nearly 3 million travellers had their faces scanned against a database of 5,000 images.

Finally, CSIS has refused to confirm whether it uses facial recognition technology in its work, stating it has no obligation to do so. While intelligence agencies may not be able to go into the specifics of ongoing operations, there is no reason why they should not engage in a full, public debate about the use of such controversial technology.

Given all these concerns, we would make three main recommendations:

  1. That the federal government immediately ban the use of facial recognition surveillance, and undertake consultations on the use and regulation of facial recognition technology in general;
  2. That, based on these consultations, the government reform both private and public sector privacy laws to address gaps regarding facial recognition and other biometric surveillance;
  3. That the Privacy Commissioner be granted greater enforcement powers with regard to both public and private sector violations of Canadian privacy laws.

Since you’re here…

… we have a small favour to ask. Here at ICLMG, we are working very hard to protect and promote human rights and civil liberties in the context of the so-called "war on terror" in Canada. We do not receive any financial support from any federal, provincial or municipal governments or political parties. You can become our patron on Patreon and get rewards in exchange for your support. You can give as little as $1/month (that's only $12/year!) and you can unsubscribe at any time. Any donations will go a long way to support our work. You can also make a one-time donation or donate monthly via PayPal by clicking on the button below. On the fence about giving? Check out our Achievements and Gains since we were created in 2002. Thank you for your generosity!