OHRC comments on IPC draft privacy guidance on facial recognition for police agencies

November 19, 2021

In accordance with the instructions governing the consultation process, the Ontario Human Rights Commission (OHRC) is providing responses to a number of questions posed by the Information and Privacy Commissioner of Ontario (IPC). The following documents also set out recent OHRC perspectives on principles and core considerations related to the use of artificial intelligence (AI), including facial recognition (FR) technology:

Submission on Ontario’s Trustworthy Artificial Intelligence (AI) Framework

Submission on TPSB Use of Artificial Intelligence Technologies Policy

These comments do not repeat points raised in those submissions, but the concerns the submissions address (the potential for AI to intensify existing racial disparities flowing from police practices) do inform the way the OHRC has read the draft guidance.

Crucially, this response is not an endorsement of FR by the OHRC. In line with Canadian experts,[1] the European Parliament[2] and other authoritative observers, we recognize there is a compelling case for placing a moratorium on FR until a broad range of concerns – legislative gaps, human rights, privacy and more – has been comprehensively addressed.

Will this guidance have the intended effect of helping to ensure police agencies’ use of FR is lawful and appropriately mitigates privacy risks? If you don’t believe it will, why?

The guidance has been thoughtfully formulated and is comprehensive, accounting for a great many eventualities associated with police use of FR. It also does a solid job of outlining the risks to certain Code-protected groups that can follow from particular uses of FR.

Nonetheless, the OHRC does have concerns about paragraph 71, on regular reviews of program effectiveness, which in part states:

Reviews should assess the extent to which program activities are achieving the goals of the initiative, using demonstrable criteria (for example, number of arrests or convictions resulting from the program, etc.).

Given the IPC’s recognition that false positive rates are inextricably linked to threshold settings (paragraphs 75 and 76), it seems clear that program effectiveness could be (problematically) demonstrated in contexts where police set low thresholds, generate substantial numbers of false positives and make erroneous arrests on that basis. We therefore caution against including arrests (and/or convictions) as a measure of FR effectiveness.
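The mechanics of this concern can be made concrete with a minimal simulation sketch (Python). The score distributions, match rate and threshold values below are entirely hypothetical – they are not drawn from the draft guidance or from any real FR system – but they illustrate how lowering the match threshold multiplies false positives while inflating the total number of alerts that a naive arrest-based metric would count as success:

```python
import random

random.seed(0)  # reproducible illustration

def simulate(threshold, n_probes=10_000, match_rate=0.01):
    """Simulate FR searches against a watchlist (hypothetical numbers).

    match_rate: fraction of probe images that truly match an enrolled face.
    Genuine matches tend to score high and non-matches low; any non-match
    scoring at or above the threshold is a false positive that could lead
    to a wrongful stop or arrest.
    """
    true_pos = false_pos = 0
    for _ in range(n_probes):
        genuine = random.random() < match_rate
        # Hypothetical similarity-score distributions, not vendor-calibrated.
        score = random.gauss(0.85, 0.07) if genuine else random.gauss(0.55, 0.12)
        if score >= threshold:
            if genuine:
                true_pos += 1
            else:
                false_pos += 1
    return true_pos, false_pos

for threshold in (0.90, 0.75, 0.60):
    tp, fp = simulate(threshold)
    # A naive "effectiveness" metric counts every alert (or resulting
    # arrest) as program success, regardless of how many alerts are false.
    print(f"threshold={threshold:.2f}  true matches={tp:4d}  "
          f"false positives={fp:4d}  total alerts={tp + fp:4d}")
```

Under these assumptions, dropping the threshold from 0.90 to 0.60 raises true matches only as far as the small pool of genuine matches allows, while false positives grow by roughly two orders of magnitude – yet a metric based on total alerts or arrests would record the low-threshold configuration as the most “effective.”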

Is police use of FR appropriately regulated in Canada under existing law? If not, what are your concerns about the way police use of FR is currently regulated, and what changes should be made to the current legal framework?

As its Submission on Ontario’s Trustworthy Artificial Intelligence (AI) Framework makes clear, the OHRC holds the view that FR is not appropriately regulated under existing law. Our perspective is in close accord with that of various subject-matter experts. For example, Yuan Stevens of Ryerson University and Sonja Solomun of McGill University have observed:

In Canada, it is currently possible to collect and share facial images for identification purposes without consent, and without adequate legal procedures, including the right to challenge decisions made with this technology. This poses a tremendous risk of mistaken identification or arrests made through the use of facial recognition systems. Despite these harms, Canadian privacy law – meant to guard against this very kind of mass surveillance – currently lacks any real enforcement power and adequate safeguards to protect facial information. Unlike protections for the collection of other kinds of biometric information such as DNA or fingerprint data, Canada also lacks clear guidelines and consent mechanisms for how our facial information, which is highly sensitive and vulnerable to misuse, can or should be used.

The Privacy Act, Canada's privacy legislation for the federal government, does not explicitly include facial and biometric information as subsets of personal information worthy of special protection. The Act consequently fails to provide adequate safeguards against the significant risks associated with collecting, using and disclosing some of our most sensitive personal information: our faces.[3]

Further, if we consider the U.K. Court of Appeal’s decision in the case of Edward Bridges, Canada lacks bodies, offices and codes that are functionally equivalent to (1) the Oversight and Advisory Board, created by the Secretary of State, which coordinates “consideration of the use of facial images and AFR technology by law enforcement authorities” (paragraph 5); (2) the Surveillance Camera Commissioner, whose “responsibilities include, in particular, regulating the use of surveillance cameras and their use in conjunction with AFR technology” (paragraph 6); or (3) the Surveillance Camera Code of Practice (paragraph 6).

While the U.K. regulatory framework is undoubtedly characterized by key limitations (for example, paragraphs 91 and 94 on allowances for overly broad police discretion), it appears to be more robust than what currently exists in Canada and can therefore serve, at least provisionally, as a decent model for our national setting.

A suitable legal framework should also be informed by the European Parliament’s call “to create a clear and fair regime for assigning legal responsibility and liability for the potential adverse consequences produced by these advanced digital technologies” (pgs. 9–10, para. 13).

Should police use of FR, including the collection of faceprints, be limited to a defined set of purposes (such as serious crimes or humanitarian reasons, e.g. missing persons)? Should they be able to use or retain faceprints beyond those of individuals who have been arrested or convicted?

The European Parliament draws appropriate attention to the dangers of “function creep” (pg. 9, para. 11) as it applies to FR. As the Bridges case shows, the ambit of actual and potential FR use is quite broad:

The watchlists used in the deployments in issue in this case have included (1) persons wanted on warrants, (2) individuals who are unlawfully at large (having escaped from lawful custody), (3) persons suspected of having committed crimes, (4) persons who may be in need of protection (e.g. missing persons), (5) individuals whose presence at a particular event causes particular concern, (6) persons simply of possible interest to SWP for intelligence purposes and (7) vulnerable persons.

The European Parliament’s call for a moratorium on FR is qualified by an exception for strict use “for the purpose of identification of victims of crime.” This appears to be a defensible exception, and consideration should also be given to an exception for attempts to solve major crimes – when other means are verifiably inadequate – as evidenced by the methods used to capture the individuals who killed Nnamdi Ogba in Toronto in March 2018.

Are there any other important policy issues that should be addressed in relation to police use of FR?

The draft guidance refers to “pre-enrolled faces,” which are often photos in mugshot databases. The scope of these databases is striking, particularly in major police organizations. For example, a May 2019 Toronto Star article revealed that the Toronto police have an “internal database of approximately 1.5 million mugshots.” More broadly, across Canada during 2019/2020, there were judicial decisions on 977,227 charges (Criminal Code, excluding traffic). Of these, a relatively small percentage – 36% – resulted in findings of guilt; the vast majority of the remainder were stayed or withdrawn. And although police organizations across Canada (e.g. in York Region, Toronto and Vancouver) make allowances for the destruction of mugshots, the destruction criteria, processes and fees likely create barriers for marginalized individuals who seek to have their photos removed from police databases.

Considering this empirical landscape in conjunction with racially disproportionate patterns of arrests and charges[4] – substantially directed toward members of Indigenous and Black communities – raises the risk that FR use will reproduce and exacerbate existing disparities.

The European Parliament calls for “transparency on…source data” (pg. 11, para. 17), such as the police databases from which watchlists are formed. At present, however, there seems to be no transparency on the racial composition of police mugshot databases; nor do we have a sense of how many people in those databases have actually been convicted of a crime. At minimum, meeting the transparency standard the European Parliament articulated would entail robust responses to these outstanding questions.

[1] Yuan Stevens and Sonja Solomun, “Saving Face: Canadian law lags behind technology,” Ottawa Citizen, March 2, 2021.

[2] The European Parliament “Calls...for a moratorium on the deployment of facial recognition systems for law enforcement purposes that have the function of identification, unless strictly used for the purpose of identification of victims of crime, until the technical standards can be considered fully fundamental rights compliant, results derived are non-biased and non-discriminatory, the legal framework provides strict safeguards against misuse and strict democratic control and oversight, and there is empirical evidence of the necessity and proportionality for the deployment of such technologies; notes that where the above criteria are not fulfilled, the systems should not be used or deployed…” (pg. 14, para. 27).

[3] Yuan Stevens and Sonja Solomun, “Saving Face: Canadian law lags behind technology,” Ottawa Citizen, March 2, 2021.

[4] For example, the OHRC report A Disparate Impact features multiple modes of data analysis, pertaining to Toronto, which support “the argument that, due to racial bias, Black people are more likely than White people to face low-quality charges with a low probability of conviction” (pg. 73).