Article body

Introduction

Facial recognition technology (FRT) is an artificial intelligence (AI)-based biometric technology that uses computer vision to analyze facial images and identify individuals by their unique facial features.[1] The technology relies on computer algorithms to generate a biometric template from a facial image. The biometric template encodes an individual’s unique facial characteristics as a set of data points, which can then be used to match identical or similar images in a database for identification purposes. The biometric template is often likened to a unique facial signature for each individual.[2]
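To make the template-and-match idea concrete, the minimal sketch below shows, in simplified Python, how a system of this general kind might turn a face image into a fixed-length vector and compare it against a gallery of stored templates. The encoder is a placeholder, and the function names, vector size, and threshold are illustrative assumptions, not a description of any particular commercial or government system.

```python
import numpy as np

def extract_template(face_image: np.ndarray) -> np.ndarray:
    """Placeholder for the trained neural-network encoder a real FRT system
    would use to map a face image to a fixed-length biometric template."""
    rng = np.random.default_rng(abs(hash(face_image.tobytes())) % (2**32))
    template = rng.standard_normal(128)          # e.g. a 128-dimensional vector
    return template / np.linalg.norm(template)   # unit length, for cosine scoring

def similarity(t1: np.ndarray, t2: np.ndarray) -> float:
    """Cosine similarity between two unit-normalized templates."""
    return float(np.dot(t1, t2))

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """One-to-many search: return the best-scoring gallery identity,
    or None if no score reaches the operating threshold."""
    scores = {name: similarity(probe, tmpl) for name, tmpl in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])
```

In a deployment of this general design, the operating threshold governs the trade-off between false matches and false non-matches, a point that becomes central in the discussion of error rates later in this paper.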

A significant rise in the deployment of AI-based FRT has occurred in recent years across the public and private sectors of Canadian society. Within the public sector, its application encompasses law enforcement in criminal and immigration contexts, among many others. In the private sector, it has been used for tasks such as exam proctoring in educational settings, fraud prevention in the retail industry, unlocking mobile devices, sorting and tagging of digital photos, and more. The widespread use of AI facial recognition in both the public and private sectors has generated concerns regarding its potential to perpetuate and reflect historical racial biases and injustices. The emergence of terms like “the new Jim Crow”[3] and “the new Jim Code”[4] draws a parallel between the racial inequalities of the post-US Civil War Jim Crow era and the racial biases present in modern AI technologies. These comparisons underscore the need for a critical examination of how AI technologies, including FRT, might replicate or exacerbate systemic racial inequities and injustices of the past.

This research paper seeks to examine critical issues arising from the adoption and use of FRT by the public sector, particularly within the framework of immigration enforcement in the Canadian immigration system. It delves into recent Federal Court of Canada litigation relating to the use of the technology in refugee revocation proceedings by agencies of the Canadian government.[5] By examining these cases, the paper explores the implications of FRT for the fairness and integrity of immigration processes, highlighting the broader ethical and legal issues associated with its use in administrative processes.

The paper begins with a concise overview of the Canadian immigration system and the administrative law principles applicable to its decision-making process. This is followed by an examination of the history of integrating AI technologies into the immigration process more broadly. Focusing specifically on AI-based FRT, the paper will then explore the issues of racial bias associated with its use and discuss why addressing these issues is crucial for ensuring fairness in the Canadian immigration process. This discussion will lead to a critical analysis of Federal Court litigation relating to the use of FRT in refugee status revocation, further spotlighting the evidence of racial bias in the technology’s deployment within the immigration system.

The paper will then proceed to develop the parallels between racial bias evident in contemporary AI-based FRT (the “new” Jim Crow) and racial bias of the past (the “old” Jim Crow). By focusing on the Canadian immigration context, the paper seeks to uncover the subtle, yet profound ways in which AI-based FRT, despite its purported neutrality and objectivity, can reinforce racial biases of the past. Through a comprehensive analysis of current practices, judicial decisions, and the technology’s deployment, this paper aims to contribute to the ongoing dialogue about technology and race. It challenges the assumption that technological advancements are inherently equitable, urging a re-evaluation of how these tools are designed, developed, and deployed, especially in sensitive areas such as refugee status revocation, where the stakes for fairness and equity are particularly high.

I. Canadian Immigration System—The Legal Framework

The primary pieces of legislation governing immigration in Canada are the Immigration and Refugee Protection Act (IRPA)[6] and the Immigration and Refugee Protection Regulations (IRPR).[7] Operational manuals and guidance documents also provide detailed policy and procedural direction, shaping the interpretation and application of the IRPA and IRPR.

The administration and enforcement of immigration regulation in Canada is overseen mainly by two federal departments/agencies: Immigration, Refugees and Citizenship Canada (IRCC) and the Canada Border Services Agency (CBSA).[8] While IRCC is responsible for processing the immigration and refugee applications that allow foreign nationals to enter or remain in Canada, the CBSA is responsible for admitting foreign nationals into Canada (at the port of entry) and enforcing their removal when their stay in Canada has ceased to be valid or they have become inadmissible.

Aside from IRCC and the CBSA, other administrative tribunals are also charged with administrative decision-making relating to immigration matters. The Immigration and Refugee Board of Canada (IRB) comprises four administrative tribunals: the Refugee Protection Division (RPD), the Refugee Appeal Division (RAD), the Immigration Division (ID), and the Immigration Appeal Division (IAD). Generally, immigration decisions made by IRCC and CBSA officers and by the appellate arms of the IRB are subject to judicial review by the Federal Court of Canada and, in some specific cases,[9] to further appeal to the Federal Court of Appeal and the Supreme Court of Canada.

Immigration decisions by IRCC and CBSA officers and the IRB tribunals fall within the context of administrative decision-making processes.[10] Hence, these decisions must adhere to the principles of administrative law, notably the principle of procedural fairness, which is fundamental to the Canadian legal framework and applicable across a variety of legal and administrative proceedings, including those in immigration.[11] In the administrative context, the principle requires that decisions made by administrative officers be based on evidence, be free from bias, and follow the principles of justice and equity. Such decisions should be made transparently and be logically connected to the evidence presented.[12] Procedural fairness also encompasses the right of individuals to be informed about the decisions made regarding their case, including being provided with reasons for decisions (especially negative decisions) and an explanation of how the decision-maker arrived at them. These reasons enable the person affected by the decision to understand its basis and, if necessary, to challenge it through appeals or judicial review.[13]

Procedural fairness is crucial for ensuring that the immigration process is just, equitable, and transparent. It ensures that individuals affected by immigration decisions have clear avenues to seek redress, thereby reinforcing the integrity of, and trust in, the system’s operations. The principle becomes all the more crucial in the Canadian immigration context because of the wide discretion accorded to immigration decision-makers.[14] Procedural fairness helps to curtail “arbitrary, unfair, or unaccountable decision-making in situations with significant consequences for people’s lives.”[15] The degree of procedural fairness owed to an individual increases or decreases with the impact the decision may have on that individual.[16] For example, the degree of procedural fairness owed to a temporary residence visa applicant will usually be lower than that owed to a refugee claimant. This difference exists because a failed refugee claim carries more serious consequences than a failed temporary residence visa application; in particular, a failed claim may expose the claimant to deportation, with significant consequences for their right to life, liberty, and personal security.[17]

Closely related to procedural fairness is the right to be heard. In the immigration context, Molnar and Gill have noted that this right requires that, when a decision-maker relies on extrinsic evidence in arriving at a decision, the individual affected by the decision be informed of such evidence and given the opportunity to respond.[18] This right is also implicated where an immigration officer relies on an AI algorithm in arriving at a decision.[19] Hence, individuals affected by such decisions ought to be made aware of the decision-maker’s reliance on the AI tool and be given the opportunity to challenge the decision made by, or with the help of, this technology.

II. Historical Perspective: How AI Has Been Integrated into Immigration Processes

In his work, Luberisse discusses how, prior to the development of sophisticated border control technologies, physical barriers like walls played a crucial role in deterring invasions, regulating trade, and managing migration flows.[20] He supports this assertion with notable historical examples, such as the Great Wall of China, built to safeguard Chinese states from nomadic invasions, and Hadrian’s Wall in Northern England, which marked a boundary of the Roman Empire.[21] These structures were more than mere defensive works; they also fulfilled exclusionary functions, keeping those deemed undesirable out of defined territorial spaces. These examples illustrate the multifaceted roles of physical barriers in the annals of history.

Over time, paper passports containing the facial image of the holder have evolved to become essential documents for countries to regulate immigration flows and verify the identities of travellers seeking to enter their territorial spaces.[22] Traditionally, this verification process involved border officers manually comparing the photo image on the passport document with the traveller’s face. This method, while straightforward, could be time-consuming and prone to human error, highlighting the need for more efficient and reliable verification techniques.[23]

With the development of pertinent technology over the late 20th and early 21st centuries, a significant shift emerged towards the use of more sophisticated systems in immigration processes. The development of AI and machine learning algorithms offered unprecedented capabilities for data analysis, pattern recognition, and automation. Governments and immigration authorities began to see that these technologies could not only transform traditional processes but also streamline them, enabling quicker and more accurate adjudication of immigration applications, border control procedures, and identity verification.

The advent of biometric technology, including fingerprint and facial recognition, marked a crucial point in the deployment of sophisticated technologies in immigration processes. Initially used for security and verification purposes, these technologies have become increasingly central to immigration controls, aiding in identifying and tracking individuals as they seek to cross national borders, and even when they enter spaces within a sovereign state.

In Canada and the United States, we are witnessing the increasing use of AI in border and immigration systems.[24] This trend represents a significant shift towards more efficient, secure, and intelligent management of the immigration system. In the United States, US Customs and Border Protection (CBP) has deployed AI-driven FRT across US airports and border crossings to enhance the screening of incoming and outgoing travellers.[25] This system, which is part of the Biometric Entry-Exit Program,[26] aims to verify identities quickly and accurately, reducing wait times and increasing security by identifying individuals who may pose a security risk or have overstayed their visas.

In Canada, IRCC employs advanced analytics and machine learning algorithms to sift through and triage large volumes of immigration applications.[27] This application of AI helps to identify patterns that may indicate fraudulent documents or applications, thereby enhancing the vetting process and prioritizing cases that require closer human examination. IRCC has also deployed the Chinook software to improve efficiency and processing times for temporary residence applications.[28] As we will see later, it appears that the department has also deployed AI-based FRT in the identity verification of refugee claimants in Canada.[29]

Similarly, the CBSA has deployed Primary Inspection Kiosks (or eGates) and NEXUS kiosks across major airports in Canada.[30] These kiosks use AI-based FRT to verify the identity of persons seeking to enter Canada and expedite their customs declaration process. The process involves face verification: a one-to-one photo comparison.[31] A traveller arriving at a kiosk has their photo taken and their ePassport scanned.[32] The photo taken at the kiosk is then used by the FRT system to generate a unique biometric template of the individual, which is subsequently matched against the photo embedded in the chip of the traveller’s ePassport or, in the case of a NEXUS traveller, against the digital photo archived in the CBSA’s systems.[33] This process confirms that the two images match. Implementing this technology offers an extra layer of verification using the traveller’s facial image, thereby enhancing travel security and confirming the traveller’s eligibility for entry into Canada. Luberisse noted that FRT systems “are revolutionizing border security propelling it into an era where identification is not just about documents but the very essence of human biology.”[34]
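At its core, the one-to-one comparison described above reduces to a single threshold decision. The following is a minimal illustrative sketch only; the function names and threshold value are assumptions and do not describe the CBSA’s actual implementation.

```python
import numpy as np

def verify(kiosk_template: np.ndarray, chip_template: np.ndarray,
           threshold: float = 0.6) -> bool:
    """One-to-one (1:1) face verification: declare a match if the similarity
    between the template from the live kiosk photo and the template derived
    from the ePassport chip photo meets the operating threshold."""
    score = float(np.dot(kiosk_template, chip_template) /
                  (np.linalg.norm(kiosk_template) * np.linalg.norm(chip_template)))
    return score >= threshold
```

In this framing, a false positive is a match declared between two different people, and a false negative is a genuine traveller wrongly rejected; both error types figure prominently in the studies discussed below.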

III. Racial Bias in AI Facial Recognition Technology

Andrejevic and Selwyn have pointed out that a recurring fault line in the historical development of FRT is its complete failure to engage with issues of race and racism.[35] That early historical trend set a negative precedent, leading to the modern incarnation of the technology, which is profoundly entangled with racial bias. According to Andrejevic and Selwyn, that trend “tended to lead white middle-aged researchers to seek out datasets populated with pictures of faces fitting the white, middle-aged profile of what they deemed to be ‘Mr Average’.”[36] Further, many research studies have consistently demonstrated that while FRT achieves high accuracy rates in recognizing faces with lighter skin tones, it exhibits high error rates in identifying faces with darker skin tones.[37] These widely divergent accuracy rates along racial lines bring the technology’s racial bias into clear focus. This disparity underscores a significant challenge in ensuring the technology’s fairness and accuracy across diverse racial demographics.

Buolamwini and Gebru’s landmark Gender Shades study exposed significant racial and gender biases within commercial facial analysis algorithms.[38] Their research made clear that the datasets used to train these systems predominantly feature White male individuals, leading to a skewed representation that affects the algorithms’ accuracy in identifying and classifying individuals by gender and skin colour.[39] The findings revealed a pronounced bias against darker-skinned females, who experienced identification error rates as high as 34.7%.[40] In contrast, lighter-skinned males had an error rate as low as 0.8%, indicating a 99.2% accuracy rate for this group.[41]

This study builds on earlier research by Klare et al., who conducted a large-scale analysis of facial recognition performance across three demographic classifications: race/ethnicity, gender, and age.[42] This analysis, which evaluated the results from three commercial facial recognition algorithms, consistently found lower accuracy rates among females and Black individuals aged 18 to 30 years.[43] Together, these studies underscore the critical need to address and rectify the biases inherent in facial recognition technologies, while shining a light on the disparities in accuracy that disproportionately affect certain demographic groups.

The U.S. National Institute of Standards and Technology (NIST) conducted a comprehensive study to evaluate the impact of race, gender, and age on the accuracy of facial recognition software.[44] This study, one of the most extensive of its kind, evaluated 189 facial recognition software systems from 99 developers, representing a significant portion of the industry. It employed two testing methods: one-to-one (1:1) photo matching (face verification) and one-to-many (1:n) photo matching (face identification). The findings revealed a higher incidence of false positives in face verification tests for West and East African faces compared to East European faces, and for East Asian faces compared to East European faces, specifically when algorithms were tested using higher-quality application photos. Additionally, the study noted that for U.S. domestic law enforcement images, American Indian faces exhibited higher false positive rates than West and East African and East Asian faces. It also highlighted that Chinese-developed algorithms demonstrated low false positive rates for East Asian faces.[45] In face identification tests, the study observed an increased rate of false positives specifically among Black females.[46]
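The disparities NIST reports are, in essence, error rates computed separately for each demographic group rather than for the test population as a whole. The sketch below illustrates that kind of disaggregated tally using entirely fabricated trial records; the group labels and figures are assumptions and are not drawn from the NIST data.

```python
from collections import defaultdict

# Each fabricated trial record: (demographic group, same_person?, declared_match?)
trials = [
    ("group_A", False, True),    # impostor pair accepted  -> false match
    ("group_A", True,  True),    # genuine pair accepted   -> correct
    ("group_B", False, False),   # impostor pair rejected  -> correct
    ("group_B", True,  False),   # genuine pair rejected   -> false non-match
    # ... a real evaluation would involve millions of such comparisons
]

stats = defaultdict(lambda: {"impostor": 0, "false_match": 0,
                             "genuine": 0, "false_non_match": 0})
for group, same_person, declared_match in trials:
    s = stats[group]
    if same_person:
        s["genuine"] += 1
        s["false_non_match"] += int(not declared_match)
    else:
        s["impostor"] += 1
        s["false_match"] += int(declared_match)

for group, s in stats.items():
    fmr = s["false_match"] / max(s["impostor"], 1)       # false match rate
    fnmr = s["false_non_match"] / max(s["genuine"], 1)   # false non-match rate
    print(f"{group}: FMR={fmr:.3f}, FNMR={fnmr:.3f}")
```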

Similarly, the UK-based National Physical Laboratory undertook independent testing of facial recognition software used by two major UK police forces.[47] This testing indicated that the software performed particularly poorly with respect to Black females.[48] This bias persisted despite the forces’ efforts to implement an Equality Impact Assessment process designed to prevent unlawful discrimination resulting from the technology’s use.[49] In R. (Bridges) v Chief Constable of South Wales Police, the Court of Appeal of England and Wales found that the force’s Equality Impact Assessment, and its overall approach, failed to sufficiently mitigate the risk of racial bias in the deployment of automated FRT.[50]

Thus, most available research suggests that facial recognition software exhibits higher error rates for people of colour, with the highest rates occurring among Black females. The consequences of false positives in face identification can be profound, especially in public sector applications of the technology. For instance, when an individual’s image is used to search a broader database within contexts such as immigration or criminal justice enforcement, the repercussions of inaccuracies could be critical, affecting lives and potentially leading to unjust outcomes. The potential for error underlines the urgent need to address these disparities to ensure fairness and accuracy in the application of FRT.

Aside from the racial bias evident in these studies, other studies have gone further, drawing attention to the technology’s high error rate more broadly. For example, in 2019, Manthorpe and Martin noted that 81% of persons flagged by the live FRT used by the London Metropolitan Police Service were falsely flagged as suspects, raising significant concern about police use of the technology.[51] Even where research studies report a high overall FRT accuracy rate, that figure may be misleading: once it is broken down along racial and gender lines, a different picture becomes apparent.[52] This kind of deeper analysis will inevitably reveal the racial and gender bias embedded in the technology. Therefore, even where the overall predictive accuracy of FRT tools appears high, users must remember that some racial groups are disproportionately impacted by its predictive inaccuracy. This issue is evident from highly publicized cases of false arrests arising from false positive matches by the software.
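The arithmetic behind this masking effect is simple. The figures below are invented solely to illustrate the mechanism and are not taken from any study cited in this paper: a test population dominated by one group can produce an impressive headline accuracy even when another group faces a far higher error rate.

```python
# Invented illustration: 90% of test subjects are from group_A, 10% from group_B.
population_share = {"group_A": 0.90, "group_B": 0.10}
error_rate       = {"group_A": 0.01, "group_B": 0.20}

# Population-weighted overall error: 0.9*0.01 + 0.1*0.20 = 0.029
overall_error = sum(population_share[g] * error_rate[g] for g in population_share)
print(f"Overall accuracy: {1 - overall_error:.1%}")    # 97.1% -- looks reliable
for g in population_share:
    print(f"{g} accuracy: {1 - error_rate[g]:.1%}")     # 99.0% vs 80.0%
```

Disaggregating the results by group, as in the last two lines of the sketch, is precisely what reveals the disparity that the single headline figure conceals.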

In the United States, there have been six documented instances of false arrests attributed to the use of FRT by police departments.[53] Remarkably, all of these cases involved individuals who are Black, and half of the incidents occurred in Detroit, a city known for Project Green Light, a program that extensively employs CCTV cameras and FRT for public surveillance. Given that Detroit’s population is approximately 77.8% Black,[54] these incidents raise significant concerns about the appropriateness of deploying a technology proven to have its highest error rates among this demographic group.[55] This pattern emphasizes the critical need to re-evaluate the use of FRT by law enforcement, particularly in areas with high concentrations of the populations most susceptible to its inaccuracies.

IV. Deportation 2.0: AI Facial Recognition Technology in the Canadian Immigration System

Canada has been at the forefront of integrating AI technologies into its immigration and border control systems. This AI technology adoption has often been covert, with the public only learning about the use of specific AI technologies in immigration processes either through litigation[56] or via access to information requests made by private citizens. Judicial reviews from the Federal Court have shed light on the Government of Canada’s use of FRT in immigration enforcement, illuminating numerous issues and concerns in this process. These concerns include racial bias, procedural fairness, and transparency, reflecting the complexities and challenges of integrating AI into sensitive governmental operations. This insight underscores the need for more transparency and scrutiny with respect to the deployment of AI technologies in public sector domains, particularly in areas as critical as immigration and border security.

The general tendency of AI tools to exhibit racial bias has been referred to as “algorithmic racism,” defined in a previous work as “systemic, race-based bias arising from the use of AI-powered tools in ... decision making resulting in unfair outcomes to individuals from a particular segment of the society distinguished by race.”[57]

Principles of administrative law require administrative decision-making processes to be free of bias, including racial bias. This principle assumes even greater importance when AI tools are integrated into such decision-making. As Calvin Lawrence has pointed out, if AI tools are designed without sufficiently addressing existing biases and inequities, the biases embedded within the algorithms can compromise the integrity of predictive decisions, leading to subtle forms of discrimination that may not be immediately apparent.[58]

Hence, where an AI tool that has been proven to exhibit racial bias is used in an administrative decision-making process, a pervasive risk exists that the decision arising from that process will be tainted by bias—unless of course the decision-maker can account for the bias.[59] Regarding the use of FRT in the Canadian immigration system, a review of Federal Court litigation related to its use suggests not only racial bias that decision-makers could not account for, but also a clear lack of transparency and procedural fairness, further evidencing systemic racism.

To understand the depth of these issues, it is instructive to examine specific Federal Court litigation, beginning with Barre v. Canada (Citizenship and Immigration).[60] This case, among others, highlights critical concerns about the use of such technology and its impact on fairness and equality in administrative decision-making, particularly in sensitive areas like immigration, where the stakes are high for the individuals involved. Barre was the first litigation to alert the Canadian public to the use of FRT in the context of immigration enforcement. The case raised allegations regarding its use in refugee status revocation by the Minister of Public Safety and Emergency Preparedness, represented by two government departments, IRCC and the CBSA.

The applicants were two Somali women who had previously made successful refugee claims in Canada. Subsequently, the Minister of Public Safety and Emergency Preparedness successfully brought an application for the revocation of their refugee status before the Refugee Protection Division (RPD). The minister alleged that the women had misrepresented their identity as Somali nationals when, in fact, they were Kenyan citizens. It appeared that IRCC had matched the facial photos of the women with those of two different individuals who were Kenyan nationals and who had previously entered Canada with Kenyan passports. While the RPD accepted evidence of the photo match, it refused the women’s request to compel the minister to disclose information about the technology used in the photo comparison.

At the judicial review of the RPD decision at the Federal Court, the applicants asserted that the minister had used the controversial Clearview AI-based FRT in the photo-matching process.[61] Thus, the use of an FRT tool in the administrative decisions that led to the refugee status revocation became a major issue in the litigation. This issue was critical for several reasons. First, FRT is known for its high error rate in identifying Black women, the racial and gender group to which these women belong, raising critical concerns about accuracy and bias in the impugned revocation decision. Second, given FRT’s high error rates and the potential for inherent bias, it is crucial to examine the measures the immigration officers took to address that bias in this revocation decision. Third, the applicants’ unsuccessful efforts to obtain disclosure from the immigration authorities about the use of the technology are cause for concern about procedural fairness in the decision-making process.

The judicial review of the revocation decision in Barre made evident a significant lack of transparency on the government’s part. The minister attempted, albeit unsuccessfully, to evade the issue of disclosing the technology used in photo matching by invoking Section 22(2) of the Privacy Act.[62] The minister argued that the provision “allows law enforcement agencies to protect the details of [their] investigation.”[63] Essentially, the minister argued that the technology employed for photo matching was an “investigative technique” and therefore exempt from disclosure. Beyond asserting the use of FRT in the photo matching, the applicants presented empirical evidence and research studies to the Federal Court demonstrating the technology’s high error rates in identifying darker-skinned females like themselves. In its decision, the Federal Court accepted that FRT was used in the photo matching and determined that the minister could not rely on Section 22(2) of the Privacy Act to avoid disclosing information about its application. Citing reports from the “Gender Shades” study, the court acknowledged the applicants’ characterization of FRT as an unreliable pseudoscience, one that “has consistently struggled to obtain accurate results, particularly with regard to Black women and other women of colour.”[64]

If we accept the Federal Court’s finding that FRT was used by immigration officials in the photo matching, this case raises some serious questions. First, why was the use of this technology in the decision-making process not disclosed to the applicants? Why did the minister oppose the disclosure of information relating to its usage at all stages of the proceedings? But even more critically—given the overwhelming evidence of racial and gender biases against darker-skinned females associated with FRT—why would a government department deploy such technology in an administrative decision-making process affecting individuals from racial and gender groups known to be adversely affected by FRT biases? One might be inclined to suggest that these known issues with FRT could explain the minister’s opposition to disclosure. One interpretation is that invoking the Privacy Act was an attempt by the minister to avoid scrutiny over numerous issues related to the use of the technology in government administrative decision-making. Unfortunately, due to the nature of judicial review litigation, these concerns were not, and could not have been, addressed by the Federal Court, as the matter was returned to the RPD for redetermination.

Given the issues raised in Barre, along with the court’s decision in that case and the well-documented research and reports highlighting the bias in FRT, it is reasonable to expect that Canadian immigration officials would rethink and revisit their use of the technology in refugee revocation proceedings involving Black people and people of colour. Sadly, however, Barre was the first but not the last such case. Shortly after the decision in Barre, other cases began to emerge from the Federal Court. One of those cases was Abdulle v. Canada (Citizenship and Immigration).[65] The facts in Abdulle were very similar to those in Barre. It also involved a Somali woman who had made a successful refugee claim in Canada and whose status was sought to be revoked because her face had been matched to that of another person, of Kenyan nationality, in the immigration database.[66] The outcome in Abdulle was different, though, based more on a technicality than on substantive issues.[67]

In contrast to Barre, where the applicants at least sought (albeit unsuccessfully) disclosure of the technology behind the photo comparison, the applicant in Abdulle did not seek disclosure at the RPD. During the Federal Court’s judicial review of the RPD’s revocation decision, the applicant posited that the minister must have used Clearview’s AI-based FRT to compare her face against millions of others in the database. The omission to seek disclosure at the RPD was ultimately fatal to the case, as the court held that the applicant’s claim about the alleged use of FRT by the immigration authorities was speculative in the absence of any evidence. That notwithstanding, the Federal Court clearly acknowledged the weaknesses of FRT, stating that “the weaknesses of facial recognition software are common knowledge.”[68] That “common knowledge” would have helped the applicant’s case had she sought disclosure of evidence to substantiate her claim about the use of the technology.

Although Abdulle failed on this technicality, the case further exposed the lack of transparency that characterizes the questionable deployment of racially biased FRT in refugee status revocation proceedings involving Black individuals, especially Black women. As in Barre, the immigration authorities in Abdulle were not forthright about the use of FRT in the photo matching. The minister denied using Clearview’s FRT and instead asserted that “traditional investigation techniques” had been used.[69] This is problematic and deliberately confusing. First, the minister’s denial relates to the use of a specific brand of FRT, Clearview, as opposed to a denial of the use of FRT generally. Second, the phrase “traditional investigation techniques” appears to have been deliberately coined to avoid disclosing the particular technology used, thereby evading the scrutiny that would arise from the use of a clearly racially biased tool. In response, the Federal Court noted the ambiguity of the coded phrase “traditional investigation techniques,” stating that “[w]hatever those techniques were, no inference can be drawn that they included facial recognition software in the absence of supporting evidence.”[70] Unfortunately, the supporting evidence necessary to draw that inference had been deliberately withheld by the government.

Prior to Abdulle, there was the case of AB v. Canada (Citizenship and Immigration), which involved the use of facial recognition evidence in refugee revocation.[71] This case was problematic in many respects. In addition to the lack of transparency that has become characteristic of the immigration authorities’ use of FRT, AB also foregrounded a privacy issue arising from the transfer, between different levels of government, of personal information collected via FRT. Notably, this transfer was conducted without the knowledge or consent of the affected individual.

The applicant in AB was a Black woman from Central Africa who had made a successful refugee claim in Canada. Many years after her successful claim, she visited an Ontario Ministry of Transportation (MTO) registry office to have her photo taken as part of her driver’s licence application. Unbeknownst to her, an MTO agent used FRT to compare her photo against other photos in the ministry’s database, matching her face to that of a different person. The MTO, a provincial government ministry, covertly shared this information with IRCC,[72] which then successfully brought a refugee revocation application at the RPD. During the RPD proceedings, the applicant sought to have the MTO official testify about the ministry’s use of FRT in the photo matching. IRCC successfully opposed the move.

The consistent efforts by Canadian immigration authorities to oppose the disclosure of information related to the use of FRT in immigration proceedings are very troubling, especially when this deployment involves individuals from racial and gender groups who are particularly adversely impacted by the technology’s bias. AI technology is essentially a “black box”; as such, it is only common sense that in an administrative decision-making process, which should be characterized by transparency, the use of such opaque technology should be subject to necessary scrutiny rather than shrouded in secrecy. The principle of procedural fairness demands that individuals affected by administrative decisions made with the assistance of AI tools, such as FRT, be informed about the technology’s role in key decisions that have lasting real-life consequences for them. Such disclosure is necessary to enable them to exercise their right to challenge those decisions.

AI-based FRT is far from neutral and free of bias. In fact, when it comes to accuracy rates and bias, FRT clearly ranks as the worst of all biometric technologies.[73] Its role in reinforcing systemic and historical racism within society continues to be extensively researched and documented, and the need for further research is increasingly imperative. This study aims to augment that expanding body of research. In the absence of rigorous oversight, FRT risks perpetuating the very forms of systemic racism that society has endeavoured to overcome. This trajectory becomes more apparent when we examine certain characteristics that FRT shares with the systemic racism of the past.

V. Facial Recognition Technology as the New Jim Crow

Jim Crow is a pejorative term derived from a stereotypical depiction of African Americans popularized in 19th-century American minstrel shows. Jim Crow laws were a series of state and local regulations that enforced racial segregation primarily, but not exclusively, in southern and border states of the United States from the late 19th century until the mid-20th century.[74] These laws and regulations deprived African Americans of many rights and excluded them from certain spaces.[75] Jim Crow laws were rooted in the broader theme of systemic racial discrimination. They were a form of institutionalized racial discrimination that sought to maintain White supremacy and control over Black populations. The well-documented racial bias in FRT in many ways mirrors the ugly Jim Crow laws of the past. In the context of modern technology, racial bias in FRT represents a continuation of the systemic racial issues that characterized the Jim Crow era, albeit in a different form.

A. Spatial and Temporal Exclusion

One of the most evident manifestations of Jim Crow laws was the systematic exclusion of Black individuals and people of colour from specific public spaces, such as schools, transportation systems, restrooms, and restaurants.[76] Even in spaces where outright exclusion did not apply (such as in public buses and cinemas), these racial groups were often relegated to the most inferior segments within those spaces.[77] However, the ramifications of Jim Crow laws extend far beyond their immediate spatial restrictions, to encompass the temporal: they endured through time, well after they were officially repealed. These enduring and significant impacts are found in ongoing systemic inequalities, particularly within the criminal justice space, where Black people and people of colour are disproportionately over-represented. The socio-economic barriers and structures that were established during the Jim Crow era continue to hinder the full participation of these groups in societal progress, illustrating how the legacy of Jim Crow laws transcends both space and time.[78]

Similar to the exclusionary practices of the Jim Crow era, FRT has the potential to act as a modern instrument of exclusion.[79] This concern is especially acute in scenarios where access to certain benefits hinges on the accurate facial identification of individuals. The technology’s demonstrated accuracy rate of over 99% in identifying White male faces suggests that individuals from this demographic are more likely to access such benefits. Conversely, individuals from racial groups that the technology struggles to accurately recognize are at a higher risk of being excluded from them.

To illustrate, in Canada, both domestic and international law provide for the grant of refugee status to individuals fleeing persecution in other countries. The grant of this critical status depends on identity verification of the claimant. However, if such verification relies on a technology notorious for its high error rates in recognizing Black individuals and people of colour, we face a grave issue. Inaccuracies in the technology’s identity verification could deprive some of these individuals of this recognition, effectively excluding them from the protections available within the Canadian space and mirroring the exclusion and inequality perpetuated by Jim Crow laws. Moreover, the repercussions of this technological exclusion are long-lasting and severe. Incorrect identification that results in non-recognition could lead to deportation from Canada, exposing individuals to risks to their life, liberty, and personal security[80] in places far from Canada, underscoring the enduring and profound impact of such exclusions.

B. Perpetuation of Discrimination

A critical parallel between FRT and the Jim Crow laws resides in their capacity to perpetuate discrimination, albeit through different mechanisms. The Jim Crow laws were explicitly crafted and implemented as systemic instruments for enforcing racial segregation and discrimination. FRT, while sophisticated and modern, serves as an inadvertent but potent tool for reinforcing racial bias and discrimination. This technology, through its algorithmic biases and flawed training data, subtly embeds discrimination and racism into its operations, affecting individuals based on their race, gender, and other identities. Cathy O’Neil rightly noted that racism in technology “is powered by haphazard data gathering and spurious correlations, reinforced by institutional inequities, and polluted by confirmation bias.”[81]

While the Jim Crow laws were a manifest expression of state-sanctioned discrimination aimed at maintaining racial inequality, the biases inherent in FRT often stem from unintentional consequences related to technological design, development, and deployment. These biases are not the result of deliberate policy but rather emerge from a lack of diversity in training data, algorithmic bias, and the oversight of developers and engineers. The inadvertent nature of this discrimination, however, does not diminish the fact that both Jim Crow laws and biased facial recognition practices ultimately lead to the same end result—perpetuation of discrimination and systemic marginalization of certain racial groups.

C. Legal and Social Implications

Jim Crow laws, when they were enacted, became an integral part of the legal system, serving as exclusionary tools for enforcing discrimination and segregation. Their integration into the fabric of society established a normative social order that carried both legal and social implications. Similarly, while FRT and its biases have not been explicitly codified into a legal framework in Canada, the technology is swiftly gaining a semblance of legal legitimacy through its often covert integration into government operations and public sectors, particularly in law enforcement.[82] This tacit endorsement is highly problematic, given the absence of a regulatory framework or adequate oversight to mitigate its racial biases. Indeed, it sets a kind of precedent, implying that the technology is part of the normative legal and operational framework, despite its capacity to produce discriminatory outcomes such as wrongful arrests and the revocation of refugee status for individuals from the racial and gender groups against whom it is most prone to error.[83]

On the societal front, Jim Crow laws were normalized through social norms and attitudes that endorsed racial discrimination as part of the status quo, notwithstanding its inherent flaws. Similarly, FRT is slowly being accepted socially regardless of these same integral flaws.[84] This acceptance is partly due to the widespread but erroneous belief in technology’s neutrality and objectivity. Selinger and Rhee’s concept of normalization clearly demonstrates this phenomenon. They used the term “favourably disposed normalization” to depict a state in which surveillance becomes so commonplace that individuals not only accept it, but also rationalize it as beneficial.[85]

Sarah Hamid strongly opposed the social normalization of FRT, adopting instead an abolitionist stance.[86] She argued that FRT is inherently oppressive and that using the technology, even for benevolent purposes, does not alter its nature as a tool of surveillance and control. Hamid went on to suggest that even individuals who use FRT for such benevolent purposes as unlocking their phones inadvertently contribute to the development and enhancement of this carceral technology, reinforcing its oppressive capabilities. Although Hamid’s perspective might seem extreme, suffice it to say that, in a society largely unaware of the racial biases embedded in the technology, the purported convenience, efficiency, and public safety benefits of FRT can overshadow its inherent flaws, especially in a North American context marked by criminal profiling of individuals from racialized groups and heightened fears of immigration.[87] Thus, while Jim Crow laws expressly legalized racism in the past, FRT is now normalizing it in contemporary society, often without society realizing it.

For many who perceive FRT as unbiased and objective, cases such as Barre, AB, Abdulle, and others may seem commendable, since immigration authorities used the technology to detect what may appear to be instances of immigration fraud.[88] However, this view overlooks the significant risk of inaccuracy inherent in the technology and the fact that individuals from certain racial groups are disproportionately affected by its predictive errors. As Calvin D. Lawrence noted, “[w]hen [AI] tech goes wrong, it often goes terribly for people of color.”[89]

Jim Crow laws were intentionally crafted to undermine the achievements Black people in America attained during Reconstruction, the period following the American Civil War in the 19th century. These accomplishments ignited a civil rights movement in North America that played a crucial role in dismantling Jim Crow’s racism.[90] Today, contemporary AI technologies, such as FRT, are subtly and unintentionally reincarnating the discriminatory practices of the past. These technologies risk undoing the progress made by the Civil Rights Movement, working in a manner as insidious as the Jim Crow laws themselves. We therefore face an urgent need for a new civil rights movement, one focused on technology, to safeguard the gains society has made. This is a clarion call to action, urging us to recognize and combat AI manifestations of systemic racism before they erode the foundations of equality and justice in our society.

Conclusion

We stand at a pivotal moment in the interplay between technology and race. The parallels drawn between the racial biases embedded in FRT and the systemic racism of the Jim Crow era highlight not just a technological issue but a profound and novel racial justice crisis. As has been seen through various examples and judicial litigation, the deployment of FRT in immigration processes risks perpetuating discriminatory practices that society has long struggled to overcome.

The cases of Barre, AB, Abdulle, and others underscore the need for transparency, accountability, and procedural fairness in the use of FRT by the Canadian immigration and border control authorities. The refusal to disclose the technological underpinnings of decision-making processes not only undermines trust in these institutions but also veils the potential for inherent biases within these systems. While this paper does not advocate for the complete abolition of FRT as suggested by Hamid, there remains a compelling challenge. The challenge lies in not only improving the accuracy of FRT across racial lines but also ensuring its application aligns with the principles of transparency, justice, and equality that form the bedrock of Canadian society. This approach could entail a moratorium on the use of this tool in vital immigration processes, like refugee status revocation, until these principles are enshrined in policy and practice.[91]

This research analysis illustrates the urgent need for a regulatory and ethical framework that addresses the complexities of using AI in sensitive societal domains. Such a framework must prioritize the protection of individual rights, particularly individuals from marginalized communities who are most at risk of being adversely impacted by biases in AI technologies. It calls for a concerted effort among technologists, policymakers, civil society, and affected communities to engage in a dialogue aimed at reimagining the role of AI technologies in society. This dialogue must be rooted in an understanding of historical injustices and a commitment to preventing the reemergence of Jim Crow in new digital forms.[92]

Furthermore, the discussion around FRT and systemic racism extends beyond the boundaries of immigration and touches on broader issues of surveillance, privacy, and social control. The normalization of surveillance technologies under the guise of security and efficiency poses significant questions about the kind of society we want to build and the values we wish to uphold. As Sarah Hamid’s abolitionist stance suggests, the uncritical adoption of technologies like FRT risks entrenching carceral logics into the fabric of daily life, reinforcing rather than dismantling structures of oppression.[93]

The research concludes with a call for a technological civil rights movement. Such a movement would advocate for the ethical development and deployment of AI technologies, ensuring they serve to enhance human rights and equality rather than diminish them. It would also push for the right of individuals to challenge the decisions made by or with the assistance of AI technologies, thus upholding the principles of procedural fairness and transparency.

As we move forward, it is imperative that we critically examine the technologies we adopt and their impact on society. The lessons from the past must guide our path forward, ensuring that technological advancements contribute to a more just and equitable world. This pathway requires vigilance, advocacy, and a willingness to challenge the status quo, ensuring that the digital future we build is inclusive, equitable, and reflective of our highest aspirations as a society.[94]