Introduction

The expanding use of autonomous decision-making tools by governments and law enforcement authorities, in employment, and in commercial transactions raises concerns that have given rise to policy debates and legislative reform worldwide.[1] The increased algorithmic personalization of e-commerce transactions, and its compliance with anti-discrimination law, falls within those concerns.[2] Algorithmic price personalization (APP) is a form of differential pricing whereby suppliers set prices based on consumers’ personal information, with the objective of getting as close as possible to their maximum willingness to pay.[3] This article delves into APP’s compliance with anti-discrimination law (i.e., the body of law comprising Canadian federal and provincial human rights codes).[4] The questions posed by APP’s compliance with anti-discrimination law tie in with pressing issues concerning the deployment of artificial intelligence (AI) systems and the need for proper regulation.[5]

In an anti-discrimination law context, beyond the harsh consequences of denying credit, extending it at prohibitive cost, or charging higher insurance rates, online purchases of goods or services involving potential discrimination on prohibited grounds (e.g., race, age, gender, sexual orientation, or family status) receive comparatively little attention. Discrimination arising from day-to-day purchases may at first glance seem trivial and undeserving of a human rights complaint or further academic inquiry. The consequences of paying a few extra cents or dollars on a product or service, even as a result of a prohibited ground of discrimination, pale in comparison with being denied a job or housing on such grounds, or with having one’s disability go unaccommodated in an educational environment. However, with the intensification of algorithmically driven decision-making in all spheres of e-commerce, the compounded effects of prohibited discriminatory practices, if left unaddressed, defeat the raison d’être of anti-discrimination law, especially when differential treatment leads to the further marginalization of a protected group. This article examines APP’s compliance with anti-discrimination law across a broad range of e-commerce transactions.

In addition to questions of anti-discrimination law, the legality of APP has given rise to numerous studies, commentaries, and reports from the perspectives of privacy and personal data protection, competition, contract, and consumer law, which are beyond the scope of this article.[6] In these studies and reports, APP’s compliance with anti-discrimination law is often mentioned as one of the main legal concerns, without a detailed analysis of how APP may contravene such law.[7] Similarly, with some exceptions, relatively little academic literature scrutinizes the legality of APP under anti-discrimination law.[8]

The main contribution of this article is to fill this gap by investigating the instances in which the commercial practice of APP, with its specificities, contravenes Canadian anti-discrimination law. As much attention is turned to regulating AI systems to minimize the risk of harm, including the harm caused by discriminatory biased outputs,[9] understanding what may or may not violate anti-discrimination law is critical. The additional contribution of this article is to bridge traditional anti-discrimination law with emerging AI governance regulation, using the gaps identified in anti-discrimination law to show how AI governance regulation could enhance anti-discrimination law and improve compliance.[10]

Part I of this article defines APP and its various applications in e-commerce. Part II examines the treatment of supplier pricing practices in anti-discrimination law. This analysis includes how APP may lead to prima facie discrimination, how human rights codes address potentially countervailing social and economic norms in commerce relative to, for example, age or gender, and how algorithmically generated discrimination may be difficult to detect and prove. It also scrutinizes how the bona fide requirement applies to APP, by which a prima facie discriminatory practice or standard is absolved from a human rights code violation, with reference to the specific case of insurance contracts. Part III lays out how the analysis of APP and anti-discrimination law presented here may inform emerging models of AI governance regulation. In turn, it queries how such new forms of AI governance regulation may complement and improve compliance with anti-discrimination law in the future. The article concludes with a reminder of the broader issues raised by APP and other forms of AI systems.

I. Algorithmic Price Personalization (APP)

APP refers to the commercial practice by which firms set prices according to a consumer’s personal characteristics, targeting as closely as possible their maximum willingness to pay (or reservation price).[11] Often referred to as “perfect price discrimination,” it contrasts with versioning (offering different prices for different versions of a good or service)[12] and group pricing (charging different prices to different groups of consumers based on a personal characteristic they share, e.g., age, gender, or student status).[13] Differential pricing on certain prohibited grounds of discrimination, such as seniors’ discounts based on age, or age- and gender-based insurance rates, is an established exception in anti-discrimination law that has been justified by countervailing societal goals or accepted industry norms.[14]

APP should not be confused with dynamic pricing, where prices vary based on supply and demand.[15] Dynamic pricing is a common practice that prevails in the airline and hospitality industries. Given the opacity of pricing techniques and the limited ability to distinguish between the two, the line between APP and dynamic pricing may be blurry at times.[16] APP also differs from price steering (tailoring the order in which offers for goods or services are listed) and targeted advertising (the selection of advertising displayed to the consumer). In those commercial practices, firms differentiate between buyers by using their personal characteristics, but such differentiation does not influence the price per se.[17]
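To make these distinctions concrete, the following minimal sketch (with hypothetical function names and illustrative figures, not drawn from any cited study) contrasts the pricing practices described above; only the last function sets the price as a function of the individual consumer, as APP does.

```python
# Stylized sketch of the pricing practices discussed above (illustrative only).

def uniform_price(product):
    return product["list_price"]                        # one price for everyone

def group_price(product, consumer):
    # Group pricing: a shared characteristic (e.g., student status) sets the price.
    discount = 0.10 if consumer.get("is_student") else 0.0
    return product["list_price"] * (1 - discount)

def dynamic_price(product, current_demand, capacity):
    # Dynamic pricing: the price tracks supply and demand, not the buyer's identity.
    return product["list_price"] * (1 + 0.5 * current_demand / capacity)

def personalized_price(product, consumer, estimate_wtp):
    # APP: the price targets this consumer's estimated maximum willingness to
    # pay, inferred from their personal information (profile, history, device).
    return max(product["unit_cost"], estimate_wtp(consumer, product))
```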

While earlier economic studies were guarded as to the extent to which APP occurs, notably due to a lack of substantiated empirical research and to the preconditions that traditional economic theory sets for APP (or first-degree price discrimination) to take place, there is growing evidence that firms are resorting to APP in online transactions.[18] APP is also likely to occur in payment-less brick-and-mortar retail stores.[19] Studies of price personalization directly relevant to anti-discrimination law report differential treatment in consumer credit ratings and insurance premiums based on race.[20] In other instances, differential treatment may result from pricing that relies on criteria other than prohibited grounds (e.g., postal code areas) while potentially contravening anti-discrimination law.[21] Sometimes, price personalization leads to the outright exclusion of a person from the market, such as by the refusal to provide credit or personal insurance.

The traditional economic theory preconditions for APP to occur are (i) the ability to assess consumers’ individual willingness to pay, (ii) the absence of or limited arbitrage,[22] and (iii) the presence of market power.[23] These preconditions need to be reconsidered in the online environment. Increasingly powerful tools using personal data are deployed by suppliers to influence online consumer purchasing decisions.[24] This influence may lead to “micro-market place chambers,” where consumers’ judgments of competitive alternatives are blurred.[25] This phenomenon is amplified for customers of large retail or service platforms (such as Amazon and Uber) where market power and control may hide beneath seemingly competitive prices.[26] APP may even occur for goods or services susceptible to arbitrage (i.e., which can be resold) or in (imperfectly) competitive markets.[27]
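In stylized terms (a standard textbook formulation, not drawn from the cited sources), the contrast these preconditions enable can be written as follows. A supplier with unit cost \(c\) facing demand \(Q(p)\) earns, under uniform pricing and under perfect (first-degree) price discrimination respectively:

\[
\pi_{\text{uniform}} = \max_{p}\,(p - c)\,Q(p),
\qquad
\pi_{\text{APP}} = \sum_{i\,:\,v_i \ge c} (v_i - c),
\]

where \(v_i\) is buyer \(i\)’s maximum willingness to pay. Because each buyer is charged exactly \(v_i\), the personalized scheme transfers the entire consumer surplus to the supplier, which is why the ability to estimate \(v_i\), to prevent resale, and to avoid being undercut by competitors are preconditions for the practice.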

Various surveys indicate a strong consumer dislike of APP, which is viewed as unfair.[28] Such negative perceptions go beyond instances where APP differentiates on a prohibited ground; they extend to any form of price personalization based on consumer profiling through the use of personal information.[29] As a result, one can reasonably predict that retailers will either refrain from the practice or conceal it so as not to upset their consumer base. In fact, the ability to hide APP is arguably another precondition for it to take place effectively.[30] This lack of transparency impacts the burden of proof in allegations of discrimination on prohibited grounds.[31]

II. Supplier Pricing Practices and Anti-Discrimination Law

A. What Constitutes Discrimination under Human Rights Codes?

Federal and provincial human rights codes provide individuals with the right to equal treatment, free from discrimination on enumerated prohibited grounds. Relevant to APP are the protection against discriminatory treatment in the provision of goods or services generally available to the public[32] and the right to contract on equal terms.[33] These prohibitions cover the refusal to sell goods or services as well as differential terms, such as refusing to provide personal financing or providing it on prohibitive terms. The Canadian Human Rights Act [CHRA][34] applies to federally regulated undertakings (such as banks, airlines, rail enterprises, and Crown corporations).[35] The Human Rights Code of Ontario [HRCO][36] and similar legislation in other provinces[37] apply to individuals and organizations in both the public and private sectors, unless they fall under exclusive federal jurisdiction.[38] This legislation forms the main body of anti-discrimination law applicable to commercial undertakings and practices such as APP.[39]

Prohibited grounds of discrimination are similar across the federal and provincial human rights codes, with some variations.[40] For instance, the HRCO prohibits discrimination on the basis of “race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, marital status, family status or disability.”[41] All these prohibited grounds are potentially relevant to APP, given how accurately algorithms deployed in e-commerce can infer our demographic information, thereby profiling our “biographical core.”[42]

The Supreme Court of Canada has reiterated the quasi-constitutional status of human rights codes on numerous occasions.[43] The rights they provide warrant a large and liberal interpretation and the exceptions thereto are narrow.[44] The codes prohibit direct and indirect discrimination (including constructive discrimination) irrespective of intent.[45] Unlike direct prohibited discrimination,[46] indirect discrimination may occur when otherwise neutral considerations or policies in contractual terms, or in the offer of goods or services, nevertheless negatively impact certain groups.[47]

The Supreme Court’s decision in Meiorin[48] has brought a unified approach to discrimination, whether direct or indirect.[49] The complainant bears the burden of establishing discrimination prima facie[50] by showing (1) that they have a characteristic protected from discrimination under the relevant human rights code; (2) that they experienced an adverse impact with respect to the provision of the good or service; and (3) that the protected characteristic was a factor in the adverse impact.[51] Despite this unified approach to discrimination, practical differences remain between direct and indirect discrimination, including potentially greater evidentiary hurdles to establish indirect discrimination, even if the requirement is only prima facie.[52]

The following section examines how commercial practices surrounding pricing terms or access to publicly available goods or services were held to be prima facie discrimination by courts or tribunals, with additional examples from research reports. It provides further examples of how APP may lead to direct or indirect discrimination. It then investigates why pricing practices or access to goods or services that appear discriminatory on their face may not necessarily violate human rights codes and how this is relevant to APP. It also raises some of the difficulties inherent to APP in establishing prohibited forms of discrimination.

Pricing practices involving direct prima facie discrimination include the refusal to offer personal financing, or the offer of financing on prohibitive terms, based on race.[53] They also include charging higher automobile insurance rates according to age or gender, a long-established form of price personalization in the insurance industry. The fact that some human rights codes explicitly allow these forms of discrimination in insurance contracts[54] does not negate that discrimination occurs within the ambit of the relevant human rights code.[55] A provider of goods or services must nonetheless prove that such discrimination is justified under a bona fide requirement.[56] Any other form of APP in which a prohibited ground is directly embedded as a determinant factor in the pricing algorithm could amount to prima facie discrimination. Other examples of direct discrimination surrounding pricing terms include a restaurant asking Black patrons to prepay for their meals while not imposing a similar condition on others.[57] Differential levels of access to commercial venues or services for people with a disability have also been held to be prima facie direct discrimination.[58]

Indirect prima facie discrimination on pricing terms could include online concert ticket sales in which accessible seats are sold at a higher price than non-accessible ones, given their specific locations in selected seating areas.[59] Similarly, indirect prohibited discrimination could occur when banks or other lenders charge more for their services in a “financial desert,” owing to the lack of competition and the expectation that customers are unlikely to shop around. Discriminating on the basis of geographic location could constitute indirect discrimination if it disproportionately affects protected (e.g., racialized) groups under human rights codes.[60]

Not all forms of differentiation, even on protected grounds, are prohibited under human rights codes.[61] In addition to built-in exemptions for insurance contracts[62] or differential treatment based on age,[63] countervailing social norms or the difficulty of establishing prejudice may bar a discrimination claim from succeeding. The section that follows explores the blurry contours of commercial practices such as the “pink tax,” “ladies’ night,” and single-gender clubs,[64] and how they may or may not contravene anti-discrimination law. This exercise leads to a more nuanced view of what constitutes prohibited forms of discrimination, with important ramifications for APP.

The “pink tax” is a grey area of anti-discrimination law directly relevant to APP. It refers to commercial practices whereby women’s clothing and products are priced higher than similar clothing or products marketed to men. On its face, the “pink tax” could be a prohibited form of discrimination based on gender. In practice, however, legislative efforts and legal action attacking it have often been unsuccessful.[65] As with the difficulty of distinguishing APP from dynamic pricing[66] or price versioning,[67] the line between prohibited discrimination and product differentiation may be blurry in the case of the “pink tax.” While detailed studies demonstrate the “pink tax” phenomenon,[68] lawmakers are not readily convinced that it amounts to a prohibited form of discrimination.[69] Two products might be identical in production and content, yet differences such as the higher marketing costs of women’s products can affect pricing. This argument was successfully made against the claim that the “pink tax” was a prohibited form of discrimination.[70] Similarly, courts have taken the debatable position that women are not constrained to purchase only products and apparel marketed to them, concluding that the commercial practice did not violate anti-discrimination law.[71]

If the pink tax essentially results from assumptions that women are willing to pay more for certain goods and services,[72] and to the extent that it demonstrably amplifies wealth inequality between men and women,[73] then the pink tax could constitute prima facie discrimination. APP, which aims to get as close as possible to a buyer’s maximum willingness to pay, could exacerbate this phenomenon: that is, it could extend the pink tax beyond women’s clothing and products to gender-neutral goods or services. This could warrant legislative attention tackling the issue at a systemic level, rather than at the level of a single transaction, to overcome the burden of proving an adverse impact that is more than de minimis.

There are other commercial practices that appear at first glance to discriminate on prohibited grounds but do not necessarily contravene human rights codes, such as “ladies’ night” (i.e., offering a lower cover charge or other preferential treatment to women in bars or restaurants). Women-only facilities open to the public also fall into this category. Claims that these practices are prohibited forms of discrimination based on gender have been unsuccessful.[74] Even if, in such examples, men represent a protected group[75] and their gender is a factor of differentiation, it would be difficult for them to demonstrate that they have suffered an adverse impact under the prima facie discrimination test.[76] Such impact must be more than de minimis. It includes humiliation, the perpetuation of stigmatization, or arbitrary detrimental exclusion.[77] For instance, refusing to offer personal financing to a racialized or traditionally marginalized group, or doing so on prohibitive terms, would likely produce such a negative impact. In contrast, men will have a harder time establishing that they suffered such prejudice as a result of the exclusivity of women-only facilities or preferential pricing for women.[78] And where a complainant can show injury, such as a significant cost difference or denial of access with no reasonable alternative, the differential treatment may still be justified by countervailing social norms or benefits to another protected group under the bona fide requirement.[79] In the case of APP, the need to show a detriment that is more than de minimis (e.g., a small price variation) may pose a challenge in establishing prima facie discrimination, unless the cumulative effects of APP can be factored in.
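A stylized illustration, using purely hypothetical figures, shows why this cumulative framing matters. If APP adds an average discriminatory markup of \(\delta\) to each of a consumer’s \(n\) affected transactions in a year, the aggregate detriment is

\[
D = n \times \delta, \qquad \text{e.g., } D = 300 \times \$0.50 = \$150 \text{ per year},
\]

a figure far harder to dismiss as de minimis than any single fifty-cent price difference viewed in isolation.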

There are other factors specific to APP that make its compliance with anti-discrimination law difficult to ascertain. First, APP offerings are continually subject to change, so discrimination will be difficult to detect: there might be no base reference price against which to establish it. Second, APP can appear in the form of real-time dynamic pricing.[80] Third, and more generally, the lack of transparency surrounding confidential business information regarding algorithms, data sets, and the criteria upon which algorithms set prices may impair the ability to make a successful claim of prima facie discrimination.[81] For these reasons, and to the extent that determining a buyer’s maximum willingness to pay may in some cases automatically entail differentiation on prohibited grounds, APP may give rise to even more human rights code violations than standardized retail pricing environments do, while making those violations harder to detect.
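One way to surface such hidden differentiation, used in empirical studies of online pricing and sketched here in minimal form with a hypothetical interface, is a paired audit: querying the same product at the same moment under profiles identical except for one protected attribute, so that dynamic pricing affects both quotes equally and any persistent gap points to personalization.

```python
# Minimal paired-audit sketch (hypothetical `quote_price` interface). Two
# profiles identical except for one protected attribute are quoted at (nearly)
# the same moment; a gap that persists in one direction across many trials is
# evidence of personalization on that attribute rather than dynamic pricing.

def paired_audit(quote_price, product_id, profile_a, profile_b, trials=50):
    gaps = []
    for _ in range(trials):
        price_a = quote_price(product_id, profile_a)
        price_b = quote_price(product_id, profile_b)
        gaps.append(price_a - price_b)
    mean_gap = sum(gaps) / len(gaps)
    share_positive = sum(1 for g in gaps if g > 0) / len(gaps)
    return mean_gap, share_positive   # e.g., a positive gap in 95% of trials
```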

In short, the commercial practice of APP may lead to direct or indirect discrimination contrary to human rights codes—potentially more so than in brick-and-mortar businesses—despite the difficulties in detecting, ascertaining, and establishing prima facie discrimination. That said, not all forms of differentiation, even on prohibited grounds, contravene anti-discrimination law. In some cases, the need to demonstrate an adverse impact resulting from pricing practices or access differentiation will be a challenge in establishing prima facie discrimination. This reality reveals a more nuanced application of anti-discrimination law to APP.

B. Whether the Discriminatory Standard Is Based on a Bona Fide Requirement

Assuming a prima facie case of discrimination is proven with respect to APP or another commercial practice, discriminatory practices may still be allowed if they meet the bona fide requirement exception.[82] This exception applies in the employment context or to suppliers of goods or services to the public with necessary variations, as dictated by the language of the relevant human rights code.[83] For instance, a variation of the bona fide exception applies to prima facie discriminatory practices in insurance contracts.[84]

In order to benefit from the exception, the respondent must establish on a balance of probabilities that (1) the supplier of goods or services adopted the standard for a purpose rationally connected to the provision of the good or service; (2) the standard was adopted in the good faith belief that it was necessary to the fulfillment of that legitimate purpose; and (3) the standard is reasonably necessary to the accomplishment of that legitimate purpose.[85] The respondent must therefore demonstrate the impossibility of accommodating the claimant without imposing undue hardship upon themselves as a goods or services provider.[86] This undue hardship could include prohibitive financial costs, increased safety risk, or undermining the positive benefits to women patrons at women-only fitness club facilities.[87] As an exception to anti-discrimination principles, the bona fide requirement must be interpreted restrictively.[88]

With APP, the question arises whether the respondent can prove that the prima facie discrimination is rationally connected to the provision of goods or services. As set out in Meiorin, the business practice cannot meet the bona fide requirement exception unless the purpose of the discriminatory standard is legitimate.[89] In the case of APP, discrimination would be exercised to get as close as possible to the buyer’s maximum willingness to pay.[90] The business rationale of maximizing supplier profits would be, on its own, insufficient to justify prima facie discrimination. Several decades of standardized retail pricing without recourse to profiling buyers through their personal data (profiling which, at this stage of the analysis, would be considered prima facie discrimination) weaken the argument that the discriminatory requirement serves a rationally connected, legitimate purpose. This business rationale contrasts with a legitimate purpose based on the safety of a product or a reasonable customer service requirement. To claim that profit maximization is a legitimate purpose in itself would be contrary to the quasi-constitutional protection offered by human rights codes and would render the principles they enshrine meaningless. Indeed, few businesses do not count profit maximization among their primary goals.[91]

In addition, APP would in some cases also fail the second element of the bona fide requirement exception, if the algorithm’s criteria were specifically created to differentiate price on a prohibited ground, leading to prima facie discrimination. Such criteria, coupled with the absence of a mechanism to prevent discrimination contrary to human rights codes, could be characterized as evidence of a discriminatory animus.[92]

Insofar as the commercial practice of APP satisfies the first two elements of the bona fide requirement exception laid out in Meiorin, the supplier of goods or services would then have to prove that the discriminatory standard is reasonably necessary, that is, that its removal, after investigating non-discriminatory alternatives, would cause undue hardship to its business.[93] For example, a respondent may argue that modifying its algorithms to make pricing compliant with human rights law would engender prohibitive costs seriously harmful to its business.[94] General “impressionistic” evidence of increased risks or costs entailed by changing the standard to accommodate the claimant will not satisfy the respondent’s burden of proof.[95] At this stage of the analysis, the burden is strictly a matter of bringing satisfactory evidence of high financial costs or other similar burdens.

As an earlier form of algorithmic price personalization, the case of insurance contracts[96] is instructive regarding the dilemma surrounding sound industry practices and respect for anti-discrimination law principles. The leading decision applying the bona fide exemption for insurance contracts is Zurich Insurance Co. v. Ontario (Human Rights Commission) [Zurich Insurance].[97] In this case, the Supreme Court dealt for the first time with the provision in the HRCO allowing insurance providers to discriminate “on reasonable and bona fide grounds because of age, sex, marital status, family status or [handicap].”[98] The Court acknowledged that the specificities of insurance contracts warrant an adaptation of the bona fide requirement.[99]

Insurance premiums are set by assessing risk among various population groups which sometimes overlap with prohibited grounds, as explicitly recognized by the legislative exemption in the HRCO.[100] The Court stated that an insurance practice is “reasonable” in the context of the specific human rights code exemption if “(a) it is based on a sound and accepted insurance practice” that is “one which it is desirable to adopt for the purpose of achieving the legitimate business objective of charging premiums that are commensurate with risk;” and (b) “there is no practical alternative,” which is a question of fact.[101] Furthermore, the bona fide test is met if the practice “was adopted honestly, in the interests of sound and accepted business practice and not for the purpose of defeating the rights protected under the Code.”[102]

After acknowledging that the practice was undoubtedly sound and based on accepted business practice,[103] relying on credible actuarial evidence for risk assessment, Justice Sopinka, for the majority, noted that assessing the reasonableness of the practice required more.[104] Simply allowing “statistically supportable” discrimination would defeat the intent of human rights legislation; it would “perpetuate traditional stereotypes with all of their invidious prejudices.”[105] The respondent had to demonstrate that it had no reasonable alternative, and Justice Sopinka was satisfied on this front. Justice L’Heureux-Dubé and Justice McLachlin (as she then was) disagreed on that point in separate dissenting reasons.[106] For Justice McLachlin, the respondent did not meet its burden of proof because of its own failure to collect the relevant data demonstrating that a reasonable alternative did not in fact exist.[107] To confuse the absence of a reasonable alternative with the absence of proof of such an alternative is to shift the burden of proof from the party prima facie in violation of the Code onto the complainant.[108] For Justice McLachlin, this approach encourages the maintenance of discriminatory practices rather than reform aimed at achieving the Code’s objectives.[109]

The tensions highlighted in Zurich Insurance between anti-discrimination law and the extent to which insurance providers should be exempted from human rights codes provide relevant insights for APP. They are also reminiscent of current concerns and legislative reform debates on how to regulate the risks of biased outputs produced by AI systems.[110] This includes requiring AI system producers and users to have ex ante mechanisms in place to prevent violations of anti-discrimination law.[111]

In an era of advanced personalized data analytics, should insurance companies still be allowed to build prohibited grounds of discrimination into their actuarial assessments of risk and corresponding insurance rates? Or, on the contrary, should individuals be assessed on their own merits, freed from the large-group risk characteristics of the class(es) to which they belong? The growing lack of uniformity across jurisdictions in Canada, the United States, and other countries allowing gender, age, and other forms of discrimination in insurance contracts calls into question the “sound accepted business practice” argument for maintaining at least some of the statutory anti-discrimination exemptions in insurance contracts.[112] Furthermore, evolving societal norms on gender expression and fluidity call into question the value of gender as a meaningful and distinct categorization. Some insurance companies have taken note of this evolution and offer different insurance rates to transgender and non-binary people who do not identify exclusively as either female or male.[113] In light of these developments, the impracticality invoked by Justice Sopinka in Zurich Insurance, whereby insurance rates would be difficult to determine without reference to specific groups defined by prohibited grounds of discrimination, is less and less convincing.

Decisions concerning insurance services post-Zurich Insurance suggest that courts and tribunals are willing to accept current discriminatory practices based on age or gender.[114] Justice Sopinka’s warning to insurers to avoid incorporating prohibited grounds of discrimination in their actuarial calculations,[115] and the dissents’ harsh criticism of these practices, did little to prompt insurers to change their ways, or legislators to re-examine the exemptions in human rights codes, at least in Ontario. Thirty years later, the impugned provision of the HRCO is still in force, and other human rights codes contain similar exemptions for insurance contracts.[116]

In short, once a complainant establishes that a form of APP is prima facie discrimination (overcoming the hurdles raised above),[117] the supplier resorting to APP will not easily be exempted under a bona fide business requirement, leading to a violation of anti-discrimination law. Discriminating between consumers on prohibited grounds by profiling them through the collection of their personal information[118] for the sole purpose of maximizing supplier profit would not on its own be a legitimate and reasonable purpose capable of setting aside the right to equal treatment of designated groups enshrined in human rights law. Furthermore, evolving social norms on gender, and increasingly precise big data analytics, call for a re-examination of the justifiability of human rights code exemptions for insurance contracts. The next part examines how these findings may inform ongoing AI governance legislative reform and future regulation, and how such regulation may in turn fill the gaps identified in relation to evidentiary matters and compliance with anti-discrimination law.

III. Algorithmic Price Personalization (APP) and AI Governance Regulation

There are serious concerns, supported by empirical research, that algorithmic decision-making tools and AI systems perpetuate bias and discrimination on prohibited grounds, whether directly or indirectly.[119] Ongoing Canadian legislative reform on AI governance seeks to address discriminatory outputs by imposing obligations on firms using AI systems to identify, measure, and mitigate their risks of contravening anti-discrimination law, as well as by imposing duties to record and disclose the tasks generated by the AI systems.[120] Under such a regulatory regime, APP would likely be subject to the highest level of obligations, given its categorization as a “high-impact system,”[121] a category applicable to systems that generate decisions on “the type or cost of services to be provided to an individual.”[122] The AI governance regime confers administrative and investigative powers on both the designated Minister and the AI and Data Commissioner, who are empowered to order AI systems to be shut down, with substantial fines for non-compliance.[123] Through its explicit reference to anti-discrimination law, the AI governance regime is meant to reinforce this body of law ex ante.

While this article does not provide a detailed account of all the implications and potential shortcomings of this AI governance regime, the analysis presented here offers some insight into how this and similar regimes may indeed reinforce compliance with anti-discrimination law and its key objectives with regard to APP.

Overall, a successful AI governance regime should decrease the occurrence of prohibited forms of discrimination. Additionally, when claims based on anti-discrimination law arise, it should improve the evidentiary process for claimants. A successful regime will give due weight to the quasi-constitutional status of anti-discrimination law and strengthen compliance for the benefit of members of a protected group, while clarifying the path toward compliance for service providers.

In the case of APP, this article identified two broad categories of problems. Under the first category, it discussed issues related to the assessment of discrimination. For instance, constantly fluctuating prices make a reference price difficult to establish, creating obstacles to proving prima facie discrimination. This article also discussed how the line may be blurry between pricing goods or services on the basis of personal characteristics (which may violate human rights codes) and dynamic pricing. Obligations in the proposed federal AI governance regime requiring users of AI systems to assess and mitigate the risk of biased output (i.e., output contrary to anti-discrimination law) entail the written recording of steps undertaken, regular testing, and adjustments toward such risk mitigation.[124] These obligations include ensuring that algorithms do not incorporate prohibited forms of discrimination, and they should also include mechanisms by which service providers regularly monitor and investigate pricing trends that differentiate one or more members of protected groups, as sketched below. Such duties would give providers the opportunity to rectify their pricing algorithms accordingly. How firms can actually and effectively implement legal parameters to avoid biased output and comply with anti-discrimination law is a complex question, giving rise to ample scholarly debate beyond the scope of this article.[125]
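What such regular monitoring might look like is sketched below; this is a minimal illustration assuming a firm logs quoted prices alongside lawfully collected group information, with illustrative field names and thresholds rather than statutory requirements.

```python
from collections import defaultdict

def price_disparity_report(transactions, threshold=0.02):
    """Flag products whose mean quoted price differs across groups by more than
    `threshold` (as a share of the overall mean). Illustrative only: real
    compliance testing would require proper statistical and legal treatment."""
    by_product = defaultdict(lambda: defaultdict(list))
    for t in transactions:   # t = {"product": ..., "group": ..., "price": ...}
        by_product[t["product"]][t["group"]].append(t["price"])
    flagged = {}
    for product, groups in by_product.items():
        means = {g: sum(p) / len(p) for g, p in groups.items()}
        total = sum(len(p) for p in groups.values())
        overall = sum(sum(p) for p in groups.values()) / total
        gap = (max(means.values()) - min(means.values())) / overall
        if gap > threshold:
            flagged[product] = {"group_means": means, "relative_gap": gap}
    return flagged   # candidates for investigation and algorithm rectification
```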

Under the second category of issues, this article discussed algorithmic transparency problems, which lead to evidentiary difficulties for claimants seeking to establish prima facie discrimination. Obligations requiring service providers using AI systems to maintain an accountability framework,[126] and to disclose it during tribunal or court proceedings or through a regulatory order (including demonstrating the effectiveness of their anti-discrimination mechanisms), could alleviate the evidentiary burden on human rights complainants.[127] Moreover, such obligations would facilitate service providers’ ability to respond adequately and more efficiently to human rights claims. Overall, the powers to monitor, investigate, shut down, and sanction non-compliant systems vested in the AI governance authority should encourage greater compliance with anti-discrimination law.
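In minimal form, the recordkeeping side of such an accountability framework might amount to logging each pricing decision with the inputs and model version that produced it, so the record can later be disclosed; the fields below are illustrative assumptions, not requirements drawn from the proposed legislation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PricingDecisionRecord:
    """Illustrative audit-log entry for one algorithmic pricing decision
    (field names are assumptions, not drawn from the proposed legislation)."""
    product_id: str
    quoted_price: float
    model_version: str      # which algorithm/model produced the quote
    input_features: dict    # inputs used, enabling later review for prohibited grounds
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```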

The effects of compliance with AI governance regimes could include a reduction in differentiation on prohibited grounds of discrimination even if, as discussed earlier, such differentiation does not always contravene anti-discrimination law.[128] In other words, the difficulty of implementing anti-discrimination law, with all its nuances and subtleties, could lead firms to restrain differentiation more than is legally required. Furthermore, at a time when some exemptions allowing discriminatory practices in insurance contracts no longer seem justified,[129] another side effect of firms’ implementation of AI governance requirements might be to revisit, if not eliminate, such long-established discriminatory practices altogether.

Conclusion

This article tackled an underexplored area of law: how APP may comply with or contravene anti-discrimination law in Canada. It examined several examples where APP can lead to direct or indirect discrimination. By its nature, APP may be hard to detect, rendering prima facie discrimination difficult to establish. The analysis also showed that not all price differentiation, even if based on prohibited grounds, necessarily violates anti-discrimination law, presenting a more nuanced view of the scope of application of this body of law. This article also explored the long-established exception for discrimination in insurance contracts, and whether it remains justified in the current context of changing social norms and a world of highly personalized datafication.

To the extent that a claimant overcomes the obstacles to successfully establishing discrimination prima facie, APP will not be easily exempted under a bona fide requirement, given APP’s lack of a legitimate business purpose under the stringent test of anti-discrimination law and its quasi-constitutional status.

This article bridged traditional anti-discrimination law with emerging AI governance regulation. It used the gaps identified by applying anti-discrimination law to APP to show how AI governance regulation could enhance anti-discrimination law and ensure greater compliance.

APP and the intensification of personalization in e-commerce raise broader issues beyond anti-discrimination law, such as in personal data protection law, competition law, and consumer law.[130] Similarly, the commercial practice of APP is one among many uses of algorithmic automated decision-making that give rise to serious societal concerns which hopefully will be addressed, at least in part, by AI system governance regulation.