Abstracts
Abstract
Artificial intelligence (AI) is variably identified as “job killer,” as “inhuman,” as “unpredictable” and “ungovernable,” but also as the greatest technological innovation in generations. Lawyers struggle with AI across a host of legal fields, including consumer protection, constitutional, employment, administrative, criminal, and refugee law. While AI is commonly discussed as a question of technological progress, its core challenge is a political one. As AI is used as a tool to review employment recruitment files, to assess loan, mortgage, or visa applications, and to collect and process data on “suspicious” actors, it deepens existing inequalities and socio-economic vulnerability. Given the rapidly expanding reach of AI into most facets of social, economic, and political life, AI shapes people’s access to democratic life in an unprecedented and increasingly precarious manner. Efforts to address its promises and perils through a lens of “AI ethics” can therefore hardly capture the scope of the challenges which arise from AI. Seen from a historical perspective, then, AI accentuates and reinforces trends of inequality, social alienation, and political volatility which began long before AI became implicated in society’s daily life.
Résumé
L’intelligence artificielle (IA) est perçue à la fois comme « tueuse d’emplois », comme « inhumaine », « imprévisible » et « ingouvernable », mais aussi comme la plus grande innovation technologique depuis des générations. Les juristes sont confrontés à l’IA dans une multitude de domaines juridiques, allant de la protection des consommateurs au droit constitutionnel, en passant par le droit du travail, le droit administratif, le droit pénal et le droit des réfugiés. Alors que l’IA est généralement considérée sous l’angle du progrès technologique, son principal défi est d’ordre politique. Utilisée pour le recrutement, l’octroi de prêts, d’hypothèques ou de visas, ainsi que pour la surveillance, elle aggrave les inégalités existantes et accroît la vulnérabilité socio-économique. Compte tenu de la portée croissante de l’IA dans la plupart des aspects de la vie sociale, économique et politique, l’IA façonne l’accès des citoyens à la vie démocratique d’une manière sans précédent et de plus en plus précaire. Les réponses axées sur « l’éthique de l’IA » peinent ainsi à saisir l’ampleur des enjeux qu’elle soulève. D’un point de vue historique, l’IA accentue et renforce les tendances à l’inégalité, à l’aliénation sociale et à l’instabilité politique qui ont commencé bien avant que l’IA n’intervienne dans la vie quotidienne de la société.
Article body
I. Artificial Intelligence as Epistemological and Political Challenge
The debate around artificial intelligence (AI) has become so multifaceted and multidimensional that an effort to situate a particular sector’s or discipline’s place within it requires several steps of stock-taking, contextualization, and translation across and between different rationalities. Substantive strands of discussion have addressed the technical aspects of machine learning, coding, and automated decision-making (ADM), while other debates focus on the implications of AI-driven problem-solving and organizing for questions of political authority, legitimacy, and privacy.[1] Other discussions have highlighted the vast scope of AI technology’s application across private and public sector services and institutions, which raises complex issues of transparency and accountability. Notwithstanding these differentiations, it is the ubiquity of AI and the assumption that its growth and expansion are inevitable which continue to prompt questions about the normative, ethical significance of AI—not only in relation to the modes of societal governance, but also to human nature.
As is typical in moments of high stakes and heightened complexity, one tends either to oversimplify in the name of rendering something “manageable” or to run away and lose oneself in hyperbole or outright delusion.[2] Echoing the burgeoning variety of views, fears, hopes, and doom pronouncements around AI, political scientists studying the implications of climate change have observed “a striking tension between attempts to depoliticize climate change by referencing science-based trajectories and technological fixes, while at the same time, social movements and other political actors openly politicize climate change by relating it to issues of justice, societal struggles, and political order.”[3] As for law, the obstacles are equally overwhelming, not only revealing the challenge of how to relate legal instruments to a new quality of tech-driven forms of contracting, service delivery, surveillance, and information storing, but also prompting lawyers to reflect on the nature and adequacy of law itself in the presence of artificial intelligence. As Arnaud Sée notes: “Regulation itself is not so much a question of law but one regarding the discourse around law.”[4]
It is on this scale that AI in many ways overlaps and intersects with other systemic challenges the law faces. A key one to consider here is climate change, which presents humanity with arguably overwhelming questions with regard to macro-economic policy-making; the regulation of natural resource extraction, manufacturing, and transport; as well as individual and collective consumer behavior and education.[5] In very similar, comprehensive ways, AI too must be understood as a political and epistemological challenge, sharing with climate change many of its characteristics in terms of technical and normative complexity.[6] Both are among the most existential challenges humanity has faced. The following reflections remain preliminary and may merely draw out, in broad strokes, some of the normative and epistemological implications of the type of specific investigations at the heart of the contributions to this issue of the McGill Law Journal—and within the wider research on AI—as it continues to proliferate at breathtaking speed.[7]
A. Cogito Ergo Sum in the Age of Artificial Intelligence
In writing the preamble to the Prêt pour l’IA report (Report)—authored by Sarah Gagnon-Turcotte and Réjean Roy and published in Montreal in January 2024—Luc Sirois, the head of Quebec’s Innovation Council (Council), emphasized that artificial intelligence cannot evolve outside a legal frame.[8] The comprehensive research which constitutes the basis of the Report was conducted over nine months and incorporates input and expertise from more than 250 participants and contributors. The Report offers a wealth of pertinent observations which complement a growing body of tangible policy work and which have the immediate benefit of concreteness and applicability. Given the complexity of the Council’s findings, no single recommendation is allotted more prominence than another. Even so, Sirois’s suggestion to understand the study as only the beginning of a larger reflection about our future with AI[9] can serve as a first key to unlocking the potential of the Report, which also touches on the concrete economic benefits of public—including provincial—investments in AI.[10] The ambitious four pillars on which that reflection should rest illustrate the stakes of this undertaking. To this end, the Council highlights the crucial importance of a) mapping the densifying landscape of AI applications, b) enhancing and improving education around the uses and challenges of AI, c) intensifying research into AI, and d) securing Quebec’s commitment to support the continued digitization of public services, for example in the health and transportation sectors.[11]
The notion of “encadrement”—framing—is the Report’s red thread and arguably drives its core intervention. It gains even more importance in light of the Report’s insistence on the interdisciplinary character of AI and the ensuing need for governmental support to facilitate and strengthen interdisciplinary collaborations between academics, industry partners, experts, and civil society members.[12] Prêt pour l’IA pursues the creation of a robust and explicitly legal framework for artificial intelligence while making explicit reference to the widely noted “Montreal Declaration for a Responsible Development of Artificial Intelligence,” announced on 3 November 2017.[13] The latter has since been regarded as an important milestone in arguing for the need to understand the engagement with AI as a collective, societal, and democratic challenge.[14] The Report also recognizes the distinctly interdisciplinary nature of AI[15] and provides an excellent map of existing and emerging AI-focused legislation, institutional initiatives, and investment-related advances on the provincial and federal levels, as well as in the United States and the European Union. Even so, its findings are not always reassuring, especially when it comes to the existing deficiencies with regard to a robust, cross-disciplinary, and multi-departmental culture of research and education on AI.[16]
As has been the case with the “Montreal Declaration,” the Report contributes to a now global debate around “AI ethics,” which, due to its potentially infinite range of issues under consideration, in and of itself is an inherently precarious and volatile undertaking. With AI assuming an ever-growing role in both public and private decision-making processes, the normative implications of AI ethics are significant. An engagement with the ethics of AI-driven governance requires a close scrutiny of the political economy of how power is allocated, used, and held accountable.[17] In aspiring to provide policy-makers and regulators with concrete, tangible, and thoroughly scrutinized recommendations, AI ethics must build on ongoing, critical, and insightful assessments of an evolving AI governance landscape.[18] Due to AI applications’ enormous impact on all aspects of public and private life, there is no “light touch” approach to engaging with AI and its significance.[19] As per Luciano Floridi:
The digital “cuts and pastes” reality, in the sense that it couples, decouples, or recouples features of the world—and therefore our corresponding assumptions about them—which we never thought could be anything but indivisible and unchangeable. It splits apart and fuses the “atoms” of our experience and culture, so to speak. It changes the bed of the river, to use a Wittgensteinian metaphor.[20]
This suggests that what continue to be needed are ambitious and bold investigations into the ethical questions that a transformative and arguably unlimited technology such as AI presents us with. As Pieter Verdegem recently remarked:
The confluence of factors—the availability of powerful computing capacity, new techniques in machine/deep learning leading to more sophisticated algorithms and the growing availability of data with which to train these algorithms—enable AI to be deployed far more extensively. AI now seems ready to have a deep impact on our society and economy.[21]
As machine learning technology continues to advance, questions regarding transparency, accountability, or even “explainability” will only become more pressing.[22] Similarly, Kate Crawford, author of the illuminating study Atlas of AI (2021),[23] highlights that large language models should be seen as the most important technological innovation since the World Wide Web.[24]
B. Voice and Agency in AI Discourses
With that, another problem presents itself. The taken-for-granted first person in proliferating policy lectures, white papers, and reports on the risks and benefits of AI wrongly assumes voice and agency for those who have either never been on the radar or are otherwise intentionally excluded from the deliberative discourse universe which many AI and AI-ethics discussions seem to take for granted.[25] Unpacking the silencing uses of “we” in these and similar discussions[26] requires a critical inquiry into the speakers’ and spokespersons’ unreflected positionalities.[27] In much of the literature on burgeoning AI applications and their associated ethical challenges, a universal, all-inclusive vantage point is habitually presupposed in talk of “we,” “our future,” or even—more hyperbolically still—“the future of humanity.” Such assertions repeat long-standing practices of marginalization and exclusion.[28] The uncritical use of “we” and “us” reveals a notion and preconception of neutrality and universality that stands in stark contrast to AI’s highly uneven interventions in different communities.[29] In a recent interview, Crawford underlined that:
AI systems are profoundly material. But the imaginaries of AI are ethereal: visions of immaterial code, abstract mathematics, and algorithms in the cloud. On the contrary, AI is made of minerals, energy, and vast amounts of water. Data centers generate immense heat that contributes to climate change and the construction and maintenance of AI depends on underpaid labor in the Global South.[30]
The sheer scope of AI’s societal impact poses significant challenges in terms of where to direct any ethical demands. While such demands are alive in philosophical and policy discussions, they simultaneously shape concrete and tangible interventions in the spheres of work and production,[31] education,[32] health,[33] and housing,[34] to name just a few.[35] Such concerns echo the prevailing sentiment of destabilizing personal fatigue which feeds into a state of collective disillusionment and apathy with regard to shrinking prospects of a more equitable and sustainable future. It therefore does not come as a surprise that ethical anxieties already accompanied the very first iterations of AI-related machine learning[36] and seem to be expanding today in breathtaking tandem with the staggering proliferation of AI applications.[37] It is wise to take seriously the mental health costs of private lives increasingly shaped by AI—costs significantly aggravated during the pandemic.[38] As per Jonathan Crary:
For the majority of the earth’s population on whom it has been imposed, the internet complex is the implacable engine of addiction, loneliness, false hopes, cruelty, psychosis, indebtedness, squandered life, the corrosion of memory, and social disintegration. All of its touted benefits are rendered irrelevant or secondary by its injurious and sociocidal impacts.[39]
While on a colloquial, quotidian level, everyone speaks about “too much screen time,” “listening phones,” and the sheer ubiquity of data-collecting devices, the digitalization of human interaction and the ways we spend our time online have long become the topic of intense scholarly inquiry.[40] The global COVID-19 pandemic amplified and aggravated already existing trends towards screen addiction, isolation, and alienation.[41] “Doomscrolling,” a term “coined in 2018 ... [referring] to a state of media use typically characterized as individuals persistently scrolling through their social media newsfeeds with an obsessive focus on distressing, depressing, or otherwise negative information,” captures a particularly dark place in people’s experience of their warped interactions with others while persistently on a screen.[42] AI has played, and will continue to play, a key role in the constitution of such dark places, contributing to a time- and energy-devouring immersion into accelerating moving images and information in a context of “digital capitalism.”[43] As critical data scholars have argued, the key feature of this stage of economic development is the profound degree of data extractivism, which unfolds through the virtually unlimited mining of personal data by technological means and its use for a wide variety of commercial, military, and security applications.[44] The deep penetration of digital technology into every aspect of people’s lives, including “finance, healthcare, criminal justice, and hiring,”[45] has become a defining feature not only of the economic system but of the totality of social relations as such, raising particular fears around privacy protection and AI’s unevenly distributed socioeconomic benefits.[46]
II. Computer Ergo Sum
AI applications themselves prompt intellectual and emotional responses ranging from ignorance to consternation, from fear to terror, and from wonder to renewed religious belief.[47] In all of this, AI appears to break down the boundaries on which much of Western, post-Cartesian human understanding has come to rest—cogito ergo sum.[48] As Norbert Wiener dryly remarked in his famous 1960 essay:
Disastrous results are to be expected not merely in the world of fairy tales but in the real world wherever two agencies essentially foreign to each other are coupled in the attempt to achieve a common purpose. If the communication between these two agencies as to the nature of this purpose is incomplete, it must only be expected that the results of this cooperation will be unsatisfactory.[49]
Since its inception, AI has not only challenged and—ever more believably—threatened to undermine the differences between human and robot; it has also turned human-based conceptions of autonomy and of (rational, ethical, and accountable) decision-making on their head.[50]
What results is more than a technology in the sense of an array of instruments, however complex. Rather, AI emerges as a spatialization of processes of human-non-human interaction, as well as machine-machine interaction, in which the question of “who is in charge” becomes one of pressing epistemological and political urgency. Spatialization here refers to the creation—through technological means—of spaces which exist independently of, say, legal jurisdictional or politically defined territorial boundaries. The encroachment of AI into different spaces—for example, in the areas of delivery and administration of humanitarian aid—raises deep concerns regarding the transparency and accountability of power exercised within these spaces.[51] These concerns have been addressed with growing intensity by a wide range of humanities scholars who remain skeptical of pronouncements that democratic practices can survive AI unscathed.[52]
A. AI: For, Against, and (Used) by Lawyers—the “Framing” Prerogative
For lawyers, finding answers to this challenge is an urgent concern, as the AI-driven and AI-based processes of ADM constitute a governance regime that arguably eludes many well-tested conceptual and doctrinal approaches to problems of both authority and legitimacy.[53] A decisive element here is the much lamented and yet by now deeply entrenched shift from public to private agencies for a growing (and, perhaps, uncontainable) range of services and institutional processes. While the use of algorithmic governance in the form of automated decision-making and other uses of AI poses distinct problems of review and accountability, these challenges are exacerbated in the private sector, not least because the actual locus of decision-making power can be harder to identify than in a formally structured, public institutional infrastructure.[54] The emerging spaces of AI cut across legal and political boundaries and challenge existing understandings of political authority and democratic legitimacy. As Katharina Pistor argues:
[D]igital power needs people to produce the raw material on which its power rests, but is less dependent on territory. It is exercised not through physical coercion, but by surveilling and shaping the behavior of individuals and groups indirectly. In doing so, digital power benefits from information asymmetries between the data harvesters and their clients on one hand, and the data producers who also serve as targeted customers, on the other.[55]
In fact, it is a political economy lens that renders visible the degrees of continuity, as well as amplification, in the application of AI to functions of economic and financial governance.[56] For example, what comes to the surface are glimpses of the deeply transformative dynamics of assetization, whose roots lie in the all-encompassing financialization unleashed since the 1970s,[57] and whereby every good and service, and the agents themselves—along with their state of health, income, and future prospects—eventually became data points for an insatiable information-processing and value-extracting infrastructure.[58]
B. Financialization as Historical Inheritance and Condition
It follows that when one speaks of the challenges arising from AI for law, it is imperative to acknowledge the difficulty of doing justice to AI as a distinct realm of technological innovation. Additionally, it is crucial to assess both its integration into continuing adaptations of production, governance, and information processes and its disruption of democratic processes of deliberation and accountability. To again reference the Quebec Report of January 2024, it is the encadrement—that is, the framing of AI, its evolution, and its applications—that the Report’s authors argue must be approached as part of a wider process of critical engagement with an economy which over time has structurally entrenched pre-existing and privileged positions of power and access.
It is hard to imagine—let alone desirable—separating questions regarding the ethics of AI or, even more generally, the future of AI from those being put forward by those who have consistently—and with increasing rather than decreasing intensity—been marginalized and excluded from dynamics of “growth” and prosperity.[59] AI, therefore, ought not to be seen as a distinct or novel problem which could be addressed in a timeless or context-less space. Instead, as has been shown with regard to the contentious relations between technological progress and socio-economic, cultural, and political evolution, neither can be understood in isolation from the other.[60] Much suggests that if the unpacking and interrogating of the normative justifications for this inegalitarian and exclusionary (as well as unsustainable) infrastructure is to be transformative, the encadrement of AI through law should not be separated from a critical engagement with the normative and institutional universe in which AI continues to evolve. It is striking to what degree the praise of AI’s “achievements,” as well as its innovativeness and promise, echoes the arguments that have supported neoliberal policies of individualization, responsibilization, and commodification of welfare state citizenship for decades.
All of these arose on the back of the deep-reaching financialization of public and private goods and their commodification, which turned everything and everyone into an asset and subjected things and people to the logics of the market. With that, whatever remained of progressive aspirations for the political governance of economic and financial transactions eventually ran up against the systemic privatization of public goods and, with it, the demonization of a state-based politics of economic governance.[61] It is a tragicomic déjà vu that, in February 2025, The Economist would once again lead with the headline “The Revolt Against Regulation,” providing a sobering reminder that the age of dismissing democratically made rules and regulations as “red tape” is not over.[62]
Where AI fits within this volatile political climate is anyone’s guess. The fast proliferation of AI-based applications which we can currently observe marks by all accounts a pivotal moment in history. It coincides with deep concern over the unsustainable costs of growth economics to humans and the environment, and over the deterioration of the social fabric in many societies after decades of shrinking public services. The use of AI in this environment is by no means a merely technical question. Rather, AI functions as an additional trigger for a continuing, critical engagement with the political economy of democratic and sustainable governance today.[63]
C. AI’s Jurisdiction? AI’s Ability to Create Its Own Space for Decision-making
In closing these brief reflections, it is helpful to remind ourselves of another facet of AI’s seemingly irreversible takeover of even those functions that would habitually be understood as being at the core of human-based political and ethical decision-making. As Fleur Johns highlights in her compelling new book, the word humanitarian “has been used to characterize everything from the use of military force, practices of military targeting, and the policing of human movement to the delivery of food, healthcare, and other emergency relief, the maintenance of refugee camps, efforts to promote democracy, and much more.”[64] In essence, she writes:
Digital humanitarianism is oriented toward the creation and maintenance of feedback loops designed to transmit signals of scarcity, profusion, need, and capacity among a range of human and nonhuman referents. Digital humanitarian activity aims to make accessible an incessant stream of digital output on or from the world in a format that is readable as “a surface of pure actuality.”[65]
Long predating the field of humanitarian intervention, with its highly contested justifications and repercussions in recent decades,[66] the field of humanitarian aid highlights the intersections between changing forms and instruments of “assistance,” international cooperation, and the privatization of public functions. However, it also spotlights clashes between deep-seated power asymmetries and neo-colonial interventionism in the name of human rights.[67] As such, in analyzing the contemporary practice of humanitarian aid, one effectively engages in carving out the contours of human and non-human agency as it is applied not only to the concrete delivery of a service on the ground, but also to its preparation, triage, and execution.[68] In an enlightening analysis of the role of private data sourcing companies in the humanitarian aid space, Mirca Madianou has recently noted that while:
Big data are seen as representative of the voice of affected people despite significant critiques about the epistemological, ontological, and ethical limitations of crisis data ... big data during disasters often exclude data from those most affected by a crisis, therefore reproducing inequalities. The lack of representativeness and the presence of temporal and other bias render the use of big data during emergencies potentially harmful.[69]
This example further highlights the stakes of AI applications in sensitive social and political fields. As reiterated by Mariano-Florentino Cuéllar and Aziz Huq in their 2022 essay on democratic AI regulation, it is far from evident at this point how to formulate an adequate democratic response to AI.[70] Further, as Jennifer Raso argues, we would be well advised to approach traditional legal and political theory concepts of agency in a new light and through an engagement with critical data studies and new materialism literature:
This work challenges conventional notions of who, or what, might be responsible for outcomes and who has (or ought to have) agential power. Critical new materialism scholarship, in particular, traces how subjects and objects are enmeshed. Its aims are both illustrative and political: the goal is to show how power functions in the world to make critical political economy analyses (and presumably their transformational outcomes) possible.[71]
The gradual and seemingly irreversible shift to ADM in the aforementioned example in the humanitarian aid space—as well as the other examples in loan distribution, recruitment, or border control—presents formidable normative and ethical challenges for a liberal political theory, for which the (rule of) law is a key component in the organization of daily democratic practice.[72] By significantly accentuating the dynamics whereby a growing number of sensitive transactions are assumed by market actors—as is already the case, say, in the areas of electronic, “blockchain” contracting or in the increasing use of AI in commercial arbitration[73]—ADM unleashes its dynamics in highly sensitive areas, ranging from bail and imprisonment conditions to decisions regarding access to mortgages, employment, or health care.[74] A key concern here is the difficulty of creating adequate forms of public oversight and democratic control of algorithmic governance processes.[75]
While, as noted earlier, such processes of replacing human choice with robots’ ADM remain the subject of critical investigation, there is another dimension to this shift, one bound to further undermine basic but essential tenets of representative and accountable political legitimacy. It lies not merely in how AI destabilizes systems of accountability and authority: AI is also seen to have become a space where decisions are produced (“output”) while the totality of the data it draws on (“input”) is impossible to account for.[76] That space is not obviously congruent with what lawyers call legal jurisdiction. AI’s space-creating ability, in which the gathering (“extraction”), processing, and applying of data is possible with almost no human input, fundamentally redraws the confines of the realm in which we once learned to engage the differences between government and governance in the context of increasing delegation of public authority (and legitimacy) to private actors.[77] It remains to be seen which lessons we can draw from that experience as we search for adequate regulatory frameworks for the emerging power infrastructures fuelled by AI.[78]
Appendices
Notes
-
[⬧]
“Artificial intelligence can only evolve in a legal framework.”
-
[1]
Daniel Mockle, “La question du droit dans la transformation numérique des administrations publiques” (2019) 49:2/3 Sherbrooke L Rev 223 at 231.
-
[2]
See generally Pankaj Mishra, Age of Anger: A History of the Present, 1st ed (New York: Farrar, Straus & Giroux, 2017); Karl Mathiesen, “Populists vs. the Planet: How Climate Became the New Culture War Front Line”, Politico (6 November 2022), online: <politico.com> [perma.cc/2PAY-GC4X].
-
[3]
Jens Marquardt & Markus Lederer, “Politicizing Climate Change in Times of Populism: An Introduction” (2022) 31:5 Envtl Politics 735 at 739. See also Julian Jacobs, “The Artificial Intelligence Shock and Socio-Political Polarization” (2024) 199 Technological Forecasting & Soc Change 1 (“AI appears to correspond with political polarization and divergences between winner and loser occupational groups” at 2).
-
[4]
Arnaud Sée, “La régulation des algorithmes : un nouveau modèle de globalisation ?” (2019) 5 Rev fr dr admin 830 at 830.
-
[5]
M Cristina De Stefano, María J Montes-Sancho & Timo Busch, “A Natural Resource-Based View of Climate Change: Innovation Challenges in the Automobile Industry” (2016) 139 J Cleaner Production 1436 at 1436; Lucas Bretschger & Karen Pittel, “Twenty Key Challenges in Environmental and Resource Economics” (2020) 77:4 Envtl & Resource Econs 725 at 739–41.
-
[6]
See generally Mathias Risse, Political Theory of the Digital Age: Where Artificial Intelligence Might Take Us (Cambridge: Cambridge University Press, 2023) at ch 5; Mark Coeckelbergh, “Democracy, Epistemic Agency, and AI: Political Epistemology in Times of Artificial Intelligence” (2023) 3 AI & Ethics 1341 at 1343. See also Federica Russo, Eric Schliesser & Jean Wagemans, “Connecting Ethics and Epistemology of AI” (2024) 39 AI & Society 1585 (“[i]n this new wave of interest, projects, and applications, the question of what one can do with an AI seems to have entered the central stage beside the already studied conceptual or theoretical questions” at 1586). For climate change as a particular epistemological challenge, see e.g. Martin Mahony & Mike Hulme, “Epistemic Geographies of Climate Change: Science, Space and Politics” (2018) 42:3 Progress in Human Geography 395 (“attention to the epistemic geographies of climate change means attention to the uneven geographies of scientific authority, the spatialities of the boundaries drawn between the scientific and the political, and the situated co-production of epistemic and normative commitments” at 396).
-
[7]
See the contributions by Alberto Salazar, Suzie Dunn, Pascale Chapdelaine, Céline Castets-Renard, Caroline Lequesne-Roth, Karen Eltis, Charlaine Bouchard, Gideon Christian, Jennifer Raso, and Teresa Scassa.
-
[8]
Sarah Gagnon-Turcotte & Réjean Roy, Prêt pour l’IA: Répondre au défi du développement et du déploiement responsables de l’IA au Québec (Montreal: Quebec Innovation Council, 2024) at x [Prêt pour l’IA].
-
[9]
Ibid (“ce rapport, je l’espère, n’est que le début d’une réflexion plus large sur notre avenir avec l’IA” at vii).
-
[10]
Alain McKenna, “La stratégie québécoise en intelligence artificielle est un échec”, Le Devoir (25 February 2022), online: <ledevoir.com> [perma.cc/R5CW-5JPM].
-
[11]
Prêt pour l’IA, supra note 8 at x; Steve Jacob & Seima Souissi, La fourniture de services publics à l’ère numérique : évolution du rôle et des compétences des employés de première ligne (Quebec: Université Laval, 2020) at 6.
-
[12]
Prêt pour l’IA, supra note 8 at xvi–xvii; see also Elvin Lim & Jonathan Chase, “Interdisciplinarity is a Core Part of AI’s Heritage and is Entwined with its Future”, Times Higher Education (8 November 2023), online: <timeshighereducation.com> [perma.cc/NV2S-VSK8]. Commenting on Norbert Wiener, they note: “By bringing together ideas from fields as diverse as philosophy, psychology, biology, sociology and mathematics, Wiener envisioned a science in which technology was developed to work in concert with humanity, based on a holistic, interdisciplinary understanding of both.”
-
[13]
For the announcement date, see Montreal Declaration on Responsible AI, “The Montréal Declaration for a Responsible Development of Artificial Intelligence” (last visited 3 February 2025), online: <declarationmontreal-iaresponsable.com> [perma.cc/N865-6DRS].
-
[14]
Yoshua Bengio, “The Montreal Declaration: Why We Must Develop AI Responsibly”, The Conversation (5 December 2018), online: <theconversation.com> [perma.cc/8ST7-F5K7].
-
[15]
Prêt pour l’IA, supra note 8 at 70.
-
[16]
Ibid (“at this point, no higher education institution is well equipped to furnish their students in the humanities, in social studies, in natural science, medicine or otherwise with the foundations and skills required for them to benefit from AI in their principal disciplines” at 71).
-
[17]
Christina Pazzanese, “Great Promise but Potential for Peril”, The Harvard Gazette (26 October 2020), online: <news.harvard.edu/gazette> [perma.cc/J3CB-C3MT]; Paola Ricaurte, “Ethics for the Majority World: AI and the Question of Violence at Scale” (2022) 44:4 Media, Culture & Society 726 at 727; Maximilian Kasy, “The Political Economy of AI: Towards Democratic Control of the Means of Prediction” (2023) Oxford Institute for New Economic Thinking, Working Paper No 2023/06, online: <journals.sagepub.com> [perma.cc/JT5A-8SML] at 1:
AI and machine learning are used in an ever wider array of socially consequential settings. This includes labour markets, education, criminal justice, health, banking, housing, as well as the curation of information by search engines, social networks, and recommender systems. There is a need for public debates about desirable directions of technical innovation, the use of technologies, and constraints to be imposed on technologies.
-
[18]
Teresa Scassa, “Canada’s Proposed AI & Data Act — Purpose and Application” (8 August 2022), online (blog): <teresascassa.ca> [perma.cc/8Y9T-BW3D]. This blog post highlights the law’s limitations with regard to its confined applicatory scope for federal institutions and “high impact systems.”
-
[19]
UNESCO, “Artificial Intelligence: Examples of Ethical Dilemmas” (21 April 2023), online: <unesco.org> [perma.cc/3NW4-HNGT]; Luciano Floridi, The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities (Oxford: Oxford University Press, 2023) at 6–10; Government of Canada, Ministry of Innovation, Science and Economic Development, “Code de conduite volontaire visant un développement et une gestion responsables des systèmes d’IA générative avancés” (September 2023), online: <ised-isde.canada.ca> [perma.cc/KCK6-PQNR] (“[t]he systems that are publicly accessible for a range of uses can pose health and safety risks, propagate prejudice and have wider societal repercussions, particularly when used by malicious perpetrators”).
-
[20]
Luciano Floridi, “Digital’s Cleaving Power and Its Consequences” (2017) 30 Philosophy & Tech 123 at 123.
-
[21]
Pieter Verdegem, “Dismantling AI Capitalism: The Commons as an Alternative to the Power Concentration of Big Tech” (2024) 39 AI & Society 727 at 727 [references omitted].
-
[22]
Jocelyn Maclure, “AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind” (2021) 31 Minds & Machines 421 at 422:
[U]nderstanding the reasons or causes that explain why an AI system x decided that y is the right decision or course of action is generally not possible. This is what is now called, often interchangeably, AI’s “black box,” “explainability,” “transparency,” “interpretability,” or “intelligibility” problem.
See also Amelia Fiske, Peter Henningsen & Alena Buyx, “Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology and Psychotherapy” (2019) 21:5 J Medical Internet Research 1 at 2:
Increasingly, artificially intelligent virtual and robotic agents are not only available for relatively low-level elements of mental health support, such as comfort or social interaction, but also perform high-level therapeutic interventions that used to be offered exclusively by highly trained, skilled health professionals such as psychotherapists. Importantly, such “virtual” or “robotic therapists” include an artificially intelligent algorithm that responds independently of any expert human guidance to the client or patient through a virtually embodied presence, such as a face icon, or a physically embodied presence, such as a robotic interface.
-
[23]
Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven, Conn: Yale University Press, 2021) [Crawford, Atlas of AI].
-
[24]
Kate Crawford, “Mining for Data: The Extractive Economy Behind AI”, The Green European Journal (13 June 2023), online: <greeneuropeanjournal.eu> [perma.cc/2YX4-S748] (“[t]his is causing a profound industrial reorganisation, where LLMs are not just a new interface, but the new medium through which we will receive and create information in the years to come. It is a very meaningful change, because it comes with a variety of technical and political questions”).
-
[25]
See generally Olga Akselrod, “How Artificial Intelligence Can Deepen Racial and Economic Inequalities”, American Civil Liberties Union (13 July 2021), online: <aclu.org> [perma.cc/6MDV-XXXS]; see also Chaka Chaka, “Digital Marginalization, Data Marginalization, and Algorithmic Exclusions: A Critical Southern Decolonial Approach to Datafication, Algorithms, and Digital Citizenship from the Souths” (2022) 18:3 J E-Learning & Knowledge Society 83 (“for societies in the Souths, especially Black, Indigenous and People of Color (BIPOC) communities, big data and datafication entail marginalization and exclusion from digital citizenship if data is deemed to be a passport to being a citizen in the digitally datafied world” at 84) [reference omitted].
-
[26]
Mark Fathi Massoud, “The Price of Positionality: Assessing the Benefits and Burdens of Self-Identification in Research Methods” (2022) 49:S1 JL & Soc’y S64 at S83; Heidi Siller & Nilüfer Aydin, “Using an Intersectional Lens on Vulnerability and Resilience in Minority and/or Marginalized Groups During the COVID-19 Pandemic: A Narrative Review” (2022) 13 Frontiers in Psychology 1 at 13.
-
[27]
For more on the problematization of attitudes of insensitivity, arrogance, and ignorance by researchers vis-à-vis their research objects, see Rashedur Chowdhury, “Misrepresentation of Marginalized Groups: A Critique of Epistemic Neocolonialism” (2023) 186 J Bus Ethics 553 at 557.
-
[28]
Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018) at 10; Anaelia Ovalle et al, “Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness” (Paper at the 2023 AAAI/ACM Conference on AI, Ethics and Society, Montreal, Quebec, 8-10 August 2023), online: <arxiv.org> [perma.cc/BRP7-FFZG] at 498:
When engaging with intersectionality in different (especially global) contexts, inquiry and praxis take different forms; consequently, one must practice epistemic, personal, and critical reflexivity to be cognizant of context, in order to effectively and holistically advance justice. In AI fairness, social context informs AI context through researcher training and background, model training and deployment, language choices, etc.
-
[29]
See generally Noble, supra note 28 at 16−17; Jennifer Raso, “Digital Border Infrastructure and the Search for Agencies of the State” (2024) McGill SGI Research Papers in Business, Finance, Law & Society, Working Paper No 2024-05, online: <papers.ssrn.com> [perma.cc/C6JR-TDRV] at 1; see also Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown Publishers, 2016) at ch 5; European Union Agency for Fundamental Rights, Bias in Algorithms: Artificial Intelligence and Discrimination (Luxembourg: Publications Office of the European Union, 2022) at 30; Valerio Capraro et al, “The Impact of Generative Artificial Intelligence on Socioeconomic Inequalities and Policy Making” (2024) 3:6 PNAS Nexus (“[w]hile generative AI could also offer expanded opportunities to countries in the ‘Global South’, it is unlikely to have much direct impact in the near term due to insufficient investment in prerequisite digital infrastructure, local researchers, and broader digital skills training” at 6); Anit Mukherjee & Lorrayne Porciuncula, “Inclusive Digitalization: Fostering a Global South Partnership” in Anit Mukherjee & Dhruva Jaishankar, eds, Rebalancing Globalization: Perspectives from the Global South (Washington, DC: ORF America, 2024) 103 at 106:
With the expanding global reach of technology platforms, private entities have gained significant normative power in global digital governance over the last decade. Technology giants such as Google, Amazon, Apple, Meta, and Microsoft, based in the United States, and Alibaba, Tencent, and ByteDance, based in China, shape digital markets by setting de facto global standards through their platforms and services, often exceeding the regulatory influence of states and international organizations. Their ability to control data flows, platforms, and user access gives them considerable influence over the digital economy’s trajectory.
-
[30]
Giorgia Marino, “Calculating and Powers: Interview with Kate Crawford”, Renewable Matter (16 January 2024), online: <renewablematter.eu> [perma.cc/5J98-AQQ5].
-
[31]
Prêt pour l’IA, supra note 8 (“[AI] could certainly help raise the productivity level of both workers and organizations, but in the absence of clear markers, it could also promote algorithmic management ill-suited to work (as, for example, in the case of so-called micro-surveillance), which would prove harmful to people’s well-being” at 33).
-
[32]
Center for Democracy & Technology, “Algorithmic Systems in Education: Incorporating Equity and Fairness When Using Student Data” (12 August 2019), online: <cdt.org> [perma.cc/UQ4P-7FVN].
-
[33]
Michael Da Silva, Colleen M Flood & Matthew Herder, “Regulation of Health-Related Artificial Intelligence in Medical Devices: The Canadian Story” (2022) 55:3 UBC L Rev 635 (exploring “concerns about AI violating privacy rights or the possibility that AI may entrench or create new unfounded biases such that historically marginalized groups continue to receive less or inappropriate care” at 637).
-
[34]
Valerie Schneider, “Locked Out by Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice” (2020) 52:1 Colum HRLR 251 (“[t]he problem with big data ... is that it must use existing data, which often reflects existing patterns of discrimination and this can perpetuate the unequal status quo” at 259); James A Allen, “The Color of Algorithms: An Analysis and Proposed Research Agenda for Deterring Algorithmic Redlining” (2019) 46:2 Fordham Urb LJ 219 (“[a]lgorithmic redlining and the original era of pencil redlining are synchronized in a crucial way: both result in the exclusion of minority and low-income members of society from access to adequate housing” at 223).
-
[35]
UK, AI Safety Summit, “The Bletchley Declaration by Countries Attending the AI Safety Summit”, Policy Paper (London: AI Safety Summit, 1 November 2023), online: <gov.uk> [perma.cc/3HLR-62G3] (the “recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed” at para 3).
-
[36]
Much of the debate goes back to Alan Turing’s paper. See AM Turing, “Computing Machinery and Intelligence” (1950) 59:236 Mind 433 (“[t]he original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” at 442); see also Anne Gerdes & Peter Øhrstrøm, “Issues in Robot Ethics Seen Through the Lens of a Moral Turing Test” (2015) 13:2 J Information Communications & Ethics in Society 98 (“[t]he [Moral Turing Test] questions whether a robot (or a computer system) acts at least according to the ethical standards that are normally considered acceptable in human society. It is important to point out that the development of a system that can pass the [test] will only be one early step towards producing an artificial moral agent” at 99).
-
[37]
Jason Borenstein & Ayanna Howard, “Emerging Challenges in AI and the Need for AI Education” (2021) 1 AI & Ethics 61 (“now we see a rise in the use of these tools by industry, government, and even academic institutions as they deploy AI algorithms to make decisions that alter our lives in direct, and potentially detrimental, ways” at 61).
-
[38]
World Health Organization, “The Impact of COVID-19 on Mental Health Cannot Be Made Light of” (16 June 2022), online: <who.int> [perma.cc/SSD5-PRF8] (“[w]hile mental health needs have risen, mental health services have been severely disrupted” at para 4).
-
[39]
Jonathan Crary, Scorched Earth: Beyond the Digital Age to a Post-Capitalist World (New York: Verso Books, 2022) at 2.
-
[40]
Deborah Lupton, “How do Data Come to Matter? Living and Becoming With Personal Data” (2018) 5:2 Big Data & Society 1; Marco Gui & Moritz Büchi, “From Use to Overuse: Digital Inequality in the Age of Communication Abundance” (2021) 39:1 Soc Science Computer Rev 3 (“[a]n unintended consequence of increasing digitization and the permeation of digital communication in public, private, and professional activities are feelings of communication overload and information and communication technology (ICT) overuse” at 3).
-
[41]
Wally Smith & Greg Wadley, “Why Am I Online? Research Shows It’s Often About Managing Emotions”, The Conversation (16 July 2023), online: <theconversation.com> [perma.cc/F4M8-BL27]; Kim M Caudwell, “Spending Too Much Time on Social Media and Doomscrolling? The Problem Might Be FOMO”, The Conversation (28 May 2024), online: <theconversation.com> [perma.cc/N7KK-KPZQ]; Laura Salisbury, “On Not Being Able to Read: Doomscrolling and Anxiety in Pandemic Times” (2023) 37:6 Textual Practice 887 (“[o]ne key mental and somatic experience of the waiting time of the COVID-19 pandemic, but that has also has seemed (sic) insistent in the collective mental life of late liberalism over many years, has been anxiety” at 889).
-
[42]
Bhakti Sharma, Susanna S Lee & Benjamin K Johnson, “The Dark at the End of the Tunnel: Doomscrolling on Social Media Feeds” (2022) 3:1 Tech, Mind, & Behavior 1 at 1; Annette N Markham, “Pattern Recognition: Using Rocks, Wind, Water, Anxiety, and Doom Scrolling in a Slow Apocalypse (To Learn More About Methods for Changing the World)” (2021) 27:7 Qualitative Inquiry 914 (“[s]ome are calling it ‘The Great Pause’ and this makes sense. But today’s ‘pause’ is not a slowing down or a waiting. It’s a slow drowning by inertia” at 914).
-
[43]
Verdegem, supra note 21 at 729; Kerrin Artemis Jacobs, “Digital Loneliness — Changes of Social Recognition through AI Companions” (2024) 6 Frontiers in Digital Health 1 (“[i]nteractions that cannot reproduce the respective forms of social inclusiveness and integration can be assessed not only as potentially unethical and/or legally problematic, but also as contributing to a perpetuation of a social malpractice that alienates people and defines the status of loneliness as a socially precarious condition” at 4).
-
[44]
Nick Couldry & Ulises Ali Mejias, “The Decolonial Turn in Data and Technology Research: What Is at Stake and Where Is It Heading?” (2023) 26:4 Information, Communication & Society 786 at 787; Verdegem, supra note 21 at 730–31.
-
[45]
Konrad Schlick, “Legal Frameworks for Data Governance: Tackling Algorithmic Bias and Discrimination in the Global Economy” [unpublished, archived at ResearchGate], online: <researchgate.net> [perma.cc/ZDZ2-H8BY] at 1.
-
[46]
Sylvia Lu, “Data Privacy, Human Rights, and Algorithmic Opacity” (2022) 110 Cal L Rev 2087 at 2090:
In many cases, AI systems have become the power behind the throne—they lurk in the background, yet make crucial decisions through predictive analysis of personal data. Firms have used AI to decide what should be seen in online search results or even who should be given employment opportunities, and they do so at the cost of data privacy.
See also Peter K Yu, “The Algorithmic Divide and Equality in the Age of Artificial Intelligence” (2020) 72:2 Florida L Rev 331 (“an emerging and ever-widening ‘algorithmic divide’ now threatens to take away the many political, social, economic, cultural, educational, and career opportunities provided by machine learning and artificial intelligence” at 331).
-
[47]
For a discussion of the concerns of AI’s impact on democratic processes, see Risse, supra note 6 at 59–62. See also Ben Buchanan & Andrew Imbrie, The New Fire: War, Peace, and Democracy in the Age of AI (Cambridge, Mass: MIT Press, 2022); Peer Zumbansen, “Runaway Train? Decentralized Finance and the Myth of the Private Platform Economy” (2023) 14:4 Transnational Leg Theory 413.
-
[48]
Viktor Dörfler & Giles Cuthbert, “Dubito Ergo Sum: Exploring AI Ethics” (Paper delivered at the 57th Hawaii International Conference on System Sciences, Honolulu, Hawaii, 2024), online: <philarchive.org> [perma.cc/22NK-XYDT] (“[i]mportantly for AI ethics, if doubt seem to be essential for our moral judgments, what are the implications of doubt-less AI?” at 5590–93).
-
[49]
Norbert Wiener, “Some Moral and Technical Consequences of Automation” (1960) 131:3410 Science 1355 at 1358.
-
[50]
Vincent C Müller, “Ethics of Artificial Intelligence and Robotics” in Edward N Zalta & Uri Nodelman, eds, Stanford Encyclopedia of Philosophy, online: <plato.stanford.edu> [perma.cc/75XR-SX8P] (“AI somehow gets closer to our skin than other technologies ... Perhaps this is because the project of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings”). See also Dalton Delan, “Computer, Ergo Sum: Reaching for, but Not Yet Grasping, the ‘First Principle’ of AI Minds” The Berkshire Eagle (2 February 2024), online: <berkshireeagle.com> [perma.cc/24NG-JSX7].
-
[51]
Aaron Martin et al, “Digitisation and Sovereignty in Humanitarian Space: Technologies, Territories, and Tensions” (2023) 28:3 Geopolitics 1362 at 1365; Jasmina Tacheva & Srividya Ramasubramanian, “AI Empire: Unraveling the Interlocking Systems of Oppression in Generative AI’s Global Order” (2023) 10:2 Big Data & Society 1 (“the interlocking roots of AI Empire are deeply steeped in heteropatriarchy, racial capitalism, white supremacy, and coloniality. Just as AI Empire is distributed, networked, and intersectional, so too are the struggles people, communities, and coalitions have been waging against its dominance” at 2); Louise Amoore, “The Deep Border” (2024) 109 Political Geography 1 at 3.
-
[52]
Crawford, Atlas of AI, supra note 23 at ch 6; Shoshana Zuboff, “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization” (2015) 30:1 J Information Tech 75 at 83 [Zuboff, “Big Other”]; Shoshana Zuboff, “Surveillance Capitalism or Democracy? The Death March of Institutional Orders and the Politics of Knowledge in Our Information Civilization” (2022) 3:3 Organization Theory 1 at 3–4. See also Mike Zajko, “Artificial Intelligence, Algorithms, and Social Inequality: Sociological Contributions to Contemporary Debates” (2022) 16:3 Sociology Compass 1 at 2–3 [references omitted]:
[T]he disposition of automated systems to uphold the existing social order has been described as a ‘conservative’ tendency, but among AI researchers and developers, the general understanding is that people (or society) exhibit various ‘biases’ which are then reproduced in automated systems. This is particularly the case for today’s dominant forms of AI or ‘machine learning’ algorithms, which must be ‘trained’ using datasets that reflect human judgments, priorities, and conceptual categories.
-
[53]
Julie E Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (New York: Oxford University Press, 2019) at 203–04; see also Marion Fourcade & Fleur Johns, “Loops, Ladders and Links: The Recursivity of Social and Machine Learning” (2020) 49:5/6 Theory & Society 803 (“[w]hat ... is the glue that holds things together at the automated interface of online and offline lives? What kind of subjectivities and relations manifest on and around social network sites, for instance?” at 804).
-
[54]
Algorithm Watch, “The Algorithmic Administration” (13 May 2024), online (blog): <algorithmwatch.org> [perma.cc/E6WN-7A9K] (“[a]ssessing algorithmic systems’ impact must start with transparency measures—if only to enable those affected to defend themselves against automated decisions. We often don’t even know if authorities leave decisions to algorithms”); see generally Hanne Hirvonen, “Just Accountability Structures — A Way to Promote the Safe Use of Automated Decision-making in the Public Sector” (2024) 39 AI & Society 155; Cary Coglianese & David Lehr, “Transparency and Algorithmic Governance” (2019) 71:1 Admin L Rev 1; Rowena Rodrigues, “Legal and Human Rights Issues of AI: Gaps, Challenges, and Vulnerabilities” (2020) 4 J Responsible Tech 1 at 2; Hannah Bloch-Wehba, “Algorithmic Governance from the Bottom Up” (2022) 48:1 BYUL Rev 69 (“[a] cottage industry of technologies and techniques—biometric surveillance, license plate readers, predictive policing, and social media monitoring, to name just a handful—are transforming law enforcement and expanding its capacity” at 72).
-
[55]
Katharina Pistor, “Statehood in the Digital Age” (2020) 27:1 Constellations 3 at 9; see also Marieke de Goede, “Finance/Security Infrastructures” (2021) 28:2 Rev Intl Political Economy 351 (“[r]ather than the mundane plumbing for global financial transactions or the background-stage for high power politics, financial market infrastructures are inscribed with politics and global inequities from their very beginnings” at 353).
-
[56]
Jathan Sadowski, “When Data Is Capital: Datafication, Accumulation, and Extraction” (2019) 6:1 Big Data & Society 1 (“data ... is a core component of political economy in the 21st century” at 1); Christopher W Chagnon et al, “From Extractivism to Global Extractivism: The Evolution of an Organizing Concept” (2022) 49:4 J Peasant Studies 760 at 775.
-
[57]
Ken-Hou Lin & Donald Tomaskovic-Devey, “Financialization and U.S. Income Inequality, 1970−2008” (2013) 118:5 Am J Sociology 1284 at 1286−87; Kean Birch, DT Cochrane & Callum Ward, “Data as Asset? The Measurement, Governance, and Valuation of Digital Personal Data by Big Tech” (2021) 8:1 Big Data & Society 1 at 2; Thomas Beauvisage & Kevin Mellet, “Datassets: Assetizing and Marketizing Personal Data” in Kean Birch & Fabian Muniesa, eds, Assetization: Turning Things into Assets in Technoscientific Capitalism (Cambridge, Mass: MIT Press, 2020) 75 (“[t]o extract a share of this new value, a part of the digital economy considered personal data as a commodity, a standard elementary tradable good like oil, close to a currency” at 79).
-
[58]
Zuboff, “Big Other”, supra note 52 at 81; Ute Tellmann, Veit Braun & Barbara Brandl, “The Challenges of Assets: Anatomy of an Economic Form” (2024) 53:1 Economy & Society 1 (“[t]he increasing importance of rent—in both senses—is the most visible evidence of assetization in everyday life. We no longer purchase things but instead acquire rights of use against fees whilst all manner of goods and services are scrutinized in terms of their potential for transformation into titles for future payment streams” at 2) [references omitted].
-
[59]
Couldry & Mejias, supra note 44 at 787; Tacheva & Ramasubramanian, supra note 51 at 3.
-
[60]
Jakob Madsen & Holger Strulik, “Technological Change and Inequality in the Very Long Run” (2020) 129 European Econ Rev 1 at 10, 23; Anton Korinek, Martin Schindler & Joseph E Stiglitz, “Technological Progress, Artificial Intelligence, and Inclusive Growth” (2021) International Monetary Fund, Working Paper No 21/166 (“[f]or many decades, there was a presumption that advances in technology would benefit all—embodied by the trickle-down dogma that characterized neoliberalism. And for some time, this presumption was in fact justified. ... However, over the past half-century, output growth and median worker incomes started to decouple” at 4).
-
[61]
See generally Kevin Skerrett, “Pension Funds, Privatization, and the Limits to ‘Workers Capital’” (2018) 99:1 Studies in Political Economy 20; Kevin Skerrett et al, eds, The Contradictions of Pension Fund Capitalism (Champaign, Ill: University of Illinois at Urbana-Champaign for the Labor and Employment Relations Association, 2017) at 20–21; see Doug Henwood, After the New Economy (New York: New Press, 2003) at 1.
-
[62]
Leader, “The Revolt Against Regulation”, The Economist (1 February 2025) (“Done right, the anti-red-tape revolution could usher in greater freedom, faster economic growth, lower prices and new technology” at 11).
-
[63]
See also Mariano-Florentino Cuéllar & Aziz Z Huq, “The Democratic Regulation of Artificial Intelligence” in Glenn Bass, ed, Data and Democracy (New York: Knight First Amendment Institute at Columbia University, 2022) (“we need to decide what it is that a democratic system should focus upon when intervening in AI systems” at 17).
-
[64]
Fleur Johns, #Help: Digital Humanitarianism and the Remaking of the International Order (New York: Oxford University Press, 2023) at 5 [footnote omitted].
-
[65]
Ibid at 7; see also Eleanor Bird et al, The Ethics of Artificial Intelligence: Issues and Initiatives (Brussels: European Union, 2020) at 7:
For a physical robot its environment is the real world, which can be a human environment (for social robots), a city street (for an autonomous vehicle), a care home or hospital (for a care or assisted living robot), or a workplace (for a workmate robot). The ‘environment’ of a software AI is its context, which might be clinical (for a medical diagnosis AI), or a public space—for face recognition in airports, for instance, or virtual for face recognition in social media. But, like physical robots, software AIs almost always interact with humans, whether via question and answer interfaces: via text for chatbots, or via speech for digital assistants on mobile phones (i.e. Siri) or in the home (i.e. Alexa).
-
[66]
Mohammed Ayoob, “Third World Perspectives on Humanitarian Intervention and International Administration” (2004) 10:1 Global Governance 99; Mahmood Mamdani, “Responsibility to Protect or Right to Punish?” (2010) 4:1 J Intervention & Statebuilding 53.
-
[67]
Anne Orford, “Muscular Humanitarianism: Reading the Narratives of the New Interventionism” (1999) 10:4 Eur J Intl L 679 at 682, 709–10; Martti Koskenniemi, “‘The Lady Doth Protest Too Much’: Kosovo, and the Turn to Ethics in International Law” (2002) 65:2 Mod L Rev 159 at 172–73.
-
[68]
Mirca Madianou, “Technocolonialism: Digital Innovation and Data Practices in the Humanitarian Response to Refugee Crises” (2019) 5:3 Soc Media + Society 1 (“[d]atafication—the quantification of processes that were previously experienced qualitatively—and digitization are combined with increasing marketization, professionalization, pressure for humanitarian accountability, and, crucially, the dynamic entry of the private sector in the humanitarian field” at 2).
-
[69]
Ibid at 4.
-
[70]
Cuéllar & Huq, supra note 63 (“[w]hat then does it mean to regulate AI systems, particularly given the vast range of possible types and levels of intensity of interventions in related domains, ranging from public health to environmental protection?” at 20).
-
[71]
Supra note 29 at 4.
-
[72]
S Kate Devitt et al, “Developing a Trusted Human-AI Network for Humanitarian Benefit” (2023) 4:1/2/3 Digital War 1; Mayra Margffoy, “AI for Humanitarians: A Conversation on the Hope, the Hype, the Future”, The New Humanitarian (5 September 2023), online: <thenewhumanitarian.org> [perma.cc/R5FF-RGE2] (“[w]e’re putting a lot of attention and emphasis into making sure that humans are continuously a part of this”).
-
[73]
George A Bermann, “The Future of International Commercial Arbitration” in CL Lim, ed, The Cambridge Companion to International Arbitration (Cambridge, UK: Cambridge University Press, 2021) 138 at 145; Ryan Abbott & Brinson S Elliott, “Putting the Artificial Intelligence in Alternative Dispute Resolution: How AI Rules Will Become ADR Rules” (2023) 4:3 Amicus Curiae 685 at 699; Derick H Lindquist & Ylli Dautaj, “AI in International Arbitration: Need for the Human Touch” (2021) 2021:1 J Disp Resol 39 at 40, 45–47.
-
[74]
Nadiyah J Humber, “A Home for Digital Equity: Algorithmic Redlining and Property Technology” (2023) 111 Cal L Rev 1421 at 1439–41; Emmanuel Martinez & Lauren Kirchner, “The Secret Bias Hidden in Mortgage-Approval Algorithms”, The Markup (25 August 2021), online: <themarkup.org> [perma.cc/MM5Q-RA2K]; Stefan Larsson, James Merricks White & Claire Ingram Bogusz, “The Artificial Recruiter: Risks of Discrimination in Employers’ Use of AI and Automated Decision-Making” (2024) 12 Soc Inclusion 1 (“[r]esearch shows that AI and ADM are increasingly used in worker recruitment, specifically through LinkedIn” at 2); Núria Vallès-Peris & Júlia Pareto, “Artificial Intelligence as a Mode of Ordering: Automated Decision Making in Primary Care” (2024) Information, Communication & Society 1.
-
[75]
Ari Waldman & Kirsten Martin, “Governing Algorithmic Decisions: The Role of Decision Importance and Governance on Perceived Legitimacy of Algorithmic Decisions” (2022) 9:1 Big Data & Society 1 (“[t]ransparency is not a sufficient governance model for algorithmic decision-making, countering arguments for greater transparency as a governance solution. ... this study reinforces the urgent need to develop governance structures before algorithmic decision-making becomes omnipresent” at 12); see also Bloch-Wehba, supra note 54 (“[a]lgorithmic governance forges ahead without the consent of those whom it most profoundly affects, whether they are members of affected communities or workers asked to build oppressive technology. And, once adopted, algorithmic mechanisms reinforce existing hierarchies and justify continuing disparities” at 74) [footnote omitted].
-
[76]
Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation” (2018) 31:2 Harv JL & Tech 889 at 901; Thomas Wischmeyer, “Artificial Intelligence and Transparency: Opening the Black Box” in Thomas Wischmeyer & Timo Rademacher, eds, Regulating Artificial Intelligence (Cham: Springer, 2020) 75 at paras 3, 13.
-
[77]
Tanja A Börzel & Thomas Risse, “Governance Without a State: Can It Work?” (2010) 4:2 Regulation & Governance 113 (“[t]he participation of non-state actors in public policymaking was supposed to improve both the quality of public policies and the effectiveness of their implementation since rule addressees could bring in their expertise and their interests” at 113).
-
[78]
See generally Bird et al, supra note 65.