Article body

I. Artificial Intelligence as Epistemological and Political Challenge

The debate around artificial intelligence (AI) has become so multifaceted and multidimensional that an effort to situate a particular sector’s or discipline’s place within it requires several steps of stock-taking, contextualization, and translation across and between different rationalities. Substantive strands of discussion have addressed the technical aspects of machine learning, coding, and automated decision-making (ADM), while other debates focus on the implications of AI-driven problem-solving and organizing for questions of political authority, legitimacy, and privacy.[1] Still other discussions have highlighted the vast scope of AI technology’s application across private and public sector services and institutions, which raises complex issues of transparency and accountability. Notwithstanding these differentiations, it is the ubiquity of AI and the assumption that its growth and expansion are inevitable which continue to prompt questions about the normative, ethical significance of AI—not only in relation to modes of societal governance, but also to human nature itself.

As is typical in moments of high stakes and heightened complexity, one tends either to oversimplify in the name of rendering something “manageable” or to run away and lose oneself in hyperbole or outright delusion.[2] Echoing the burgeoning variety of views, fears, hopes, and doom pronouncements around AI, political scientists studying the implications of climate change have observed “a striking tension between attempts to depoliticize climate change by referencing science-based trajectories and technological fixes, while at the same time, social movements and other political actors openly politicize climate change by relating it to issues of justice, societal struggles, and political order.”[3] As for law, the obstacles are equally overwhelming: they reveal the challenge of how to relate legal instruments to a new quality of tech-driven forms of contracting, service delivery, surveillance, and information storage, and they prompt lawyers to reflect on the nature and adequacy of law itself in the presence of artificial intelligence. As Arnaud Sée notes: “Regulation itself is not so much a question of law but one regarding the discourse around law.”[4]

It is on this scale that AI in many ways overlaps and intersects with other systemic challenges the law faces. A key one to consider here is climate change, which presents humanity with arguably overwhelming questions with regard to macro-economic policy-making; the regulation of natural resource extraction, manufacturing, and transport; as well as individual and collective consumer behavior and education.[5] In very similar, comprehensive ways, AI too must be understood as a political and epistemological challenge, sharing with climate change many of its characteristics in terms of technical and normative complexity.[6] Both rank among the gravest existential challenges humanity has faced. The following reflections remain preliminary and may merely draw out, in broad strokes, some of the normative and epistemological implications of the type of specific investigations at the heart of the contributions to this issue of the McGill Law Journal—and within the wider research on AI—as it continues to proliferate at breathtaking speed.[7]

A. Cogito Ergo Sum in the Age of Artificial Intelligence

In writing the preamble to the Prêt pour l’IA report (Report)—authored by Sarah Gagnon-Turcotte and Réjean Tremblay and published in Montreal in January 2024—Luc Sirois, the head of Quebec’s Innovation Council (Council), emphasized that artificial intelligence cannot evolve outside a legal frame.[8] The comprehensive research, which constitutes the basis of the Report, was conducted over nine months and incorporates input and expertise from more than 250 participants and contributors. The Report offers a wealth of pertinent observations which complement a growing body of tangible policy work and which have the immediate benefit of concreteness and applicability. Given the complexity of the Council’s findings, no single recommendation is allotted more prominence than another. Even so, Sirois’s suggestion to understand the study as only the beginning of a larger reflection about our future with AI[9] can serve as a first key to unlocking the potential of the Report, which also touches on the concrete economic benefits of public—including provincial—investments in AI.[10] The ambitious four pillars on which that reflection should rest illustrate the stakes of this undertaking. Hereto, the Council highlights the crucial importance of a) mapping the densifying landscape of AI applications, b) enhancing and improving education around the uses and challenges of AI, c) intensifying research into AI, and d) securing Quebec’s commitment to support the continued digitization of public services; for example, in the health and transportation sectors.[11]

The notion of “encadrement”—framing—is the Report’s red thread and arguably drives its core intervention. It gains even more importance in light of the Report’s insistence on the interdisciplinary character of AI and the ensuing need for governmental support to facilitate and strengthen interdisciplinary collaborations between academics, industry partners, experts, and civil society members.[12] Prêt pour l’IA pursues the creation of a robust and explicitly legal framework for artificial intelligence while making explicit reference to the widely noted “Montreal Declaration for a Responsible Development of Artificial Intelligence,” announced on 3 November 2017.[13] The latter has since been regarded as an important milestone in arguing for the need to understand the engagement with AI as a collective, societal, and democratic challenge.[14] Reiterating AI’s distinctly interdisciplinary nature,[15] the Report provides an excellent map of existing and emerging AI-focused legislation, institutional initiatives, and investment-related advances on the provincial and federal levels, as well as in the United States and the European Union. Even so, its findings are not always reassuring, especially when it comes to the existing deficiencies with regard to a robust, cross-disciplinary, and multi-departmental culture of research and education on AI.[16]

As has been the case with the “Montreal Declaration,” the Report contributes to a now global debate around “AI ethics,” a debate which, given the potentially infinite range of issues under consideration, is in and of itself a precarious and volatile undertaking. With AI assuming an ever-growing role in both public and private decision-making processes, the normative implications of AI ethics are significant. An engagement with the ethics of AI-driven governance requires a close scrutiny of the political economy of how power is allocated, used, and held accountable.[17] In aspiring to provide policy-makers and regulators with concrete, tangible, and thoroughly scrutinized recommendations, AI ethics must build on ongoing, critical, and insightful assessments of an evolving AI governance landscape.[18] Due to AI applications’ enormous impact on all aspects of public and private life, there is no “light touch” approach to engaging with AI and its significance.[19] As Luciano Floridi puts it:

The digital “cuts and pastes” reality, in the sense that it couples, decouples, or recouples features of the world—and therefore our corresponding assumptions about them—which we never thought could be anything but indivisible and unchangeable. It splits apart and fuses the “atoms” of our experience and culture, so to speak. It changes the bed of the river, to use a Wittgensteinian metaphor.[20]

This suggests that what continues to be needed are ambitious and bold investigations into the ethical questions that a transformative and arguably unlimited technology such as AI presents us with. As Pieter Verdegem recently remarked:

The confluence of factors—the availability of powerful computing capacity, new techniques in machine/deep learning leading to more sophisticated algorithms and the growing availability of data with which to train these algorithms—enable AI to be deployed far more extensively. AI now seems ready to have a deep impact on our society and economy.[21]

As machine learning technology continues to advance, questions regarding transparency, accountability, or even “explainability” will only become more pressing.[22] Similarly, Kate Crawford, author of the illuminating study Atlas of AI (2022),[23] argues that large language models should be seen as the most important technological innovation since the World Wide Web.[24]

B. Voice and Agency in AI Discourses

With that, another problem presents itself. The taken-for-granted first person in proliferating policy lectures, white papers, and reports on the risks and benefits of AI wrongly assumes voice and agency for those who have either never been on the radar, or are otherwise intentionally excluded from the deliberative discourse universe which many AI and AI-ethics discussions presuppose.[25] Unpacking the silencing uses of “we” in these and similar discussions[26] requires a critical inquiry into the speakers’ and spokespersons’ unexamined positionalities.[27] In much of the literature on burgeoning AI applications and their associated ethical challenges, there is a habitual proposition of a universal, all-inclusive vantage point in terms of speaking of a “we,” “our future,” or even—more hyperbolically still—“the future of humanity.” Such assertions repeat long-standing practices of marginalization and exclusion.[28] The uncritical use of “we” and “us” reveals a notion and preconception of neutrality and universality that stands in stark contrast to AI’s highly uneven interventions in different communities.[29] In a recent interview, Crawford underlined that:

AI systems are profoundly material. But the imaginaries of AI are ethereal: visions of immaterial code, abstract mathematics, and algorithms in the cloud. On the contrary, AI is made of minerals, energy, and vast amounts of water. Data centers generate immense heat that contributes to climate change and the construction and maintenance of AI depends on underpaid labor in the Global South.[30]

The sheer scope of AI’s broad societal impact poses significant challenges in terms of where to direct any ethical demands. While such demands are alive in philosophical and policy discussions, they simultaneously shape concrete and tangible interventions in the spheres of work and production,[31] education,[32] health,[33] and housing,[34] to name just a few.[35] Such concerns echo the prevailing sentiment of destabilizing personal fatigue which feeds into a state of collective disillusionment and apathy with regard to shrinking prospects of a more equitable and sustainable future. It therefore does not come as a surprise that ethical anxieties already accompanied the very first iterations of AI-related machine learning,[36] and seem to be expanding today in breathtaking tandem with the staggering proliferation of AI applications.[37] It is wise to take seriously the mental health costs of private lives increasingly shaped by AI, costs that were significantly aggravated during the pandemic.[38] As Jonathan Crary writes:

For the majority of the earth’s population on whom it has been imposed, the internet complex is the implacable engine of addiction, loneliness, false hopes, cruelty, psychosis, indebtedness, squandered life, the corrosion of memory, and social disintegration. All of its touted benefits are rendered irrelevant or secondary by its injurious and sociocidal impacts.[39]

While on a colloquial, quotidian level, everyone speaks about “too much screen time,” “listening phones,” and the sheer ubiquity of data-collecting devices, the digitalization of human interaction and the ways we spend our time online have long become the topic of intense scholarly inquiry.[40] The global COVID-19 pandemic amplified and aggravated already existing trends towards screen addiction, isolation, and alienation.[41] “Doomscrolling,” a term “coined in 2018 ... [referring] to a state of media use typically characterized as individuals persistently scrolling through their social media newsfeeds with an obsessive focus on distressing, depressing, or otherwise negative information,” captures a particularly dark place in people’s experience of their warped interactions with others while persistently on a screen.[42] AI has played, and will continue to play, a key role in the constitution of such dark places, contributing to a time- and energy-devouring immersion in accelerating moving images and information in a context of “digital capitalism.”[43] As critical data scholars have argued, the key in this stage of economic development is the profound degree of data extractivism, which unfolds through the practically unlimited mining of personal data by technological means and its use for a wide variety of commercial, military, and security applications.[44] The deep penetration of digital technology into every aspect of people’s lives, including “finance, healthcare, criminal justice, and hiring,”[45] has become a defining feature not only of the economic system, but of the totality of social relations as such, raising particular fears around privacy protection and AI’s unevenly distributed socioeconomic benefits.[46]

II. Computer Ergo Sum

AI applications themselves prompt intellectual and emotional responses ranging from ignorance to consternation, fear to terror, and wonder to renewed religious belief.[47] In all that, AI appears to break down the boundaries on which much of Western, post-Cartesian human understanding has come to rest—cogito ergo sum.[48] As Norbert Wiener dryly remarked in his famous 1960 essay:

Disastrous results are to be expected not merely in the world of fairy tales but in the real world wherever two agencies essentially foreign to each other are coupled in the attempt to achieve a common purpose. If the communication between these two agencies as to the nature of this purpose is incomplete, it must only be expected that the results of this cooperation will be unsatisfactory.[49]

Since its inception, AI has not only challenged and—increasingly believably—threatened to undermine the differences between human and robot, but it has also turned human-based conceptions of autonomy and (rational, ethical, and accountable) decision-making on their head.[50]

What results is more than a technology in the sense of an array of instruments, however complex. Rather, AI emerges as a spatialization of processes of human-non-human interaction, as well as machine-machine interaction, in which the question of “who is in charge” becomes one of pressing epistemological and political urgency. Spatialization here refers to the creation—through technological means—of spaces which exist independently of, say, legal jurisdictional or politically defined territorial boundaries. The encroachment of AI into different spaces—for example, in the areas of delivery and administration of humanitarian aid—raises deep concerns regarding the transparency and accountability of power exercised within these spaces.[51] These concerns have been addressed with growing intensity by a wide range of humanities scholars who remain skeptical of pronouncements that democratic practices can survive AI unscathed.[52]

A. AI: For, Against, and (Used) by Lawyers—the “Framing” Prerogative

For lawyers, finding answers to this challenge is an urgent concern as the AI-driven and AI-based processes of ADM constitute a governance regime that arguably eludes many well-tested conceptual and doctrinal approaches to problems of both authority and legitimacy.[53] A decisive element here is the much lamented and yet by now deeply entrenched shift from public to private agencies for a growing (and, perhaps, uncontainable) range of services and institutional processes. While the use of algorithmic governance in the form of automated decision-making and other uses of AI poses distinct problems of review and accountability, these challenges are exacerbated in the private sector, not least because the actual locus of decision-making power can be harder to identify than in a formally structured, public institutional infrastructure.[54] The emerging spaces of AI cut across legal and political boundaries and challenge existing understandings of political authority and democratic legitimacy. As Katharina Pistor argues:

[D]igital power needs people to produce the raw material on which its power rests, but is less dependent on territory. It is exercised not through physical coercion, but by surveilling and shaping the behavior of individuals and groups indirectly. In doing so, digital power benefits from information asymmetries between the data harvesters and their clients on one hand, and the data producers who also serve as targeted customers, on the other.[55]

In fact, it is a political economy lens that renders visible both the continuity and the amplification at work in the application of AI to functions of economic and financial governance.[56] For example, what comes to the surface are glimpses of the deeply transformative dynamics of assetization, the roots of which lie in the all-encompassing financialization unleashed since the 1970s,[57] whereby any good or service, and the agents themselves—along with their state of health, income, and future prospects—eventually became data points for an insatiable information-processing and value-extracting infrastructure.[58]

B. Financialization as Historical Inheritance and Condition

It follows that when one speaks of the challenges arising from AI for law, it is imperative to acknowledge the difficulty of doing justice to AI as a distinct realm of technological innovation. Additionally, it is crucial to assess both its integration into continuing adaptations of production, governance, and information processes and its disruption of democratic processes of deliberation and accountability. To again reference the Quebec Report of January 2024, it is the encadrement—that is, the framing of AI, its evolution, and its applications—that the Report’s authors argue must be approached as part of a wider process of critical engagement with an economy which over time has structurally entrenched pre-existing and privileged positions of power and access.

It is hard to imagine separating, let alone desirable to separate, questions regarding the ethics of AI or, even more generally, the future of AI from those that are being put forward by those who have consistently—and with increasing rather than decreasing intensity—been marginalized and excluded from dynamics of “growth” and prosperity.[59] AI, therefore, ought not to be seen as a distinct or novel problem which could be addressed in a timeless or context-less space. Instead, as has been shown with regard to the contentious relations between technological progress and socio-economic, cultural, and political evolution, neither can be understood in isolation from the other.[60] Much suggests that for the unpacking and interrogating of the normative justifications for the inegalitarian and exclusionary (as well as unsustainable) infrastructure to be transformative, the encadrement of AI through law should not be separated from a critical engagement with the normative and institutional universe in which it continues to evolve. It is striking to what degree the praise of AI’s “achievements,” as well as its innovativeness and promise, echoes the arguments that have supported neoliberal policies for the individualization, responsibilization, and commodification of welfare state citizenship for decades.

All of these arose on the back of the deep-reaching financialization of public and private goods and their commodification, which turned everything and everyone into an asset and subjected things and people to the logics of the market. With that, whatever remained of progressive aspirations for the political governance of economic and financial transactions eventually ran up against the systemic privatization of public goods and, with it, the demonization of a state-based politics of economic governance.[61] It is a tragicomic déjà-vu that, as recently as February 2025, The Economist would lead with the headline “The Revolt Against Regulation,” providing a sobering reminder that the age of dismissing democratically made rules and regulations as “red tape” is not over.[62]

Where AI fits in within this volatile political climate is anyone’s guess. The fast proliferation of AI-based applications that we can currently observe marks by all accounts a pivotal moment in history. It coincides with a deep concern over the unsustainable costs of growth economics to humans and the environment, and over the deterioration of the social fabric in many societies after decades of shrinking public services. The use of AI in this environment is by no means a merely technical question. Rather, AI functions as an additional trigger for a continuing, critical engagement with the political economy of democratic and sustainable governance today.[63]

C. AI’s Jurisdiction? AI’s Ability to Create Its Own Space for Decision-making

In closing these brief reflections, it is helpful to remind ourselves of another facet of AI’s seemingly irreversible takeover of even those functions that would habitually be understood as being at the core of human-based political and ethical decision-making. As Fleur Johns highlights in her compelling new book, the word humanitarian “has been used to characterize everything from the use of military force, practices of military targeting, and the policing of human movement to the delivery of food, healthcare, and other emergency relief, the maintenance of refugee camps, efforts to promote democracy, and much more.”[64] In essence, she writes:

Digital humanitarianism is oriented toward the creation and maintenance of feedback loops designed to transmit signals of scarcity, profusion, need, and capacity among a range of human and nonhuman referents. Digital humanitarian activity aims to make accessible an incessant stream of digital output on or from the world in a format that is readable as “a surface of pure actuality.”[65]

Long predating the practice of humanitarian intervention, with its highly contested justifications and repercussions in recent decades,[66] the field of humanitarian aid highlights the intersections between changing forms and instruments of “assistance,” international cooperation, and privatization of public functions. However, it also spotlights clashes between deep-seated power asymmetries and neo-colonial interventionism in the name of human rights.[67] As such, in analyzing the contemporary practice of humanitarian aid, one effectively engages in carving out the contours of human and non-human agency as it is applied not only to the concrete delivery of a service on the ground, but also to its preparation, triage, and execution.[68] In an enlightening analysis of the role of private data sourcing companies in the humanitarian aid space, Mirca Madianou has recently noted that while:

Big data are seen as representative of the voice of affected people despite significant critiques about the epistemological, ontological, and ethical limitations of crisis data ... big data during disasters often exclude data from those most affected by a crisis, therefore reproducing inequalities. The lack of representativeness and the presence of temporal and other bias render the use of big data during emergencies potentially harmful.[69]

This example further highlights the stakes of AI applications in sensitive social and political fields. As reiterated by Mariano-Florentino Cuéllar and Aziz Huq in their 2022 essay on democratic AI regulation, it is far from evident at this point how to formulate an adequate democratic response to AI.[70] Further, as Jennifer Raso argues, we would be well advised to approach traditional legal and political theory concepts of agency in a new light and through an engagement with the critical data studies and new materialism literatures:

This work challenges conventional notions of who, or what, might be responsible for outcomes and who has (or ought to have) agential power. Critical new materialism scholarship, in particular, traces how subjects and objects are enmeshed. Its aims are both illustrative and political: the goal is to show how power functions in the world to make critical political economy analyses (and presumably their transformational outcomes) possible.[71]

The gradual and seemingly irreversible shift to ADM in the aforementioned example in the humanitarian aid space—as well as the other examples in loan distribution, recruitment, or border control—presents formidable normative and ethical challenges for a liberal political theory, for which the (rule of) law is a key component in the organization of daily democratic practice.[72] By significantly accentuating the shift of a growing number of sensitive transactions to market actors, as is already the case, say, in the areas of electronic, “blockchain” contracting or in the increasing use of AI in commercial arbitration,[73] ADM unleashes its dynamics in highly sensitive areas, ranging from bail and imprisonment conditions to decisions regarding access to mortgages, employment, or health care.[74] A key concern here is the difficulty of creating adequate forms of public oversight and democratic control of algorithmic governance processes.[75]

While, as noted earlier, such processes of replacing human choice with robots’ ADM remain the subject of critical investigation, there is another dimension to this shift, one which is bound to further undermine basic but essential tenets of representative and accountable political legitimacy. It lies not merely in how AI destabilizes systems of accountability and authority, but also in how AI has become a space where decisions are produced (“output”) while the totality of the data it draws on (“input”) is impossible to account for.[76] That space is not obviously congruent with what lawyers call legal jurisdiction. AI’s space-creating ability, in which the gathering (“extraction”), processing, and application of data are possible with almost no human input, fundamentally redraws the confines of the realm in which we once learned to engage the differences between government and governance in the context of increasing delegation of public authority (and legitimacy) to private actors.[77] It remains to be seen which lessons we can draw from that experience as we search for adequate regulatory frameworks for the emerging power infrastructures fuelled by AI.[78]