Lettre à l’éditeur / Letter to the Editor

Ethical AI: More Than Just Responsible or Trustworthy


  • Timothé Ménard
F. Hoffmann-La Roche AG, CH-4070 Basel, Switzerland
    timothe.menard@roche.com

  • Katrina A. Bramstedt
F. Hoffmann-La Roche AG, CH-4070 Basel, Switzerland
    Queensland University of Technology School of Medicine, Brisbane, Australia

Reçu / Received

16/07/2025

Conflits d’intérêts / Conflicts of Interest

Au moment où cette lettre a été rédigée, Timothé Ménard et Katrina Bramstedt étaient employés par Roche et détenaient des actions de F. Hoffmann-La Roche AG. / At the time this letter was written, Timothé Ménard and Katrina Bramstedt were employed by Roche and owned F. Hoffmann-La Roche AG stock.

Édition / Editors

Hazar Haidar & Aliya Affdal

Critical discussions about AI governance are often obscured by the interchangeable use of three terms: “ethical AI” (1), “responsible AI” (2), and “trustworthy AI” (3). While related, the distinctions among them are profound, and we believe the focus must be squarely on ethical AI, as it provides a foundational moral framework that extends beyond the operational scope of “responsible” or “trustworthy” practices. Responsibility and trust are subsets of the broader domain of ethics. Further, “responsible AI” is too often aligned with legal defensibility, corporate compliance, and accountability (2,3), while “trustworthy AI”, with frameworks like “Z-Inspection”, concentrates on technical reliability and quality control (4). These concepts are necessary but fall short of the bigger picture that is ethical AI. Reducing ethics to a checklist of compliance and functionality sidesteps the deeper and more complex engagement with moral values and societal good.

The importance of this distinction becomes clear in high-stakes fields like pharmaceutical, device, and diagnostics research and development (1). An AI-assisted system can be fully compliant (“responsible”) and technically flawless (“trustworthy”) yet still lead to profoundly inequitable outcomes. Consider an AI tool for clinical trial recruitment that, in optimizing for data completeness, marginalizes underrepresented populations. This is not a technical glitch; it is an ethical failure that perpetuates health inequities. As another example, an AI-assisted system could be created and deployed as a bioweapon against humans, violating the principle of ethical use.

These examples highlight a foundational concept: legality is not a substitute for ethics, as things can be legal (and compliant) yet unethical. Technology invariably outpaces regulation, creating a vast space where specific laws do not yet exist for every ethical dilemma. A framework grounded in moral principles is therefore essential to navigate this territory, moving beyond what is merely legal to what is normatively right. Ensuring that AI practices are sound requires a proactive approach that places ethics at the forefront of innovation. We must move the conversation beyond the comfortable but insufficient language of responsibility and trustworthiness. Prioritizing ethical AI is a commitment to rigorous moral scrutiny, especially in critical fields like drug development, where the ultimate measure of success is the advancement of human health and equity.