Keywords:
- artificial intelligence,
- AI,
- bioethics,
- responsible,
- trustworthy
Critical discussions about AI governance are often obscured by the interchangeable use of terms: “ethical AI” (1), “responsible AI” (2), and “trustworthy AI” (3). While related, their distinctions are profound, and we believe the focus must be squarely on ethical AI, because it provides a foundational moral framework that extends beyond the operational scope of “responsible” or “trustworthy” practices. Responsibility and trust are subsets of the overarching domain of ethics. Further, “responsible AI” is too often aligned with legal defensibility, corporate compliance, and accountability (2,3), while “trustworthy AI”, with frameworks such as “Z-Inspection”, concentrates on technical reliability and quality control (4). These concepts are necessary but fall short of the bigger picture that is ethical AI. Reducing ethics to a checklist of compliance and functionality sidesteps the deeper, more complex engagement with moral values and societal good.

The importance of this distinction becomes clear in high-stakes fields such as pharmaceutical, device, and diagnostic research and development (1). An AI-assisted system can be fully compliant (“responsible”) and technically flawless (“trustworthy”) yet still lead to profoundly inequitable outcomes. Consider an AI tool for clinical trial recruitment that, in optimizing for data completeness, marginalizes underrepresented populations. This is not a technical glitch; it is an ethical failure that perpetuates health inequities. As another example, an AI-assisted system could be created and deployed as a human bioweapon, violating the principle of ethical use. These cases highlight a foundational concept: legality is not a substitute for ethics, as a practice can be legal (and compliant) yet unethical. Technology invariably outpaces regulation, creating a vast space where specific laws do not yet exist for every ethical dilemma.
A framework grounded in moral principles is therefore essential to navigate this territory, moving beyond what is merely legal to what is normatively right. Ensuring that AI practices are sound requires a proactive approach that places ethics at the forefront of innovation. We must move the conversation beyond the comfortable but insufficient language of responsibility and trustworthiness. Prioritizing ethical AI is a commitment to rigorous moral scrutiny, especially in critical fields like drug development, where the ultimate measure of success is the advancement of human health and equity.
Bibliography
- 1. Ménard T, Bramstedt KA. Developing a set of AI ethics principles to shape ethical behavior in drug development. Ther Innov Regul Sci. 2025;59(3):399-402.
- 2. Goellner S, Tropmann-Frick M, Brumen B. Responsible artificial intelligence: a structured literature review. 2024; arXiv:2403.06910.
- 3. Bouhouita-Guermech S, Haidar H. Scoping review shows the dynamics and complexities inherent to the notion of “responsibility” in artificial intelligence within the healthcare context. Asian Bioeth Rev. 2024;16(3):315-44.
- 4. Zicari RV, Amann J, Bruneault F, et al. How to assess trustworthy AI in practice. 2022; arXiv:2206.09887v2.

