Abstracts
Résumé
This text explores the notion of the technological singularity, described as a hypothetical future in which a sudden leap in the capabilities of artificial intelligence would allow it to surpass human intelligence. It comprises three parts. The first describes the context and historical trajectory of the technological singularity. The second takes a critical look at the arguments put forward in favor of this phenomenon and at its scientific-looking façade; the counterarguments presented are drawn from the fundamental principles that govern scientific activity. Like many pseudoscientific claims, proponents of the technological singularity use scientific language to lend an appearance of truth to their predictions about what the future holds. The final part reflects on the intentions of the proponents of the technological singularity and on the consequences of their actions. Not only does it call for caution regarding the real motivations of the messiahs of the technological singularity, it also argues that critical thinking itself is under threat. It therefore raises reflections on striking a balance between science, work, and society, so that technology genuinely serves humanity without compromising the ethical foundations that guide it.
Keywords:
- Technological singularity,
- artificial intelligence,
- human intelligence,
- scientific principles,
- ethics
Abstract
This text explores the concept of the technological singularity, envisioned as a hypothetical future in which a sudden leap in artificial intelligence capabilities enables it to surpass human intelligence. It comprises three parts. The first describes the context and historical trajectory of the technological singularity. The second takes a critical look at the arguments in favor of the technological singularity and its scientific-looking façade. It argues that, like many pseudoscientific claims, proponents of the technological singularity often employ scientific language to lend credibility to speculative and unfounded predictions about the future. The counterarguments are rooted in the fundamental principles that guide rigorous scientific inquiry. The last part examines the intentions of the proponents of the technological singularity and the consequences of their actions. It calls for caution regarding the true motives of these so-called messiahs of the technological singularity and highlights the risks posed to the survival of critical thinking. Finally, it reflects on establishing a balance between science, work, and society to ensure that technology genuinely serves humanity without compromising the ethical foundations that guide it.
Keywords:
- Technological singularity,
- artificial intelligence,
- human intelligence,
- scientific principles,
- ethics

