Abstract
Generative artificial intelligence (GenAI) is becoming increasingly present in scientific research in the form of chatbots such as ChatGPT, which facilitate discovery, analysis, and writing. The first part of this article reviews the complex architecture of artificial intelligence and the socio-ethical benefits and risks associated with its use in academia. A selection of virtual assistants, classified by function, is proposed, along with an evaluation grid for these tools. Generative technology acts as a catalyst for profound changes in information culture, calling into question both methods of information retrieval and intellectual property in algorithmically generated content. The second part focuses on AI literacy and its intersection with information literacy, highlighting gaps in the literature and the need for further study. Because artificial intelligence is a cumulative, highly specialized, and highly controversial phenomenon, academic libraries are not yet equipped to implement this technology wisely; hence the importance of establishing a new competency framework and a training program adapted to the information and documentation professions.
