Abstract
Introduction: The availability and use of artificial intelligence (AI) tools are accelerating significantly. As these technologies proliferate, many post-secondary institutions have responded by banning students from using AI tools such as ChatGPT, framing their use as a breach of academic integrity.
Background: Despite these institutional responses, many students adopt these tools as part of their learning journey. In health care settings, the adoption of such tools in the context of patient care provision is a reality. Consequently, there is a relevant pedagogical opportunity to examine how such tools inform the experiential learning of nursing students and their future practice.
Methods: To address the dearth of information regarding nursing students’ perceptions of using AI tools, a Canadian university teaching team incorporated ChatGPT into an undergraduate nursing course assignment. A pilot quasi-experimental pre-post-test survey design was employed to examine student perceptions of using ChatGPT. After obtaining institutional ethics approval, a neutral third party collected the anonymous data.
Findings: Pilot study results highlighted significant student concerns regarding the ethics of using AI tools. Additionally, students described such tools as meaningful avenues to support learning access and equity. Finally, students identified a high probability of use of AI tools in their future practice, suggesting that exposure and support during learning can positively influence responses to these tools in practice settings.
Conclusion: The students surveyed are now practising nurses; thus, findings may provide insight into perceptions of new nurses regarding the integration of AI to support competencies required by the nurses of tomorrow.
Keywords: nursing students, pedagogy, ChatGPT, AI tools, survey
Résumé
Introduction : La disponibilité et l’utilisation d’outils d’intelligence artificielle (IA) connaissent une croissance fulgurante. Face à la prolifération de ces technologies, de nombreux établissements d’enseignement supérieur ont réagi en interdisant aux étudiantes et étudiants d’utiliser des outils d’IA comme ChatGPT, qualifiant ces moyens de violations de l’intégrité académique.
Contexte : Malgré ces réponses institutionnelles, de nombreux étudiantes et étudiants utilisent ces outils dans le cadre de leur parcours d’apprentissage. Dans les milieux de santé, l’utilisation de tels outils en contexte de soins aux patients est une réalité. Il existe donc une occasion pédagogique pertinente d’examiner comment ces outils éclairent l’apprentissage expérientiel des étudiantes et étudiants en sciences infirmières ainsi que leur future pratique.
Méthodes : Pour pallier le manque de connaissances sur la perception des étudiantes et étudiants en sciences infirmières quant à l’utilisation d’outils d’IA, une équipe pédagogique d’une université canadienne a intégré ChatGPT à un cours de premier cycle en sciences infirmières. Une étude pilote utilisant un devis quasi expérimental pré-test et post-test à l’aide d’un sondage a été menée pour examiner la perception étudiante quant à l’utilisation de ChatGPT. Après avoir obtenu l’approbation éthique de l’établissement, un tiers neutre a recueilli les données anonymes.
Résultats : Les résultats de l’étude pilote ont mis en évidence d’importantes préoccupations des étudiantes et étudiants concernant l’éthique de l’utilisation d’outils d’IA. Les étudiantes et étudiants ont également décrit ces outils comme des moyens pertinents de favoriser l’accès à l’apprentissage et l’équité. Enfin, les étudiantes et étudiants ont identifié une forte probabilité d’utiliser des outils d’IA dans leur pratique future, ce qui suggère que l’exposition et le soutien pendant l’apprentissage peuvent influencer positivement les réactions à ces outils dans les contextes professionnels.
Conclusion : Les étudiantes et étudiants qui ont participé au sondage sont désormais des infirmières et infirmiers qui exercent la profession; ainsi, les résultats peuvent donner un aperçu des perceptions des nouvelles infirmières et nouveaux infirmiers concernant l’intégration de l’IA pour soutenir les compétences requises par les infirmières et infirmiers de demain.
The availability and use of artificial intelligence (AI) tools within higher education, including nursing programs, are accelerating significantly (Foronda & Porter, 2024; Sallam, 2023). One such AI tool is ChatGPT version 3.5, which, when prompted, describes itself as
a language model developed by OpenAI called Generative Pre-trained Transformer-3.5. It is the third iteration of the GPT series, known for its advanced natural language processing capabilities. GPT-3.5 is a state-of-the-art language model that has been pre-trained on a diverse range of internet text and can generate coherent and contextually relevant responses to user inputs.
(OpenAI, 2024)
This study incorporated GPT-3.5 because, at the time, it was the most recent version available to students at no cost. As ChatGPT proliferated rapidly, with newer versions emerging continuously, many post-secondary institutions initially responded with bans and framed the conversation mainly around academic integrity. Despite these institutional responses, adoption of these new technologies persists.
Background
As generative AI technologies are adopted, educators are grappling with both how post-secondary students use these tools and how to address significant uptake in the context of supporting critical thinking and professional growth. Such considerations are especially important within the profession of nursing as these evolving technologies are currently transforming health care systems, with prospective use cases in areas as diverse as forecasting public health trends to identify health promotion opportunities, supporting clinical decision-making using electronic health record data, and managing supply chains (American Nurses Association, 2022). Thus, nurse educators are tasked with not only critically examining the impact of these tools but also incorporating them into nursing curricula. While there is much speculation about the benefits and concerns of ChatGPT in nursing education, a timely scoping review highlights the mostly editorial nature of this literature (Sallam, 2023) and the need for continuous research.
As outlined in the nursing education literature, ChatGPT challenges include negative effects on critical thinking development, especially evaluating research evidence (e.g., Choi et al., 2023); undue trust in notoriously incomplete and/or inaccurate AI tools (e.g., Flanagin et al., 2023; Shen et al., 2023); the introduction of biased and misleading information to student learning and health care (e.g., O’Connor, 2022); and the exacerbation of inequities for those who do not have access (Cotton et al., 2023). By far, the most severe challenge discussed in nursing education is the threat to academic integrity (Choi et al., 2023; Cotton et al., 2023). Academic integrity can be defined as “compliance with ethical and professional principles, standards, practices and consistent system of values, that serves as guidance for making decisions and taking actions in education, research and scholarship” (Tauginienė et al., 2018, pp. 8–9).
A plethora of opportunities exists alongside these challenges. With thoughtful use, AI tools such as ChatGPT may provide opportunities for tailored, learner-centred content and experiences to focus on complex practices (e.g., Irwin et al., 2023; Lim et al., 2022; Neumann et al., 2023). Such use may also be considered to be “future proofing” nursing education, in that AI tools can support educators to provide a learning environment that prepares students for contemporary practice settings (Irwin et al., 2023, p. 2). This preparation is particularly important as AI tools are currently being integrated into practice decision-making tools and electronic health records with the expectation that nurses will use them appropriately and have the skills to discern when the AI-driven tool is not serving the patient in front of them (Mello & Guha, 2024; Swiecki et al., 2022). In addition, the use of AI tools in nursing education potentially improves students’ digital literacy and allows future nurses to meaningfully contribute to the competent use and critical evaluation of digital technologies integrated into health care (Castonguay et al., 2023; Sun & Hoelscher, 2023). Overall, many scholars concur that AI tools hold the potential to assist students to appreciate the principles of academic integrity while promoting their critical thinking (Choi et al., 2023). Further, nurse educators must also understand the benefits and consequences of AI tools to support future nurses with professional writing and the requisite problem-solving skills (Le Lagadec et al., 2024).
Strategies that support nurse educators to capitalize on AI tools are also outlined in the literature. Underscoring these strategies is the recommendation to approach AI tools with caution while also engaging in professional reflection and ongoing evaluation of current pedagogical practices (Castonguay et al., 2023). The main recommendations for faculty include ensuring guiding principles are integrated into course syllabi (Foronda & Porter, 2024); preparing for inevitable discussions regarding ethics and practice challenges; improving technology proficiency; understanding the boundaries of academic integrity; and integrating multimodal learning opportunities using AI tools (e.g., simulation, reflection, collaboration) to demonstrate how new knowledge is applied (Sun & Hoelscher, 2023).
Numerous authors encourage further empirical research to better understand the impacts of ChatGPT on nursing education (Choi et al., 2023; Sallam, 2023). Some scholars have notably studied the use of ChatGPT to generate PICOT (population, intervention, comparison, outcome, time) questions (Branum & Schiavenato, 2023); insights into ChatGPT-assisted critical reflections after a simulated learning activity (Chan et al., 2023); the incorporation of chatbots in simulated emergency scenarios (Rodriguez-Arrastia et al., 2022); the evaluation of ChatGPT-generated care plans (Dağci et al., 2024); and the use of ChatGPT-generated materials for electronic fetal monitoring (Han et al., 2022). One study examining student experiences of ChatGPT case studies, which included a few nursing students, demonstrated the promise of AI-assisted pedagogical tools; however, these authors also recommend caution and further study of educational interventions involving ChatGPT (Tlili et al., 2023). Two other non-nursing studies considered student perceptions. Engineering students enjoyed using ChatGPT to assist with computer coding (Shoufan, 2023). Medical students reported that AI tools will be a career benefit and an important aspect of health care and emphasized that AI tools can never replace physicians (Buabbas et al., 2023). While the perceptions of medical, engineering, and other students provided insights, nursing students’ perceptions regarding the use of AI tools are missing. To address this dearth of information, we conducted an exploration of student perceptions to examine how they both understand and use AI tools for their academic studies.
Shortly before this study, our institution released preliminary guidelines to assist with navigating generative AI tools. Information at that time was vague and focused on discussing academic integrity (particularly plagiarism) and assignment expectations with students. Despite a lack of clarity, instructors were encouraged to consider how to mitigate classroom effects by providing alternative ways for students to demonstrate critical thinking. Since that time, our institution has developed a robust suite of policy tools, including a position statement (University of Victoria, 2024b), a primer for students (University of Victoria, 2024a), and ongoing opportunities for consultation and coaching to promote the ethical and appropriate use of this shifting and unpredictable technology. Our aim is that the findings presented here will support nurse educators in their creative use of AI tools in contemporary academic environments.
The Pedagogical Intervention
Drawing from the findings of empirical studies discussed earlier, we designed an assignment in which students were required to incorporate GPT-3.5 into the creation of their final product as well as critically appraise the GPT-3.5 responses to their prompts. Overall, the mandatory class assignment was the creation of a briefing note regarding a political issue that was meaningful to the student. Detailed instructions for both GPT-3.5 and the assignment expectations were included in the course syllabus. For the assignment, the students were instructed to use their thesis statement as a GPT-3.5 prompt to generate a briefing note. Then, the students used this generated text as an outline to consider and build upon for the final product. The students then responded to three reflective questions:
1) Does the information in the ChatGPT-generated text make sense? Is it credible? Is anything missing? Comment on the quality of the references provided.
2) Did the ChatGPT briefing note frame the policy issue the same way you did in the final version of your assignment? Did ChatGPT make any recommendations that differed from yours? Did this blending improve your learning or expand your understanding of how to frame a policy issue?
3) What were the challenges and/or benefits of using ChatGPT for this assignment?
Lastly, the ChatGPT prompt, the generated text, and the student’s responses to the reflective questions were required as appendices to the assignment.
The purpose of the reflection was to encourage students to think critically about both the use and veracity of AI tools in their writing. The assignment was reviewed in detail during the first class, and time was set aside in subsequent classes to answer student questions. Despite explicit and sanctioned use of AI tools for this assignment, students still expressed significant concern about inadvertent plagiarism. To address this concern, extra sessions with the educators were offered to answer student questions and review any written material they were concerned about.
Aim and Research Questions
The aim of the research was to examine nursing students’ perceptions of using ChatGPT version 3.5 within a course assignment. Considering the limited empirical guidance and the absence of institutional policies at the time this research was conducted, we were not evaluating the effectiveness of ChatGPT as a potential pedagogical tool in and of itself. Thus, our overarching research question was the following: How do nursing students perceive the use of ChatGPT before and after completing an assignment with a GPT-3.5 component? As well, we aimed to explore nursing students’ perceptions of GPT-3.5 and similar technologies in relation to their future practice.
Additionally, the aim of the primary researcher (an assistant professor at the time of the study) was to include a doctoral student as well as a practising nurse seeking to return to graduate studies in pedagogical inquiry. We were intrigued by the influence of AI and machine learning tools on nursing education and practice and sought to explore how to better support nursing students as the disruptive force of this technology progresses.
Methods
A pilot study with quasi-experimental pre- and post-test survey design was used to explore student perceptions of the explicit incorporation of ChatGPT version 3.5 into an assignment. The survey was designed by the research team and consisted of three parts: demographic details, 17 dichotomous/Likert-scale questions, and three open-ended questions informed by current literature (see Appendix A). Identical surveys, including the open-ended questions, were offered at the beginning of the course prior to discussions of the assignment and at the end of the semester after submission of the assignment for grading.
Participants
All fourth-year nursing students (n = 152) enrolled in a Western Canadian university for the 2023 fall semester were invited to participate in the study. Convenience sampling supported maximum participation during the data collection period. These participants were targeted as they were the only students in the university enrolled in a nursing course that incorporated GPT-3.5 into an assignment.
Recruitment
A recruitment poster approved by the Health Research Ethics Board (HREB) was shared with eligible students through the course’s online learning platform and was not sent to any student individually. The poster contained a link to the anonymous survey and was shared at the beginning of each data collection period; thus, students were free to access the survey at any time in that month outside of classroom time. The recruitment poster outlined that participation was optional and that there were no consequences to students for not participating. The recruitment poster contained the contact information of the neutral third-party researcher for any questions or clarification. The surveys were not discussed in class, and students were made aware that the surveys were not required for any aspect of the course content, assessments, or grading.
Data Collection Procedures
The authors contacted all instructors for the fall 2023 offering of the course in August and obtained permission for the neutral third-party researcher to share recruitment materials with students in their section. The pre-test survey was open from September 14 to October 13, 2023, and the post-test survey was open December 1 to 24, 2023.
Ethical Considerations
Institutional ethics approval (Ethics Protocol Number 23-0394) was obtained prior to commencing the surveys. As two of the researchers (LN and AW) were actively teaching the students in the same course during the semester in which data collection took place, a researcher (CF) not employed by the institution and with no relation to the students served as a neutral third party to recruit participants as well as to collect, de-identify, and collate the data via a secure survey platform. The neutral third-party researcher was also available to answer student questions and to reiterate that participation would not influence their academic progress. The students were aware that the two educators were also the researchers and that they would not discuss the surveys with the students.
The surveys were separate from classroom activities and were not part of any content, assignments, or grading for the term. The neutral third party was the only researcher who had access to the secure survey platform and the raw data. They opened and closed both surveys as well as shared the de-identified data with the educator–researchers after final grades were submitted. The students were made aware that no data sharing would occur until after the final grades were submitted. This strategy was employed to mitigate student perceptions that the surveys were connected with their academic evaluation. Students who chose not to participate faced no consequences, and the educator–researchers were never aware of which students did or did not participate.
Data Analysis
We used descriptive statistics to analyze the demographic characteristics of all respondents—specifically frequencies, range, and mean values (see Table 1). In addition to providing foundational clarity for the analysis, these descriptive statistics supported the qualitative thematic analysis of the participants’ subjective responses.
Table 1
Demographics of the Student Respondents
To assess for statistical significance in perceptions of ChatGPT before and after completing the assignment, pre- and post-test scores were paired and analyzed using IBM Statistical Product and Service Solutions (SPSS) version 29. Frequencies for nominal and ordinal data were generated, comparing pre- and post-test percentages for each question (see Table 2). Additionally, chi-square tests, which are suitable for determining statistical significance with nominal and ordinal data, were used to compare pre- and post-test responses (p < 0.05) (see Table 3).
Table 2
Pre- and Post-test Quantitative Questions (Yes/No)
Table 3
Pre- and Post-test Likert-Scale Questions
To examine the responses to the open-ended questions, we employed thematic analysis to look for patterns of meaning (themes) within the subjective text provided by the participants (Braun & Clarke, 2021). The analyzed open-ended questions included: You have indicated that ChatGPT has helped you overcome language or learning disability barriers; can you please explain? Are there any other challenges that you find ChatGPT helps you with? Is there anything else you would like to add? The researchers followed Braun and Clarke’s (2021) six phases and convened after independently reviewing the first 10 responses to ensure analysis was congruent between the researchers. As we became familiar with the data and generated initial codes, we looked for themes together. Finally, we agreed that the themes outlined in this report were an accurate reflection of our interpretation of the participants’ responses. We used direct quotes to illustrate themes. Due to the small number of paired responses (that is, responses from participants who completed both the baseline and the end-of-semester surveys), we used all responses, not exclusively the paired responses, for thematic analysis. We analyzed the baseline and end-of-semester surveys separately to explore any differences over the semester. Further, we viewed this initial process as “a starting point for [a] journey, not a map,” to understanding how students may perceive the use of ChatGPT in a written assignment (Braun et al., 2019, p. 424).
Results
Demographics
In total, 27 responses were paired, and we used these responses for quantitative analysis to answer the main research question: How do nursing students perceive the use of ChatGPT before and after completing an assignment with a GPT-3.5 component? The demographics of this group constitute the core data that could be used to answer the original research question. Ages ranged from 20 to 45. This group self-identified as 44.5% European, 3.7% Indigenous, 3.7% African, 18.5% Asian, and 7.4% Middle Eastern, with 22.2% choosing the “other” category. A total of 96.3% identified as women, and 3.7% identified as men. Demographics for the overall pre-test survey (n = 82) and post-test survey (n = 38) responses were similar (see Table 1). We include all demographic data (the paired responses, the pre-test responses, and the post-test responses) as we make further comparisons to help us understand the data.
Survey Findings: Quantifiable Responses
Analysis of the 27 paired responses finds statistical significance for only one question: Have you used AI/machine learning tools such as ChatGPT in assignments before? During data collection, a surge of AI-related discussion across the broader university contributed to some confusion, as students received multiple questionnaires or notices from various university entities; however, meaningful information can be gleaned from the overall data set. Most notably, almost half of overall respondents (both pre- and post-test) reported that they believed, to some extent, that using ChatGPT for assignments is not ethical, despite significant class discussion and opportunities to ask questions. A total of 67% of all students reported that they had not used ChatGPT for academic assignments prior to the current semester (see Table 2). Many students agreed that ChatGPT helped them overcome language or learning disability barriers (29% paired; 37% overall). At the end of the semester, approximately half agreed that ChatGPT will improve nursing education (59% paired; 58% overall), though fewer agreed that ChatGPT will prepare them for nursing practice (37% paired; 50% overall). As well, 77% of the paired responses (66.3% overall) believed that some aspect of nursing will be replaced by AI tools in their lifetime, with 77% (76% overall) also agreeing that AI tools will become part of their nursing practice (see Table 3).
Survey Findings: Narrative Response Themes
Analysis of the baseline survey (n = 82) revealed a surprising number of students who had never used ChatGPT (67.1%). Those with ChatGPT experience commonly described employing it for “translating,” “using it like a thesaurus,” or to “proofread for spelling mistakes.” Many students expressed skepticism of ChatGPT, stating variations of the idea that “students should be able to write a paper independently.” In addition, many reported that ChatGPT “provides answers that are not applicable” and “references that are not real.” Students expressed much concern about being accused of plagiarism, both in the survey and during class time, even though ChatGPT guidance was explicitly included with the assignment description in the course syllabus. More than half of the pre-test respondents left the open-ended questions blank.
While fewer students responded to the end-of-semester survey (n = 38), the responses were more robust. Thematic analysis highlighted three nested themes: “still skeptical,” “just tools,” and “an equalizer.”
“Still Skeptical”
As aligned with quantitative findings, many students still expressed skepticism even after using ChatGPT in an assignment. The major concern continued to be the fear of unintentional plagiarism and/or being accused of plagiarism or cheating. As one student commented, “I do not think we should be incorporating AI into assignments as I believe it dampens creativity and increases the risk of cheating.” Students commented on wanting further direction regarding the implications, explaining “it is in the nursing school’s … whole best interest to learn how to work with this technology, assess students differently and teach students to use a useful resource that is not going anywhere.” One student expressed concern regarding human reliance on such AI tools for “ideas and brainstorming” and the impact in practice environments if future colleagues are “not prepared for complex medical situations.”
“Just Tools”
After completing the assignment, students described ChatGPT and related technologies as “just tools” perceived to meet a wide range of student needs, such as to assist them when they were “overwhelmed by too much information” and needed “a place to start,” to expand thinking to otherwise “unknown” perspectives, and to allow them to see what “is possible” when considering their topic situated within nursing practice. In addition to assistance with basic grammar, spelling, and writing composition, students noted that ChatGPT gave them “feedback on [their] writing.” As well, students reported ChatGPT was “helpful” for “simplifying academic language and medical jargon.” Some students indicated that ChatGPT provided them with an avenue for “conversation” to explore topics, thus supporting their understanding of “complex” and/or “long readings,” which may be particularly meaningful in a cohort who experienced disruptions to their education related to the COVID-19 pandemic. As one student summarized, “ChatGPT has helped me exponentially expand my critical thinking.”
“An Equalizer”
An unexpected number of responses described ChatGPT as “an equalizer.” Students reported that ChatGPT supported their English skills and helped them overcome disability-related barriers that had historically affected their academic success. As one student described, “English is my second language so using [ChatGPT] to help me explain and explore my ideas really helps.” Another student concurred, stating, “expressing myself [in English] properly in essays has always been difficult for me to do.” Students who stated that they had learning disabilities described the perceived benefits of using ChatGPT in conjunction with their academic work. One student stated, “It helped me feel more free and less bound by my disability to learn, communicate … and engage with the world around me.” Students who identified as having attention deficit hyperactivity disorder (ADHD) added that ChatGPT helped with “assignments that [were] hard to start due to ADHD,” helped them “figure out an outline,” and pointed them “in the direction of good sources of information.” Students who stated that they had anxiety reported that using ChatGPT “help[ed them] start” when they were “too overwhelmed to think” and stopped “the circles of overcomplicating [themselves].”
Discussion
Contrary to debates in the higher education literature and media, a significant number of respondents in our study reported that they did not engage in unauthorized use of ChatGPT for academic work. Further, the students appeared to be as skeptical of, and concerned about, academic integrity and cheating as educators are. Students are struggling, just as nurse educators are, to figure out how such technologies fit within their learning and future practice. In addition to the importance of providing clear guidance in syllabi and classroom discussions, our findings point to students’ desire to understand ethical use of AI tools, with a concurrent need for mentorship and guidance to navigate uncharted territory as AI tools become an inextricable aspect of contemporary education. Student perspectives also need to be honoured and included in the policy work currently under way in most academic environments. Unexamined assumptions about student experiences and opinions in relation to AI tools may both underestimate the integrity of student populations and overly limit the safe opportunities necessary to explore the parameters of AI tools that will be embedded in the professional practice environments students enter upon graduation.
The detailed descriptions of AI tools as an “equalizer,” especially considering significant educator efforts, highlight how students continue to experience barriers and inequities in their education. These students described ChatGPT as an enabler to overcome challenges and contribute to their academic success. AI tools might provide an opportunity to learn without being exhausted from the mental power required to regularly deal with barriers. Such inclusive pedagogy is an important aspect of Universal Design for Learning (UDL) (Fornauf & Erickson, 2019). UDL is an evidence-informed approach to education that aims to remove barriers and inequities to learning while supporting students to meaningfully access knowledge to enhance their academic experiences (Davies et al., 2013). In this way, AI tools can be harnessed to elevate educational experiences to be truly inclusive and accessible with as much focus on using such tools for pedagogical good as on plagiarism and cheating (Kumar, 2023).
The integration of ChatGPT and related AI tools into both academic settings and practice environments seems inevitable (Castonguay et al., 2023; Oermann, 2024). Our findings demonstrate that such tools may support students with writing and critical thinking skills; however, demystifying these tools with clear expectations is imperative. As technology and new knowledge changes rapidly, expectations cannot be static. Instead, ongoing conversation about ethical and acceptable use for nursing students and nurse educators is crucial. As these technologies evolve and change, so, too, will nursing, across the career trajectory.
Limitations
Limitations include the use of a convenience sample; reliance on self-reported data in a context in which students could perceive power imbalances; surveys that were created by the authors and not validated; and the influence of unanticipated, multiple concurrent institutional surveys on the same topic. Finally, the frequent use of single-group pre- and post-test designs in nursing education research is a limitation in itself (Spurlock, 2018). Thus, we see this pilot study as a map for future research and do not make any claims of causation. Our original purpose was to create space to talk about AI tools at a time when universities and colleagues were banning such technologies. In less than a year, these bans have been reconsidered, highlighting the temporal limitations of reporting in a climate of intense and rapid change.
Pedagogical Implications
This pilot study offers an opportunity to explore how nursing students perceive AI tools such as ChatGPT in relation to their learning and highlights the profound influence these tools will have on the learning processes of both nursing students and educators. It also provides insights into how nursing students’ perceptions can help educators design assignments that balance the use of AI tools with critical thinking skills. Future research aimed at informing best practices for using AI tools in nursing education without compromising academic integrity is greatly needed. We must move towards pedagogical methods that encourage creative and independent thinking while upholding principles of academic integrity.
In addition, better understanding the potential of AI tools to support students in the post-traditional learning environments of today’s classrooms will be essential as we strive for equity. We must be especially mindful that such technologies are poised to be an integral part of health care, as they are increasingly incorporated into practice supports such as decision-making tools. Nursing students’ understandings of how AI tools fit into their education will inform how they practise in digitally enhanced work environments. Evaluating the long-term impacts of AI tools on learning outcomes, professional development, and patient outcomes begins in the classroom. Nurse educators are at the forefront of this unpredictable workplace transition, which is amplifying digital integration in nursing practice, and generating evidence to support this transition is imperative.
Conclusion
Our pilot study serves as a starting point to understand how we might engage nursing students in a pedagogically sound manner to support their journey to becoming critically thinking professionals in a way that recognizes and works with the changing nature of technologies rather than resists it. As outlined in the Canadian Nurses Association and the Canadian Nursing Informatics Association’s (2024) position statement Nursing Practice in Digitally Enabled Care Environments, these emerging technologies are reshaping the health care landscape. Nurse educators will also need to transform the classroom. The integration of AI tools into undergraduate nursing education can support the competencies required for the nurses of tomorrow. We must continue to explore how ever-evolving generative AI technologies will inevitably change us, as well.
Appendix A. Survey Questions
Please provide the last 4 digits of your phone number so your pre- and post-test surveys can be linked. Once your surveys are linked, this information will be deleted by the neutral third-party researcher. (Identifier to match pre-post-test surveys)
How old are you? (actual number)
What gender do you identify with?
Man
Woman
Prefer not to disclose
Prefer to self-identify (textbox)
What is your cultural background? Choose all that apply.
African
European
East Asian
South Asian
South East Asian
First Nations or Indigenous
Hispanic or Latinx
Middle Eastern
Prefer not to answer
Other
Are you aware of the institutional Use of artificial intelligence tools and implications for Academic Integrity policy? (Yes/No)
Do you think using ChatGPT for assignments is ethical? (Yes/No)
Should ChatGPT be banned? (Yes/No)
Is ChatGPT useful as a sort of search engine to generate and/or discuss ideas? (Yes/No)
Have you used artificial intelligence/machine learning tools such as ChatGPT in assignments before? (Yes/No)
If so, do you think the instructor was aware? (Yes/No)
ChatGPT helps me overcome language or learning disability barriers (Yes/No)
You have indicated that ChatGPT has helped you overcome language or learning disability barriers; please explain in the text box below. (textbox)
Using the scale below, how much do you agree with the following statements: Strongly disagree, Disagree, Somewhat disagree, Somewhat agree, Agree, Strongly agree.
ChatGPT is an accurate learning tool.
ChatGPT provides more opportunities for student cheating.
ChatGPT saves me time when I am overwhelmed.
ChatGPT helps me start a paper with a thesis when I am stuck.
ChatGPT helps me write better.
ChatGPT will improve my nursing education.
Learning to use ChatGPT in school will prepare me for nursing practice.
ChatGPT helps me overcome language or learning disability barriers.
Artificial intelligence/machine learning tools will replace some aspects of nursing in my lifetime.
Artificial intelligence/machine learning and similar tools will become a part of nursing practice.
Are there any other challenges that you find ChatGPT helps you with? (text box)
Is there anything else you would like to add? (text box)
Bibliography
- American Nurses Association. (2022). The ethical use of artificial intelligence in nursing practice. https://www.nursingworld.org/globalassets/practiceandpolicy/nursing-excellence/ana-position-statements/the-ethical-use-of-artificial-intelligence-in-nursing-practice_bod-approved-12_20_22.pdf
- Branum, C., & Schiavenato, M. (2023). Can ChatGPT accurately answer a PICOT question? Assessing AI response to a clinical question. Nurse Educator, 48(5), 231–233. https://doi.org/10.1097/NNE.0000000000001436
- Braun, V., & Clarke, V. (2021). Thematic analysis: A practical guide. SAGE Publications.
- Braun, V., Clarke, V., & Hayfield, N. (2019). “A starting point for your journey, not a map”: Nikki Hayfield in conversation with Virginia Braun and Victoria Clarke about thematic analysis. Qualitative Research in Psychology, 19(2), 424–445. https://doi.org/10.1080/14780887.2019.1670765
- Buabbas, A. J., Miskin, B., Alnaqi, A. A., Ayed, A. K., Shehab, A. A., Syed-Abdul, S., & Uddin, M. (2023). Investigating students’ perceptions towards artificial intelligence in medical education. Healthcare, 11(9), 1298. https://doi.org/10.3390/healthcare11091298
- Canadian Nurses Association & Canadian Nursing Informatics Association. (2024, November). Nursing practice in digitally enabled care environments. https://www.cna-aiic.ca/en/policy-advocacy/policy-support-tools/position-statements
- Castonguay, A., Farthing, P., Davies, S., Vogelsang, L., Kleib, M., Risling, T., & Green, N. (2023). Revolutionizing nursing education through AI integration: A reflection on the disruptive impact of ChatGPT. Nurse Education Today, 129, 105916. https://doi.org/10.1016/j.nedt.2023.105916
- Chan, M. M. K., Wong, I. S. F., Yau, S. Y., & Lam, V. S. F. (2023). Critical reflection on using ChatGPT in student learning: Benefits or potential risks? Nurse Educator, 48(6), E200–E201. https://doi.org/10.1097/NNE.0000000000001476
- Choi, E. P., Lee, J. J., Ho, M.-H., Kwok, J. Y. Y., & Lok, K. Y. (2023). Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Education Today, 125, 105796. https://doi.org/10.1016/j.nedt.2023.105796
- Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148
- Dağci, M., Çam, F., & Dost, A. (2024). Reliability and quality of the nursing care planning texts generated by ChatGPT. Nurse Educator, 49(3), E109–E114. https://doi.org/10.1097/NNE.0000000000001566
- Davies, P. L., Schelly, C. L., & Spooner, C. L. (2013). Measuring the effectiveness of universal design for learning intervention in postsecondary education. Journal of Postsecondary Education and Disability, 26(3), 195–220.
- Flanagin, A., Kendall-Taylor, J., & Bibbins-Domingo, K. (2023). Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA, 330(8), 702–703. https://doi.org/10.1001/jama.2023.12500
- Foronda, C., & Porter, A. (2024). Strategies to incorporate artificial intelligence in nursing education. Nurse Educator, 49(3), 173–174. https://doi.org/10.1097/NNE.0000000000001584
- Fornauf, B. S., & Erickson, J. D. (2019). Toward an inclusive pedagogy through universal design for learning in higher education: A review of the literature. Journal of Postsecondary Education and Disability, 33(2), 183–199.
- Han, J.-W., Park, J., & Lee, H. (2022). Analysis of the effect of an artificial intelligence chatbot educational program on non-face-to-face classes: A quasi-experimental study. BMC Medical Education, 22, Article 830. https://doi.org/10.1186/s12909-022-03898-3
- Himsworth, C., Byers, K., & Gardy, J. (2021). The mission, the message, and the medium. Pressbooks. https://pressbooks.bccampus.ca/missionmessagemedium/
- Irwin, P., Jones, D., & Fealy, S. (2023). What is ChatGPT and what do we do with it? Implications of the age of AI for nursing and midwifery practice and education. Nurse Education Today, 127, 105835. https://doi.org/10.1016/j.nedt.2023.105835
- Kumar, A. H. S. (2023). Analysis of ChatGPT tool to assess the potential of its utility for academic writing in biomedical domain. Biology, Engineering, Medicine and Science Reports, 9(1), 24–30. https://doi.org/10.5530/bems.9.1.5
- Le Lagadec, D., Jackson, D., & Cleary, M. (2024). Artificial intelligence in nursing education: Prospects and pitfalls. Journal of Advanced Nursing, 80(10), 3883–3885. https://doi.org/10.1111/jan.16276
- Lim, C.-P., Chen, Y.-W., Vaidya, A., Mahorkar, C., & Jain, L. C. (Eds.). (2022). Handbook of artificial intelligence in healthcare: Vol. 2. Practicalities and prospects. Springer Nature.
- Mello, M. M., & Guha, N. (2024). Understanding liability risk from using health care artificial intelligence tools. The New England Journal of Medicine, 390(3), 271–278. https://doi.org/10.1056/NEJMhle2308901
- Neumann, M., Rauschenberger, M., & Schön, E.-M. (2023, May 16). “We need to talk about ChatGPT”: The future of AI and higher education [Conference paper]. 2023 IEEE/ACM 5th International Workshop on Software Engineering Education for the Next Generation (SEENG), Melbourne, Australia. https://doi.org/10.1109/SEENG59157.2023.00010
- Oermann, M. H. (2024). Using AI to write scholarly articles in nursing. Nurse Educator, 49(1), 52. https://doi.org/10.1097/NNE.0000000000001577
- OpenAI. (2024). ChatGPT (April 20 version) [Large language model]. https://chat.openai.com
- Rodriguez-Arrastia, M., Martinez-Ortigosa, A., Ruiz-Gonzalez, C., Ropero-Padilla, C., Roman, P., & Sanchez-Labraca, N. (2022). Experiences and perceptions of final-year nursing students of using a chatbot in a simulated emergency situation: A qualitative study. Journal of Nursing Management, 30(8), 3874–3884. https://doi.org/10.1111/jonm.13630
- Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare, 11(6), 887. https://doi.org/10.3390/healthcare11060887
- Shen, Y., Heacock, L., Elias, J., Hentel, K., Reig, B., Shih, G., & Moy, L. (2023). ChatGPT and other large language models are double-edged swords. Radiology, 307(2). https://doi.org/10.1148/radiol.230163
- Shoufan, A. (2023). Exploring students’ perceptions of ChatGPT: Thematic analysis and follow-up survey. IEEE Access, 11, 38805–38818. https://doi.org/10.1109/access.2023.3268224
- Spurlock, D. R., Jr. (2018). The single-group, pre- and posttest design in nursing education research: It’s time to move on. Journal of Nursing Education, 57(2), 69–71. https://doi.org/10.3928/01484834-20180123-02
- Sun, G. H., & Hoelscher, S. H. (2023). The ChatGPT storm and what faculty can do. Nurse Educator, 48(3), 119–124. https://doi.org/10.1097/NNE.0000000000001390
- Swiecki, Z., Khosravi, H., Chen, G., Martinez-Maldonado, R., Lodge, J. M., Milligan, S., Selwyn, N., & Gašević, D. (2022). Assessment in the age of artificial intelligence. Computers and Education: Artificial Intelligence, 3, 100075. https://doi.org/10.1016/j.caeai.2022.100075
- Tauginienė, L., Gaižauskaitė, I., Glendinning, I., Kravjar, J., Ojsteršek, M., Ribeiro, L., Odiņeca, T., Marino, F., Cosentino, M., Sivasubramaniam, S., & Foltýnek, T. (2018). Glossary for academic integrity: Report (revised version). European Network for Academic Integrity. https://www.academicintegrity.eu/wp/wp-content/uploads/2023/02/EN-Glossary_revised_final_24.02.23.pdf
- Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10, Article 15. https://doi.org/10.1186/s40561-023-00237-x
- University of Victoria. (2024a). Read this before you use ChatGPT for your assignments. https://onlineacademiccommunity.uvic.ca/uviclearn/technology/read-this-before-you-use-chatgpt-for-your-assignments/
- University of Victoria. (2024b, November 22). UVic’s generative artificial intelligence (GenAI) position statement. https://teachanywhere.uvic.ca/top-post/genai-position-statement/
List of tables
Table 1
Demographics of the Student Respondents
Table 2
Pre- and Post-test Quantitative Questions (Yes/No)
Table 3
Pre- and Post-test Likert-Scale Questions