AI is Here!
Integrating artificial intelligence (AI) into healthcare education is transforming conventional approaches to professional development. With its ability to enhance educational delivery and clinical decision-making, AI is poised to address the growing complexities of patient care and the dynamic healthcare landscape (Shepherd, 2023; Shepherd & Griesheimer, 2024).
This guide demonstrates how educators and L&D teams can safely and effectively utilise AI while emphasising the critical need to balance innovation with ethical responsibility.
What is AI?
Defined as "the simulation of human intelligence processes by machines, such as computer systems" (Coursera Staff, 2024a), AI encompasses tools like generative AI (GenAI), which can create images, text, videos, and other media based on prompts. These tools are trained on vast repositories of existing online documents and artifacts (Coursera Staff, 2024b).
Examples of specific AI tools include:
- ChatGPT: A large language model capable of generating text, answering questions, and assisting with content creation.
- DALL·E: A generative AI tool for creating images from text prompts.
- Grammarly: An AI-powered writing assistant for grammar, clarity, and style improvements.
- Canva AI: A design tool integrating AI to generate text-to-image designs and visual content.
- Synthesia: A platform for creating AI-generated videos with lifelike avatars.
- H5P: An interactive content creation platform for educators, enhanced with AI capabilities.
Harnessing AI for Productivity and Innovation
Healthcare educators increasingly use GenAI tools to create personalised educational activities, enhance content delivery, and analyse evaluation data. By customising learning experiences, these tools can support education and L&D teams to achieve crucial competencies focused on the demands of modern practice (Hulick, 2023).
GenAI assists in improving productivity and efficiency, which is particularly valuable during periods of workforce pressure when education resources and budgets may face significant constraints.
These tools help streamline processes, reduce administrative burdens, and maximise the impact of limited resources, enabling L&D teams to continue planning, implementing and evaluating high-quality education and training despite challenging circumstances and growing compliance requirements.
Benefits, Limitations, and Ethical Considerations
Healthcare educators adopting AI must understand its benefits, limitations, and ethical considerations.
Benefits
Benefits of using GenAI in L&D teams include:
- Efficiency - GenAI can automate administrative tasks, saving educators time and increasing productivity.
- Data insights - GenAI can support L&D teams in planning and evaluating training effectiveness by identifying and validating new needs and knowledge gaps.
- Content integrity - Tools like Grammarly can flag errors in training materials, detect plagiarism, and estimate how much of the content is AI-generated. This assists teams in maintaining content integrity and accuracy and ensuring materials are evidence-based.
- Engagement - Certain tools may help create interactive learning experiences such as images, videos, and quizzes more quickly and at lower cost.
- Rapid content development - Tools such as ChatGPT can speed up the creation of learning materials, allowing educators to quickly develop modules tailored to emerging needs or regulatory changes.
Using AI to Improve Evaluation
A key benefit we want to highlight is the ability to use AI to analyse qualitative and quantitative data from educational activities and create summative evaluations.
Evaluation is a key aspect of the best-practice cycle for developing educational initiatives, yet it is often an area educators dedicate little time to, lack confidence in, or have limited experience with. Evaluating the impact of an educational activity is nonetheless critical to demonstrating the effectiveness of the education and the value that an L&D team brings to the organisation.
By inputting free-text comments into AI tools with clear instructions, educators can identify trends and areas for improvement. However, human oversight remains crucial to validate the analysis and address any inaccuracies (Prescott et al., 2024).
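The short Python sketch below illustrates one possible approach to this kind of analysis. It is a minimal example only, assuming access to OpenAI's Python client; the model name, prompt wording, and sample comments are illustrative placeholders, and real learner comments should always be de-identified before being sent to any external service.

```python
# A minimal sketch: summarising free-text evaluation comments with an LLM.
# Assumes the `openai` package is installed and an API key is configured;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

# De-identified example comments from a post-session evaluation form.
comments = [
    "The case studies were realistic, but the session ran over time.",
    "More hands-on practice with the new infusion pumps, please.",
    "Great facilitator - would like a follow-up on documentation requirements.",
]

prompt = (
    "You are assisting a healthcare learning and development team. "
    "Summarise the recurring themes in the following learner comments, "
    "then list the top three suggested improvements:\n\n"
    + "\n".join(f"- {c}" for c in comments)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your organisation approves
    messages=[{"role": "user", "content": prompt}],
)

# The output still requires human review for accuracy and bias (Prescott et al., 2024).
print(response.choices[0].message.content)
```

The same approach can be adapted to structured survey data by including summary statistics in the prompt, but, as above, an educator should validate any AI-generated summary before it informs decisions.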
These benefits, particularly with respect to evaluation, demonstrate how GenAI can assist educators and L&D teams in creating innovative, efficient, and accurate learning experiences.
Limitations
Despite the clear benefits, there are, of course, limitations to using GenAI in education. Key limitations include:
- Biases in AI models - AI tools can reflect biases in their training model data, leading to skewed or inaccurate outputs.
- Limited context - Tools can misinterpret prompts or generate content that lacks real-world relevance without human oversight.
- Dependence on data quality - AI's effectiveness depends on the quality, accuracy, and relevance of the data it has been trained on, which may sometimes be outdated or incomplete.
- Impact on creativity - Excessive dependence on AI tools might weaken critical thinking and creativity among educators (and learners).
- Privacy concerns - If GenAI tools are not used safely and ethically, data security, learner privacy, and transparency can all be compromised.
Ethical Considerations
While GenAI offers significant potential, its use in education requires careful attention to ethical considerations. Key ethical considerations include:
- Data privacy and security - Using GenAI to process sensitive learner or organisational data may breach organisational, state or federal privacy regulations.
- Bias and perpetuation of stereotypes - Preventing AI-generated content from perpetuating biases or excluding certain groups due to biased training datasets is critical for ethical use.
- Transparency - Disclosing when AI tools are used in creating or delivering educational materials to maintain trust and accountability.
- Human oversight - Robust content creation processes help avoid overreliance on AI by ensuring a human (L&D professional or educator) remains responsible for assessing accuracy, potential bias, content integrity and relevance.
Building AI Literacy and Ethical Use
To integrate AI effectively, healthcare educators must practise intentional learning by identifying their own gaps in AI knowledge and actively addressing ethical, legal, and practical challenges (Karatas et al., 2024). Educators must be able to clearly explain AI's applications and limitations, use it responsibly, and foster critical thinking skills among AI users.
These efforts ensure that AI enhances, rather than replaces, clinical judgment within an L&D and education team.
While educators may become champions for AI use in their organisations, they must remain attentive to the potential for misuse or over-reliance. Understanding how to use AI tools appropriately is essential to maximising their benefits without compromising human expertise.
Using AI in Healthcare Education
Effective prompt engineering and the use of robust frameworks are practical ways to maintain educational integrity and overcome the limitations of GenAI tools.
1. Effective Prompt Engineering
Large language models, such as AI chatbots, offer a range of applications in healthcare education, from idea generation to content enhancement. However, effective use of these tools requires a clear understanding of prompt engineering. Crafting well-structured prompts helps ensure the AI tool produces relevant and accurate outputs.
Key elements of an effective prompt include:
- Clarity - Be direct and use clear, unambiguous language.
- Alignment with objectives - Clearly describe what you want the GenAI tool to deliver in its response.
- Target audience awareness - Tailor the prompt to the identified need, your desired outcomes and the target audience.
- Context - Provide sufficient background information to guide the AI tool.
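To see how these elements work together, an illustrative prompt (the scenario is hypothetical) might read: "Write three multiple-choice questions assessing safe administration of high-risk medications for newly registered nurses on an acute medical ward. Align the questions with our medication safety learning objectives, use plain clinical language, and include the correct answer with a one-sentence rationale for each." The request is unambiguous, states the objective, names the audience, and supplies the context the tool needs.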
2. Structured Prompt Frameworks
Frameworks including COSTAR, CLEAR and SMART provide structured approaches to crafting effective prompts (Medium, 2024):
- COSTAR: Focuses on Context, Objective, Style, Tone, Audience, and Response.
- CLEAR: Emphasises Conciseness, Logic, Explicitness, Adaptiveness, and Reflectiveness.
- SMART: Targets Specific, Measurable, Achievable, Relevant, and Time-bound outcomes.
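As an illustration of how a framework shapes a prompt, a COSTAR-structured request (the scenario is hypothetical) might specify:
- Context: a regional hospital introducing a new sepsis recognition pathway.
- Objective: generate a ten-question knowledge check for an upcoming education session.
- Style: concise and clinically accurate.
- Tone: supportive and professional.
- Audience: emergency department nurses with varying levels of experience.
- Response: a numbered list of questions, each with the correct answer and a one-sentence rationale.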
To explore examples of prompts for content creation in nursing education, refer to Prompt Engineering for Nurse Educators (Sun, 2024).
Addressing Bias and Ensuring Inclusive Practices
Despite its potential, AI is not flawless. As discussed above under limitations and ethical considerations, biases in pre-existing datasets can lead to exclusionary or inequitable content, as AI lacks situational awareness and the ability to contextualise real-world experiences (Agarwal et al., 2023; Christensen et al., 2021; Obermeyer et al., 2019). To mitigate these risks, educators must encourage the creation of organisational policies requiring human oversight and ethical use of AI tools in education.
Organisations should establish guidelines for AI deployment, ensuring its role as an adjunct to human instruction rather than a replacement (Foltynek et al., 2023). These policies should address data security, accuracy, and content integrity while outlining educators' responsibilities in leveraging AI.
The Australian Government Style Manual contains a section on inclusive language. This resource can help healthcare education teams ensure that any written material, education-related or otherwise, uses language that is culturally appropriate and respectful of the diversity of Australia’s people.
The Future of AI in Healthcare Education
AI's integration into healthcare education presents both opportunities and challenges. Through professional development, educators can build the knowledge and skills needed to use AI effectively and ethically. Collaboration with key stakeholders to develop comprehensive policies will support secure and equitable AI implementation, preserving the critical role of educators in fostering inclusive learning environments.
By intentionally embracing AI, healthcare educators can drive innovation while maintaining content integrity and promoting equitable practices. This approach ensures that AI serves as a constructive tool, empowering healthcare professionals to navigate the complexities of modern healthcare with competence.
References
Agarwal, R., Bjarnadottir, M., Rhue, L., Dugas, M., Crowley, K., Clark, J., & Gao, G. (2023). Addressing algorithmic bias and the perpetuation of health inequities: An AI bias aware framework. Health Policy and Technology, 12(1), 100702. https://doi.org/10.1016/j.hlpt.2022.100702
Christensen, D. M., Manley, J., & Resendez, J. (2021). Medical algorithms are failing communities of color. Health Affairs Forefront. https://doi.org/10.1377/forefront.20210903.976632
Coursera Staff. (2024a). What does AI stand for? Coursera. https://www.coursera.org/articles/what-does-ai-stand-for
Coursera Staff. (2024b). What is generative AI? Definition, applications, and impact. Coursera. https://www.coursera.org/articles/what-is-generative-ai
Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). ENAI recommendations on the ethical use of Artificial Intelligence in education. International Journal for Educational Integrity, 19(12), 1-4. https://doi.org/10.1007/s40979-023-00133-4
Authors
Dr. Jennifer Bodine
Dr. Jennifer Bodine brings over 19 years of diverse nursing experience, including more than 13 years specialising in nursing professional development. She serves as an Education Manager, Accredited Provider Program Director, and appraiser for ANCC’s Nursing Continuing Professional Development (NCPD) accreditation program.
Dr. Bodine holds advanced certification in Nursing Professional Development and a certificate in Artificial Intelligence: Business Strategies and Applications from UC Berkeley. She earned her Doctor of Nursing Practice (DNP) degree from California State University and completed a postdoctoral fellowship with the Fuld Institute for Evidence-Based Practice.
She is the Preceptorship Column Editor for the Journal for Nurses in Professional Development and co-hosts the Disruptor Diaries: Healthcare Education Innovation Unleashed podcast. Additionally, she is the co-founder of J&J ElevatED Consulting, LLC, where she continues to advance innovation in nursing education and leadership.
Jillian Russell
Jillian Russell has 13 years of dynamic nursing professional development experience. Certified in NPD since 2015, she is an experienced Accredited Provider Program Director and appraiser for ANCC’s NCPD accreditation program, co-founder of J&J ElevatED Consulting LLC, and part of the NPD Collaborative co-hosting the Disruptor Diaries: Healthcare Education Innovation Unleashed podcast. Jillian is passionate about using innovative solutions to address complex challenges, streamline processes, and showcase the value of nursing professional development.