{"title":"Performance of ChatGPT-3.5 and ChatGPT-4 in the Taiwan National Pharmacist Licensing Examination: Comparative Evaluation Study.","authors":"Ying-Mei Wang, Hung-Wei Shen, Tzeng-Ji Chen, Shu-Chiung Chiang, Ting-Guan Lin","doi":"10.2196/56850","DOIUrl":"10.2196/56850","url":null,"abstract":"<p><strong>Background: </strong>OpenAI released versions ChatGPT-3.5 and GPT-4 between 2022 and 2023. GPT-3.5 has demonstrated proficiency in various examinations, particularly the United States Medical Licensing Examination. However, GPT-4 has more advanced capabilities.</p><p><strong>Objective: </strong>This study aims to examine the efficacy of GPT-3.5 and GPT-4 within the Taiwan National Pharmacist Licensing Examination and to ascertain their utility and potential application in clinical pharmacy and education.</p><p><strong>Methods: </strong>The pharmacist examination in Taiwan consists of 2 stages: basic subjects and clinical subjects. In this study, exam questions were manually fed into the GPT-3.5 and GPT-4 models, and their responses were recorded; graphic-based questions were excluded. This study encompassed three steps: (1) determining the answering accuracy of GPT-3.5 and GPT-4, (2) categorizing question types and observing differences in model performance across these categories, and (3) comparing model performance on calculation and situational questions. Microsoft Excel and R software were used for statistical analyses.</p><p><strong>Results: </strong>GPT-4 achieved an accuracy rate of 72.9%, overshadowing GPT-3.5, which achieved 59.1% (P<.001). In the basic subjects category, GPT-4 significantly outperformed GPT-3.5 (73.4% vs 53.2%; P<.001). However, in clinical subjects, only minor differences in accuracy were observed. 
Specifically, GPT-4 outperformed GPT-3.5 in the calculation and situational questions.</p><p><strong>Conclusions: </strong>This study demonstrates that GPT-4 outperforms GPT-3.5 in the Taiwan National Pharmacist Licensing Examination, particularly in basic subjects. While GPT-4 shows potential for use in clinical practice and pharmacy education, its limitations warrant caution. Future research should focus on refining prompts, improving model stability, integrating medical databases, and designing questions that better assess student competence and minimize guessing.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e56850"},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11769692/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143047333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
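The headline comparison above (72.9% vs 59.1%, P<.001) is a difference between two proportions and can be checked with a two-proportion z-test. The abstract does not report the total number of scored questions, so the counts below (226/310 and 183/310) are hypothetical values chosen only to match the reported percentages:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts matching the reported 72.9% (GPT-4) and 59.1% (GPT-3.5)
z, p = two_proportion_z_test(226, 310, 183, 310)
```

With these assumed denominators the test already lands well below .001, consistent with the reported significance.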
{"title":"Case-Based Virtual Reality Simulation for Severe Pelvic Trauma Clinical Skill Training in Medical Students: Design and Pilot Study.","authors":"Peng Teng, Youran Xu, Kaoliang Qian, Ming Lu, Jun Hu","doi":"10.2196/59850","DOIUrl":"10.2196/59850","url":null,"abstract":"<p><strong>Background: </strong>Teaching severe pelvic trauma poses a significant challenge in orthopedic surgery education due to the necessity of both clinical reasoning and procedural operational skills for mastery. Traditional methods of instruction, including theoretical teaching and mannequin practice, face limitations due to the complexity, the unpredictability of treatment scenarios, the scarcity of typical cases, and the abstract nature of traditional teaching, all of which impede students' knowledge acquisition.</p><p><strong>Objective: </strong>This study aims to introduce a novel experimental teaching methodology for severe pelvic trauma, integrating virtual reality (VR) technology as a potent adjunct to existing teaching practices. It evaluates the acceptability, perceived ease of use, and perceived usefulness among users and investigates its impact on knowledge, skills, and confidence in managing severe pelvic trauma before and after engaging with the software.</p><p><strong>Methods: </strong>A self-designed questionnaire was distributed to 40 students, and qualitative interviews were conducted with 10 teachers to assess the applicability and acceptability. A 1-group pretest-posttest design was used to evaluate learning outcomes across various domains, including diagnosis and treatment, preliminary diagnosis, disease treatment sequencing, emergency management of hemorrhagic shock, and external fixation of pelvic fractures.</p><p><strong>Results: </strong>A total of 40 students underwent training, with 95% (n=38) affirming that the software effectively simulated real-patient scenarios. 
All participants (n=40, 100%) reported that completing the simulation necessitated making the same decisions as doctors in real life and found the VR simulation interesting and useful. Teacher interviews revealed that 90% (9/10) recognized the VR simulation's ability to replicate complex clinical cases, resulting in enhanced training effectiveness. Notably, there was a significant improvement in the overall scores for managing hemorrhagic shock (t<sub>39</sub>=37.6; 95% CI 43.6-48.6; P<.001) and performing external fixation of pelvic fractures (t<sub>39</sub>=24.1; 95% CI 53.4-63.3; P<.001) from pre- to postsimulation.</p><p><strong>Conclusions: </strong>The case-based VR skill-training methodology introduced here positively influences medical students' clinical reasoning, operative skills, and self-confidence. It offers an efficient strategy for conserving resources while providing quality education for both educators and learners.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e59850"},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11786138/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
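The pre/post score gains above are paired comparisons (each student is measured twice), which is why the abstract reports paired t statistics with df=39. A minimal sketch of the mechanics on invented scores for 8 students (the critical value 2.365 for df=7 is hardcoded, since the Python standard library has no t distribution):

```python
import math

def paired_t(pre, post):
    """Paired t-statistic, degrees of freedom, and a 95% CI for the mean change."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    se = math.sqrt(var / n)
    t = mean / se
    t_crit = 2.365  # two-sided 95% critical value for df=7 (matches n=8 below)
    return t, n - 1, (mean - t_crit * se, mean + t_crit * se)

# Invented pre/post scores for 8 students (the study itself used n=40)
pre  = [42, 38, 45, 40, 36, 44, 39, 41]
post = [88, 85, 90, 87, 82, 91, 86, 89]
t_stat, df, ci = paired_t(pre, post)
```

The CI here is for the mean pre-to-post change, which appears to be how the abstract's intervals (eg, 43.6-48.6) should be read.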
{"title":"Transforming Medical Education to Make Patient Safety Part of the Genome of a Modern Health Care Worker.","authors":"Peter Lachman, John Fitzsimons","doi":"10.2196/68046","DOIUrl":"10.2196/68046","url":null,"abstract":"<p><strong>Unlabelled: </strong>Medical education has not traditionally recognized patient safety as a core subject. To foster a culture of patient safety and enhance psychological safety, it is essential to address the barriers and facilitators that currently impact the development and delivery of medical education curricula. The aim of including patient safety and psychological safety competencies in education curricula is to insert these into the genome of the modern health care worker.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e68046"},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758993/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Evaluation and Implications of Large Language Models in Radiology Board Exams: Prospective Comparative Analysis.","authors":"Boxiong Wei","doi":"10.2196/64284","DOIUrl":"10.2196/64284","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence advancements have enabled large language models to significantly impact radiology education and diagnostic accuracy.</p><p><strong>Objective: </strong>This study evaluates the performance of mainstream large language models, including GPT-4, Claude, Bard, Tongyi Qianwen, and Gemini Pro, in radiology board exams.</p><p><strong>Methods: </strong>A comparative analysis of 150 multiple-choice questions from radiology board exams without images was conducted. Models were assessed on their accuracy for text-based questions and were categorized by cognitive levels and medical specialties using χ2 tests and ANOVA.</p><p><strong>Results: </strong>GPT-4 achieved the highest accuracy (83.3%, 125/150), significantly outperforming all other models. Specifically, Claude achieved an accuracy of 62% (93/150; P<.001), Bard 54.7% (82/150; P<.001), Tongyi Qianwen 70.7% (106/150; P=.009), and Gemini Pro 55.3% (83/150; P<.001). The odds ratios compared to GPT-4 were 0.33 (95% CI 0.18-0.60) for Claude, 0.24 (95% CI 0.13-0.44) for Bard, and 0.25 (95% CI 0.14-0.45) for Gemini Pro. Tongyi Qianwen performed relatively well with an accuracy of 70.7% (106/150; P=0.02) and had an odds ratio of 0.48 (95% CI 0.27-0.87) compared to GPT-4. Performance varied across question types and specialties, with GPT-4 excelling in both lower-order and higher-order questions, while Claude and Bard struggled with complex diagnostic questions.</p><p><strong>Conclusions: </strong>GPT-4 and Tongyi Qianwen show promise in medical education and training. 
The study emphasizes the need for domain-specific training datasets to enhance large language models' effectiveness in specialized fields like radiology.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e64284"},"PeriodicalIF":3.2,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11756834/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
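Because each model answered the same 150 questions, the odds ratios reported above can be recomputed directly from the correct/incorrect counts in the abstract — eg, Claude's odds are 93/57 against GPT-4's 125/25:

```python
def odds_ratio(correct_a, n_a, correct_b, n_b):
    """Odds of model A answering correctly, relative to model B."""
    odds_a = correct_a / (n_a - correct_a)
    odds_b = correct_b / (n_b - correct_b)
    return odds_a / odds_b

N = 150          # questions per model, from the abstract
gpt4 = 125       # GPT-4 correct answers (83.3%)

# Odds ratios relative to GPT-4, using the abstract's counts
or_claude = odds_ratio(93, N, gpt4, N)   # reported as 0.33
or_bard   = odds_ratio(82, N, gpt4, N)   # reported as 0.24
or_tongyi = odds_ratio(106, N, gpt4, N)  # reported as 0.48
or_gemini = odds_ratio(83, N, gpt4, N)   # reported as 0.25
```

All four point estimates round to the values in the abstract, confirming the internal consistency of the reported figures.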
{"title":"A Brief Web-Based Person-Centered Care Group Training Program for the Management of Generalized Anxiety Disorder: Feasibility Randomized Controlled Trial in Spain.","authors":"Vanesa Ramos-García, Amado Rivero-Santana, Wenceslao Peñate-Castro, Yolanda Álvarez-Pérez, Andrea Duarte-Díaz, Alezandra Torres-Castaño, María Del Mar Trujillo-Martín, Ana Isabel González-González, Pedro Serrano-Aguilar, Lilisbeth Perestelo-Pérez","doi":"10.2196/50060","DOIUrl":"10.2196/50060","url":null,"abstract":"<p><strong>Background: </strong>Shared decision-making (SDM) is a crucial aspect of patient-centered care. While several SDM training programs for health care professionals have been developed, evaluation of their effectiveness is scarce, especially in mental health disorders such as generalized anxiety disorder.</p><p><strong>Objective: </strong>This study aims to assess the feasibility and impact of a brief training program on the attitudes toward SDM among primary care professionals who attend to patients with generalized anxiety disorder.</p><p><strong>Methods: </strong>A feasibility randomized controlled trial was conducted. Health care professionals recruited in primary care centers were randomized to an intervention group (training program) or a control group (waiting list). The intervention consisted of 2 web-based sessions applied by 2 psychologists (VR and YA), based on the integrated elements of the patient-centered care model and including group dynamics and video viewing. The outcome variable was the Leeds Attitudes Towards Concordance scale, second version (LATCon II), assessed at baseline and after the second session (3 months). After the randomized controlled trial phase, the control group also received the intervention and was assessed again.</p><p><strong>Results: </strong>Among 28 randomized participants, 5 withdrew before the baseline assessment. 
The intervention group showed significantly greater score increases than the control group on the total scale (b=0.57; P=.018) and 2 subscales: communication or empathy (b=0.74; P=.036) and shared control (ie, patient participation in decisions; b=0.68; P=.040). The control group also showed significant pre-post changes after receiving the intervention.</p><p><strong>Conclusions: </strong>For a future effectiveness trial, it is necessary to improve the recruitment and retention strategies. The program produced a significant improvement in participants' attitude toward the SDM model, but due to this study's limitations, mainly the small sample size, more research is warranted.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e50060"},"PeriodicalIF":3.2,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11756839/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of an Interdisciplinary Educational Program to Foster Learning Health Systems: Education Evaluation.","authors":"Sathana Dushyanthen, Nadia Izzati Zamri, Wendy Chapman, Daniel Capurro, Kayley Lyons","doi":"10.2196/54152","DOIUrl":"10.2196/54152","url":null,"abstract":"<p><strong>Background: </strong>Learning health systems (LHS) have the potential to use health data in real time through rapid and continuous cycles of data interrogation, implementing insights to practice, feedback, and practice change. However, there is a lack of an appropriately skilled interprofessional informatics workforce that can leverage knowledge to design innovative solutions. Therefore, there is a need to develop tailored professional development training in digital health, to foster skilled interprofessional learning communities in the health care workforce in Australia.</p><p><strong>Objective: </strong>This study aimed to explore participants' experiences and perspectives of participating in an interprofessional education program over 13 weeks. The evaluation also aimed to assess the benefits, barriers, and opportunities for improvements and identify future applications of the course materials.</p><p><strong>Methods: </strong>We developed a wholly online short course open to interdisciplinary professionals working in digital health in the health care sector. In a flipped classroom model, participants (n=400) undertook 2 hours of preclass learning online and then attended 2.5 hours of live synchronous learning in interactive weekly Zoom workshops for 13 weeks. Throughout the course, they collaborated in small, simulated learning communities (n=5 to 8), engaging in various activities and problem-solving exercises, contributing their unique perspectives and diverse expertise. 
The course covered a number of topics, including background on LHS, establishing learning communities, the design thinking process, data preparation and machine learning analysis, process modeling, clinical decision support, remote patient monitoring, evaluation, implementation, and digital transformation. To evaluate the program, we undertook a mixed methods evaluation consisting of pre- and postsurvey rating scales for usefulness, engagement, value, and applicability for various aspects of the course. Participants also completed identical measures of self-efficacy before and after (n=200), with scales mapped to specific skills and tasks that should have been achievable following each of the topics covered. Further, they undertook voluntary weekly surveys to provide feedback on which aspects to continue and recommendations for improvement via free-text responses.</p><p><strong>Results: </strong>From the evaluation, it was evident that participants found the teaching model engaging, useful, valuable, and applicable to their work. In the self-efficacy component, we observed a significant increase (P<.001) in perceived confidence for all topics, when comparing pre- and postcourse ratings. 
Overall, it was evident that the program gave participants a framework to organize their knowledge and a common understanding and shared language to converse with other disciplines, changed the way they perceived their role and the possibilities of data and technology.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e54152"},"PeriodicalIF":3.2,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11757970/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Factors Associated With the Accuracy of Large Language Models in Basic Medical Science Examinations: Cross-Sectional Study.","authors":"Naritsaret Kaewboonlert, Jiraphon Poontananggul, Natthipong Pongsuwan, Gun Bhakdisongkhram","doi":"10.2196/58898","DOIUrl":"10.2196/58898","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) has become widely applied across many fields, including medical education. Content validation and its answers are based on training datasets and the optimization of each model. The accuracy of large language model (LLMs) in basic medical examinations and factors related to their accuracy have also been explored.</p><p><strong>Objective: </strong>We evaluated factors associated with the accuracy of LLMs (GPT-3.5, GPT-4, Google Bard, and Microsoft Bing) in answering multiple-choice questions from basic medical science examinations.</p><p><strong>Methods: </strong>We used questions that were closely aligned with the content and topic distribution of Thailand's Step 1 National Medical Licensing Examination. Variables such as the difficulty index, discrimination index, and question characteristics were collected. These questions were then simultaneously input into ChatGPT (with GPT-3.5 and GPT-4), Microsoft Bing, and Google Bard, and their responses were recorded. The accuracy of these LLMs and the associated factors were analyzed using multivariable logistic regression. This analysis aimed to assess the effect of various factors on model accuracy, with results reported as odds ratios (ORs).</p><p><strong>Results: </strong>The study revealed that GPT-4 was the top-performing model, with an overall accuracy of 89.07% (95% CI 84.76%-92.41%), significantly outperforming the others (P<.001). Microsoft Bing followed with an accuracy of 83.69% (95% CI 78.85%-87.80%), GPT-3.5 at 67.02% (95% CI 61.20%-72.48%), and Google Bard at 63.83% (95% CI 57.92%-69.44%). 
The multivariable logistic regression analysis showed a correlation between question difficulty and model performance, with GPT-4 demonstrating the strongest association. Interestingly, no significant correlation was found between model accuracy and question length, negative wording, clinical scenarios, or the discrimination index for most models, except for Google Bard, which showed varying correlations.</p><p><strong>Conclusions: </strong>The GPT-4 and Microsoft Bing models demonstrated comparably high accuracy, superior to that of GPT-3.5 and Google Bard, in the domain of basic medical science. The accuracy of these models was significantly influenced by the item's difficulty index, indicating that the LLMs are more accurate when answering easier questions. This suggests that the more accurate models, such as GPT-4 and Bing, can be valuable tools for understanding and learning basic medical science concepts.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e58898"},"PeriodicalIF":3.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11745146/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143024939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
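The central finding in the record above — items with a higher difficulty index (ie, easier items, since the index is the proportion of examinees answering correctly) are more likely to be answered correctly by an LLM — corresponds to a logistic regression of per-item correctness on the difficulty index, with the coefficient exponentiated into an OR. The toy dataset and plain gradient-ascent fit below are illustrative only, not the study's data or code:

```python
import math

def fit_logistic(xs, ys, lr=0.5, iters=20000):
    """Fit y ~ sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))  # predicted P(correct)
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Illustrative items: difficulty index (higher = easier) and whether the
# model answered the item correctly (1) or not (0)
difficulty = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
correct    = [0,   0,   0,   1,   0,   1,   1,   1]

b0, b1 = fit_logistic(difficulty, correct)
odds_ratio_per_unit = math.exp(b1)  # OR for a 1-unit increase in the index
```

A positive slope (OR > 1) reproduces the direction of the study's result: the easier the item, the higher the odds the model answers it correctly.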
{"title":"Awareness and Attitude Toward Artificial Intelligence Among Medical Students and Pathology Trainees: Survey Study.","authors":"Anwar Rjoop, Mohammad Al-Qudah, Raja Alkhasawneh, Nesreen Bataineh, Maram Abdaljaleel, Moayad A Rjoub, Mustafa Alkhateeb, Mohammad Abdelraheem, Salem Al-Omari, Omar Bani-Mari, Anas Alkabalan, Saoud Altulaih, Iyad Rjoub, Rula Alshimi","doi":"10.2196/62669","DOIUrl":"10.2196/62669","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) is set to shape the future of medical practice. The perspective and understanding of medical students are critical for guiding the development of educational curricula and training.</p><p><strong>Objective: </strong>This study aims to assess and compare medical AI-related attitudes among medical students in general medicine and in one of the visually oriented fields (pathology), along with illuminating their anticipated role of AI in the rapidly evolving landscape of AI-enhanced health care.</p><p><strong>Methods: </strong>This was a cross-sectional study that used a web-based survey composed of a closed-ended questionnaire. The survey addressed medical students at all educational levels across the 5 public medical schools, along with pathology residents in 4 residency programs in Jordan.</p><p><strong>Results: </strong>A total of 394 respondents participated (328 medical students and 66 pathology residents). The majority of respondents (272/394, 69%) were already aware of AI and deep learning in medicine, mainly relying on websites for information on AI, while only 14% (56/394) were aware of AI through medical schools. There was a statistically significant difference in awareness among respondents who consider themselves tech experts compared with those who do not (P=.03). More than half of the respondents believed that AI could be used to diagnose diseases automatically (213/394, 54.1% agreement), with medical students agreeing more than pathology residents (P=.04). 
However, more than one-third expressed fear about recent AI developments (167/394, 42.4% agreed). Two-thirds of respondents disagreed that their medical schools had educated them about AI and its potential use (261/394, 66.2% disagreed), while 46.2% (182/394) expressed interest in learning about AI in medicine. In terms of pathology-specific questions, 75.4% (297/394) agreed that AI could be used to identify pathologies in slide examinations automatically. There was a significant difference between medical students and pathology residents in their agreement (P=.001). Overall, medical students and pathology trainees had similar responses.</p><p><strong>Conclusions: </strong>AI education should be introduced into medical school curricula to improve medical students' understanding and attitudes. Students agreed that they need to learn about AI's applications, potential hazards, and legal and ethical implications. This is the first study to analyze medical students' views and awareness of AI in Jordan, as well as the first to include pathology residents' perspectives. The findings are consistent with earlier research internationally. 
Consistent with prior research, these attitudes are similar across low-income and industrialized countries, highlighting the need for a global strategy for introducing AI instruction to medical students in this era of rapidly expanding technology.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e62669"},"PeriodicalIF":3.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11741511/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142972439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital Dentists: A Curriculum for the 21st Century.","authors":"Michelle Mun, Samantha Byrne, Louise Shaw, Kayley Lyons","doi":"10.2196/54153","DOIUrl":"10.2196/54153","url":null,"abstract":"<p><strong>Unlabelled: </strong>Future health professionals, including dentists, must critically engage with digital health technologies to enhance patient care. While digital health is increasingly being integrated into the curricula of health professions, its interpretation varies widely depending on the discipline, health care setting, and local factors. This viewpoint proposes a structured set of domains to guide the designing of a digital health curriculum tailored to the unique needs of dentistry in Australia. The paper aims to share a premise for curriculum development that aligns with the current evidence and the national digital health strategy, serving as a foundation for further discussion and implementation in dental programs.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e54153"},"PeriodicalIF":3.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11735848/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142956197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Medical Student Engagement Through Cinematic Clinical Narratives: Multimodal Generative AI-Based Mixed Methods Study.","authors":"Tyler Bland","doi":"10.2196/63865","DOIUrl":"10.2196/63865","url":null,"abstract":"<p><strong>Background: </strong>Medical students often struggle to engage with and retain complex pharmacology topics during their preclinical education. Traditional teaching methods can lead to passive learning and poor long-term retention of critical concepts.</p><p><strong>Objective: </strong>This study aims to enhance the teaching of clinical pharmacology in medical school by using a multimodal generative artificial intelligence (genAI) approach to create compelling, cinematic clinical narratives (CCNs).</p><p><strong>Methods: </strong>We transformed a standard clinical case into an engaging, interactive multimedia experience called \"Shattered Slippers.\" This CCN used various genAI tools for content creation: GPT-4 for developing the storyline, Leonardo.ai and Stable Diffusion for generating images, Eleven Labs for creating audio narrations, and Suno for composing a theme song. The CCN integrated narrative styles and pop culture references to enhance student engagement. It was applied in teaching first-year medical students about immune system pharmacology. Student responses were assessed through the Situational Interest Survey for Multimedia and examination performance. The target audience comprised first-year medical students (n=40), with 18 responding to the Situational Interest Survey for Multimedia survey (n=18).</p><p><strong>Results: </strong>The study revealed a marked preference for the genAI-enhanced CCNs over traditional teaching methods. 
Key findings include the majority of surveyed students preferring the CCN over traditional clinical cases (14/18), as well as high average scores for triggered situational interest (mean 4.58, SD 0.53), maintained interest (mean 4.40, SD 0.53), maintained-feeling interest (mean 4.38, SD 0.51), and maintained-value interest (mean 4.42, SD 0.54). Students achieved an average score of 88% on examination questions related to the CCN material, indicating successful learning and retention. Qualitative feedback highlighted increased engagement, improved recall, and appreciation for the narrative style and pop culture references.</p><p><strong>Conclusions: </strong>This study demonstrates the potential of using a multimodal genAI-driven approach to create CCNs in medical education. The \"Shattered Slippers\" case effectively enhanced student engagement and promoted knowledge retention in complex pharmacological topics. This innovative method suggests a novel direction for curriculum development that could improve learning outcomes and student satisfaction in medical education. Future research should explore the long-term retention of knowledge and the applicability of learned material in clinical settings, as well as the potential for broader implementation of this approach across various medical education contexts.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e63865"},"PeriodicalIF":3.2,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11751740/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142956201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}