Leveraging Large Language Models for Simulated Psychotherapy Client Interactions: Development and Usability Study of Client101
Daniel Cabrera Lozoya, Mike Conway, Edoardo Sebastiano De Duro, Simon D'Alfonso
JMIR Medical Education. 2025;11:e68056. doi: 10.2196/68056. Published July 31, 2025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12312989/pdf/

Background: In recent years, large language models (LLMs) have shown a remarkable ability to generate human-like text. One potential application of this capability is using LLMs to simulate clients in a mental health context. This research presents the development and evaluation of Client101, a web conversational platform featuring LLM-driven chatbots designed to simulate mental health clients.

Objective: We aim to develop and test a web-based conversational psychotherapy training tool designed to closely resemble clients with mental health issues.

Methods: We used GPT-4 and prompt engineering techniques to develop chatbots that simulate realistic client conversations. Two chatbots were created based on clinical vignette cases: one representing a person with depression and the other, a person with generalized anxiety disorder. A total of 16 mental health professionals were instructed to conduct single sessions with the chatbots using a cognitive behavioral therapy framework; 15 sessions with the anxiety chatbot and 14 with the depression chatbot were completed. After each session, participants completed a 19-question survey assessing the chatbot's ability to simulate the mental health condition and its potential as a training tool. Additionally, we used the LIWC (Linguistic Inquiry and Word Count) tool to analyze the psycholinguistic features of the chatbot conversations related to anxiety and depression. These features were compared with those of a set of webchat psychotherapy sessions with human clients (42 sessions related to anxiety and 47 related to depression) using an independent samples t test.

Results: Participants' survey responses were predominantly positive regarding the chatbots' realism and portrayal of mental health conditions. For instance, 93% (14/15) considered that the chatbot provided a coherent and convincing narrative typical of someone with an anxiety condition. The statistical analysis of LIWC psycholinguistic features revealed significant differences between chatbot and human therapy transcripts for 3 of 8 anxiety-related features: negations (t56=4.03, P=.001), family (t56=-8.62, P=.001), and negative emotions (t56=-3.91, P=.002). The remaining 5 features (sadness, personal pronouns, present focus, social, and anger) did not show significant differences. For depression-related features, 4 of 9 showed significant differences: negative emotions (t60=-3.84, P=.003), feeling (t60=-6.40, P<.001), health (t60=-4.13, P=.001), and illness (t60=-5.52, P<.001). The other 5 features (sadness, anxiety, mental, first-person pronouns, and discrepancy) did not show statistically significant differences.

Conclusions: This research underscores both the strengths and limitations of using GPT-4-powered chatbots as tools for psychotherapy training. Participant feedback suggests that the chatbots effectively portray mental health …
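The Client101 study compares LIWC feature scores between chatbot and human transcripts with an independent samples t test. As a rough illustration of the pooled-variance form of that statistic, here is a minimal pure-Python sketch; the two score lists below are invented placeholders, not the study's data, and the reported degrees of freedom (eg, t56) simply reflect the two sample sizes minus 2.

```python
import math

def pooled_t_test(a, b):
    """Two-sample t statistic with pooled variance (equal-variance
    independent samples t test); returns (t, degrees of freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical LIWC "negations" scores per transcript (illustrative only)
chatbot_scores = [2.1, 1.8, 2.4, 2.0, 1.9]
human_scores = [1.2, 1.5, 1.1, 1.4, 1.3, 1.0]
t, df = pooled_t_test(chatbot_scores, human_scores)
```

In practice one would obtain the P value from the t distribution with df degrees of freedom (eg, via scipy.stats); the sketch stops at the statistic itself.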
{"title":"Resident Physician Recognition of Tachypnea in Clinical Simulation Videos in Japan: Cross-Sectional Study.","authors":"Kiyoshi Shikino, Yuji Nishizaki, Sho Fukui, Koshi Kataoka, Daiki Yokokawa, Taro Shimizu, Yu Yamamoto, Kazuya Nagasaki, Hiroyuki Kobayashi, Yasuharu Tokuda","doi":"10.2196/72640","DOIUrl":"10.2196/72640","url":null,"abstract":"<p><strong>Background: </strong>Traditional assessments of clinical competence using multiple-choice questions (MCQs) have limitations in the evaluation of real-world diagnostic abilities. As such, recognizing non-verbal cues, like tachypnea, is crucial for accurate diagnosis and effective patient care.</p><p><strong>Objective: </strong>This study aimed to evaluate how detecting such cues impacts the clinical competence of resident physicians by using a clinical simulation video integrated into the General Medicine In-Training Examination (GM-ITE).</p><p><strong>Methods: </strong>This multicenter cross-sectional study enrolled first- and second-year resident physicians who participated in the GM-ITE 2022. Participants watched a 5-minute clinical simulation video depicting a patient with acute pulmonary thromboembolism, and subsequently answered diagnostic questions. Propensity score matching was applied to create balanced groups of resident physicians who detected tachypnea (ie, the detection group) and those who did not (ie, the non-detection group). After matching, we compared the GM-ITE scores and the proportion of correct clinical simulation video answers between the two groups. Subgroup analyses assessed the consistency between results.</p><p><strong>Results: </strong>In total, 5105 resident physicians were included, from which 959 pairs were identified after the clinical simulation video. Covariates were well balanced between the detection and non-detection groups (standardized mean difference <0.1 for all variables). 
Post-matching, the detection group achieved significantly higher GM-ITE scores (mean [SD], 47.6 [8.4]) than the non-detection group (mean [SD], 45.7 [8.1]; mean difference, 1.9; 95% CI, 1.1-2.6; P=.041). The proportion of correct clinical simulation video answers was also significantly higher in the detection group (39.2% vs 3.0%; mean difference, 36.2%; 95% CI, 32.8-39.4). Subgroup analyses confirmed consistent results across sex, postgraduate years, and age groups.</p><p><strong>Conclusions: </strong>Overall, this study revealed that detecting non-verbal cues like tachypnea significantly affects clinical competence, as evidenced by higher GM-ITE scores among resident physicians. Integrating video-based simulations into traditional MCQ examinations enhances the assessment of diagnostic skills by providing a more comprehensive evaluation of clinical abilities. Thus, recognizing non-verbal cues is crucial for clinical competence. Video-based simulations offer a valuable addition to traditional knowledge assessments by improving the diagnostic skills and preparedness of clinicians.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e72640"},"PeriodicalIF":3.2,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12313080/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144761576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
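The tachypnea study pairs detectors with non-detectors via propensity score matching and then checks covariate balance with standardized mean differences (target <0.1). Below is a minimal sketch of those two ingredients, assuming propensity scores have already been estimated (eg, by logistic regression); the scores, the 0.05 caliper, and the greedy nearest-neighbor strategy are illustrative choices, not details taken from the paper.

```python
import statistics

def greedy_match(treated, control, caliper=0.05):
    """Pair each treated unit with the nearest unmatched control whose
    propensity score lies within the caliper; returns (treated, control)
    index pairs. Unmatchable treated units are simply dropped."""
    pairs, used = [], set()
    for i, pt in enumerate(treated):
        best, best_d = None, caliper
        for j, pc in enumerate(control):
            if j not in used and abs(pt - pc) <= best_d:
                best, best_d = j, abs(pt - pc)
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs

def smd(a, b):
    """Standardized mean difference for a covariate balance check."""
    pooled_sd = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Toy propensity scores: two treated units find close controls;
# the third (0.95) has no control within the caliper and is dropped.
pairs = greedy_match([0.30, 0.62, 0.95], [0.31, 0.60, 0.80])
```

After matching, one would compute smd() for each covariate across the matched samples and confirm all values fall below 0.1, as the study reports.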
A Large-Scale Multispecialty Evaluation of Web-Based Simulation in Medical Microbiology Laboratory Education: Randomized Controlled Trial
Lei Xu, Xichuan Deng, Tingting Chen, Nan Lu, Yuran Wang, Jia Liu, Yanan Guo, Zeng Tu, Yuxin Nie, Yeganeh Hosseini, Yonglin He
JMIR Medical Education. 2025;11:e72495. doi: 10.2196/72495. Published July 30, 2025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12310184/pdf/

Background: Traditional laboratory teaching of pathogenic cocci faces challenges in biosafety and standardization across medical specialties. While virtual simulation shows promise, evidence from large-scale, multidisciplinary studies remains limited.

Objective: The study aims to evaluate the effectiveness of integrating virtual simulation with traditional laboratory practice in enhancing medical microbiology education, focusing on the identification of biosafety level 2 pathogenic cocci. The study assessed improvements in student performance, theoretical understanding, laboratory safety, and overall satisfaction, while achieving standardization and cost reduction across multiple medical specialties.

Methods: This randomized controlled trial involved 1282 medical students from 9 specialties. The experimental group (n=653) received virtual simulation training (featuring interactivity and intelligent feedback) prior to traditional laboratory practice, while the control group (n=629) did not receive such training. Our virtual system focused on biosafety level 2 pathogenic cocci identification with dynamic specimen generation.

Results: The experimental group showed significantly improved performance across specialties (P<.05 for each specialty), particularly in clinical medicine, in which the experimental group score was 89.88 (SD 13.09) and the control group score was 68.34 (SD 17.23; P<.001). The students reported that virtual simulation enhanced their theoretical understanding (1268/1282, 98.9%) and laboratory safety (1164/1282, 90.8%) while helping them achieve standardization (790/1282, 61.6%) and cost reduction (957/1282, 74.6%). Overall student satisfaction reached 97.2% (1246/1282), with distinct learning patterns observed across specialties. Overall test scores were significantly higher in the experimental group, with a mean of 80.82 (SD 17.10), than in the control group, with a mean of 67.45 (SD 16.81).

Conclusions: This large-scale study demonstrates that integrating virtual simulation with traditional methods effectively enhances medical microbiology education, providing a standardized, safe, and cost-effective approach for teaching high-risk pathogenic experiments.
{"title":"Role of Artificial Intelligence in Surgical Training by Assessing GPT-4 and GPT-4o on the Japan Surgical Board Examination With Text-Only and Image-Accompanied Questions: Performance Evaluation Study.","authors":"Hiroki Maruyama, Yoshitaka Toyama, Kentaro Takanami, Kei Takase, Takashi Kamei","doi":"10.2196/69313","DOIUrl":"10.2196/69313","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence and large language models (LLMs)-particularly GPT-4 and GPT-4o-have demonstrated high correct-answer rates in medical examinations. GPT-4o has enhanced diagnostic capabilities, advanced image processing, and updated knowledge. Japanese surgeons face critical challenges, including a declining workforce, regional health care disparities, and work-hour-related challenges. Nonetheless, although LLMs could be beneficial in surgical education, no studies have yet assessed GPT-4o's surgical knowledge or its performance in the field of surgery.</p><p><strong>Objective: </strong>This study aims to evaluate the potential of GPT-4 and GPT-4o in surgical education by using them to take the Japan Surgical Board Examination (JSBE), which includes both textual questions and medical images-such as surgical and computed tomography scans-to comprehensively assess their surgical knowledge.</p><p><strong>Methods: </strong>We used 297 multiple-choice questions from the 2021-2023 JSBEs. The questions were in Japanese, and 104 of them included images. First, the GPT-4 and GPT-4o responses to only the textual questions were collected via OpenAI's application programming interface to evaluate their correct-answer rate. Subsequently, the correct-answer rate of their responses to questions that included images was assessed by inputting both text and images.</p><p><strong>Results: </strong>The overall correct-answer rates of GPT-4o and GPT-4 for the text-only questions were 78% (231/297) and 55% (163/297), respectively, with GPT-4o outperforming GPT-4 by 23% (P=<.01). 
By contrast, there was no significant improvement in the correct-answer rate for questions that included images compared with the results for the text-only questions.</p><p><strong>Conclusions: </strong>GPT-4o outperformed GPT-4 on the JSBE. However, the results of the LLMs were lower than those of the examinees. Despite the capabilities of LLMs, image recognition remains a challenge for them, and their clinical application requires caution owing to the potential inaccuracy of their results.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e69313"},"PeriodicalIF":3.2,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12310146/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144754632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
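The JSBE study reports a significant gap between GPT-4o's and GPT-4's correct-answer rates (231/297 vs 163/297). The abstract does not state which test produced its P value; one common way to compare two correct-answer rates like these is a pooled two-proportion z test, sketched below as an illustration rather than a reproduction of the paper's analysis.

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for the difference between two proportions, using the
    pooled proportion under the null hypothesis of equal rates."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error under H0
    return (p1 - p2) / se

# Correct-answer counts reported in the abstract: GPT-4o 231/297, GPT-4 163/297
z = two_proportion_z(231, 297, 163, 297)
```

A z statistic this far above the ~2.58 threshold for P<.01 is consistent with the significance level the abstract reports.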
Cardiac Implantable Electronic Device Educational Application for Cardiac Anesthesiology Trainees: Tutorial on App Development
Ahmed Zaky, Aisha Waheed, Brittany Hatter, Srilakshmi Malempati, Sai Hemanth Maremalla, Ragib Hasan, Yuliang Zheng, Scott Snyder
JMIR Medical Education. 2025;11:e60087. doi: 10.2196/60087. Published July 29, 2025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12306915/pdf/

Despite the exposure of cardiothoracic anesthesiology trainees to patients with cardiac implantable electronic devices (CIEDs), there is a paucity of formal curricula on this subject. Major impediments to educating cardiothoracic anesthesiology trainees on CIEDs include busy clinical schedules, short staffing, inconsistent trainee exposure to CIEDs, the multiplicity of vendors, and the "millennial" mentality of the new generation of learners. As a result, cardiothoracic anesthesiology trainees graduating from their residency and fellowship programs may lack the competency to manage patients with CIEDs. Herein, we report our systematic approach to designing, validating, mapping, evaluating, and delivering a CIED curriculum on the first mobile app of its kind on this subject. Development of the CIED curriculum proceeded through the Kern 6-step approach: problem identification, determining and prioritizing content, writing goals and objectives, selecting instructional strategies, implementation of the material, and evaluation and application of lessons learned. This was followed by delivery of the curriculum in the form of a user-study app and an administrator-type app, with functionality for assessing learners' gains, experience, and satisfaction, as well as enabling administrators to update the educational content based on learner feedback and emerging technology. As such, the CIED app allows asynchronous learning at the learners' own pace and, through a multiplicity of educational materials, helps learners digest this complex and understudied subject. We report on the pilot phase of the project, which benefited from the experience of a multidisciplinary team of anesthesiologists, computer scientists, and educators.
{"title":"Alignment Between Classroom Education and Clinical Practice of Root Canal Treatment Among Dental Practitioners in China: Cross-Sectional Study.","authors":"XinYue Ma, JingShi Huang","doi":"10.2196/65534","DOIUrl":"10.2196/65534","url":null,"abstract":"<p><strong>Background: </strong>This cross-sectional study assessed the perceived alignment between preclinical education and clinical practice in root canal treatment (RCT) among dental practitioners in China, aiming to identify systemic gaps in dental curricula and their clinical implications.</p><p><strong>Objective: </strong>Dental professionals in Eastern Coastal China. This study distributed questionnaires through hospital dental specialties and medical forums, covering the Southeastern Region of China.</p><p><strong>Methods: </strong>A validated, web-based survey was distributed to 90 dental professionals in Eastern Coastal China, focusing on 9 key stages of RCT, preoperative preparation, intraoperative procedures, postoperative care, and clinician-patient communication. Responses were measured using a 7-point Likert scale to evaluate perceived discrepancies between education and clinical practice.</p><p><strong>Results: </strong>A total of 83 valid questionnaires were recovered, which revealed significant disparities between academic training and clinical demands. The survey showed that the specialized practitioners identified pronounced mismatches in RCT operative techniques and doctor-patient communication (P<.05). Participants aged ≤29 years demonstrated heightened awareness of discrepancies in disinfection protocols and temporary filling procedures (P<.05). Shanghai-trained practitioners reported fewer educational-clinical gaps across multiple procedural stages (P<.05). Notably, 82% of respondents rated comprehensive RCT implementation as more challenging than individual procedural components. 
Curriculum deficiencies were identified in treatment indication diagnostics (56.6% agreement) and communication training (43.4% agreement). Emerging technologies like virtual reality and augmented reality (VR and AR) showed minimal educational penetration (3.7% exposure rate). In the free-response section, qualitative feedback highlighted equipment accessibility issues (eg, thermal gutta-percha tools) and instructor-dependent learning outcomes.</p><p><strong>Conclusions: </strong>Structural discrepancies exist in Chinese preclinical RCT education, influenced by factors such as experience level, age, and region. These findings underscore the need for curriculum reforms, emphasizing competency-based training, enhanced simulation technologies, and standardized clinical protocols, particularly in areas like periodontal pathology and communication skills.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e65534"},"PeriodicalIF":3.2,"publicationDate":"2025-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12306950/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144745363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applications of Artificial Intelligence in Psychiatry and Psychology Education: Scoping Review
Julien Prégent, Van-Han-Alex Chung, Inès El Adib, Marie Désilets, Alexandre Hudon
JMIR Medical Education. 2025;11:e75238. doi: 10.2196/75238. Published July 28, 2025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12340458/pdf/

Background: Artificial intelligence (AI) is increasingly integrated into health care, including psychiatry and psychology. In educational contexts, AI offers new possibilities for enhancing clinical reasoning, personalizing content delivery, and supporting professional development. Despite this emerging interest, a comprehensive understanding of how AI is currently used in mental health education, and the challenges associated with its adoption, remains limited.

Objective: This scoping review aimed to identify and characterize current applications of AI in the teaching and learning of psychiatry and psychology. It also sought to document reported facilitators of and barriers to the integration of AI within educational contexts.

Methods: A systematic search was conducted across 6 electronic databases (MEDLINE, PubMed, Embase, PsycINFO, EBM Reviews, and Google Scholar) from inception to October 2024. The review followed Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Studies were included if they focused on psychiatry or psychology, described the use of an AI tool, and discussed at least 1 facilitator of or barrier to its use in education. Data were extracted on study characteristics, population, AI application, educational outcomes, facilitators, and barriers. Study quality was appraised using several design-appropriate tools.

Results: From 6219 records, 10 (0.2%) studies met the inclusion criteria. Eight categories of AI applications were identified: clinical decision support, educational content creation, therapeutic tools and mental health monitoring, administrative and research assistance, natural language processing (NLP), program/policy development, students' study aid, and professional development. Key facilitators included the availability of AI tools, positive learner attitudes, digital infrastructure, and time-saving features. Barriers included limited AI training, ethical concerns, lack of digital literacy, algorithmic opacity, and insufficient curricular integration. The overall methodological quality of included studies was moderate to high.

Conclusions: AI is being used across a range of educational functions in psychiatry and psychology, from clinical training to assessment and administrative support. Although the potential for enhancing learning outcomes is clear, its successful integration requires addressing ethical, technical, and pedagogical barriers. Future efforts should focus on AI literacy, faculty development, and institutional policies to guide responsible and effective use. This review underscores the importance of interdisciplinary collaboration to ensure the safe, equitable, and meaningful adoption of AI in mental health education.
Quo vadis, "AI-empowered Doctor"?
Gary Takahashi, Laurentius von Liechti, Ebrahim Tarshizi
JMIR Medical Education. doi: 10.2196/70079. Published July 25, 2025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12356520/pdf/

In the first decade of this century, physicians maintained considerable professional autonomy, enabling discretionary evaluation and adoption of new technologies according to individual practice requirements. The past decade, however, has witnessed significant restructuring of medical practice patterns, with most physicians transitioning to employed status. Concurrently, technological advances and other incentives drove the implementation of electronic systems into the clinic, which these physicians were compelled to integrate. Health care practitioners have now been introduced to applications based on large language models, driven largely by artificial intelligence (AI) developers as well as established electronic health record (EHR) vendors eager to incorporate these innovations. While generative AI assistance promises enhanced clinical efficiency and diagnostic precision, its rapid advancement may redefine clinical provider roles and transform workflows; it has already altered expectations of physician productivity and introduced unprecedented liability considerations. Recognizing the input of physicians and other clinical stakeholders at this nascent stage of AI integration is essential, and doing so requires a more comprehensive understanding of AI as a sophisticated clinical tool. Accordingly, we advocate for its systematic incorporation into the standard medical curriculum.
{"title":"Evaluating Tailored Learning Experiences in Emergency Residency Training Through a Comparative Analysis of Mobile-Based Programs Versus Paper- and Web-Based Approaches: Feasibility Cross-Sectional Questionnaire Study.","authors":"Hsin-Ling Chen, Chen-Wei Lee, Chia-Wen Chang, Yi-Ching Chiu, Tzu-Yao Hung","doi":"10.2196/57216","DOIUrl":"10.2196/57216","url":null,"abstract":"<p><strong>Background: </strong>In the rapidly changing realm of medical education, Competency-Based Medical Education is emerging as a crucial framework to ensure residents acquire essential competencies efficiently. The advent of mobile-based platforms is seen as a pivotal shift from traditional educational methods, offering more dynamic and accessible learning options. This research aims to evaluate the effectiveness of mobile-based apps in emergency residency programs compared with the traditional paper- and web-based formats. Specifically, it focuses on analyzing their roles in facilitating immediate feedback, tracking educational progress, and personalizing the learning journey to meet the unique needs of each resident.</p><p><strong>Objective: </strong>This study aimed to compare mobile-based emergency residency training programs with paper- and web-based (programs regarding competency-based medical education core elements.</p><p><strong>Methods: </strong>A cross-sectional web-based survey (Nov 2022-Jan 2023) across 23 Taiwanese emergency residency sites used stratified random sampling, yielding 74 valid responses (49 educators, 16 residents, and 9 Residency Review Committee hosts). Data were analyzed using Mann-Whitney U test, chi-squared tests, and t tests.</p><p><strong>Results: </strong>MB programs (n=14) had fewer missed assessments (P=.02) and greater ease in identifying performance trends (P<.001) and required clinical scenarios (P<.001) compared with paper- and web-based programs (n=60). 
In addition, mobile-based programs enabled real-time visualization of performance trends and completion rates, facilitating individualized training (P<.001).</p><p><strong>Conclusions: </strong>In our nationwide pilot study, we observed that the mobile-based interface significantly enhances emergency residency training. It accomplishes this by providing rapid, customized updates, thereby increasing satisfaction and autonomous motivation among participants. This method is markedly different from traditional paper- or web-based approaches, which tend to be slower and less responsive. This difference is particularly evident in settings with limited resources. The mobile-based interface is a crucial tool in modernizing training, as it improves efficiency, boosts engagement, and facilitates collaboration. It plays an essential role in advancing Competency-Based Medical Education, especially concerning tailored learning experiences.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e57216"},"PeriodicalIF":3.2,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12288858/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
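The feasibility study above analyzed its survey data with, among other tests, the Mann-Whitney U test, which suits ordinal Likert-type ratings. A bare-bones version of the U statistic is sketched below; the rating lists are invented examples, and a real analysis would additionally convert U to a P value (eg, via the normal approximation or scipy.stats.mannwhitneyu).

```python
def mann_whitney_u(a, b):
    """U statistic for group a versus group b: counts the pairs (x, y)
    with x from a and y from b where x > y, crediting ties as 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical Likert ratings from two respondent groups (illustrative only)
u = mann_whitney_u([5, 4, 5], [3, 2, 4])
```

This O(n*m) pairwise form is fine for survey-sized samples like the 74 responses here; rank-based formulas are preferred for large data.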
Innovative Mobile App (CPD By the Minute) for Continuing Professional Development in Medicine: Multimethods Study
Peter Slinger, Maram Omar, Sarah Younus, Rebecca Charow, Michael Baxter, Craig Campbell, Meredith Giuliani, Jesse Goldmacher, Tharshini Jeyakumar, Inaara Karsan, Janet Papadakos, Tina Papadakos, Alexandra Jane Rotstein, May-Sann Yee, Asad Siddiqui, Marcos Silva Restrepo, Melody Zhang, David Wiljer
JMIR Medical Education. 2025;11:e69443. doi: 10.2196/69443. Published July 23, 2025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12329386/pdf/

Background: Many national medical governing bodies encourage physicians to engage in continuing professional development (CPD) activities to cultivate their knowledge and skills and to ensure their clinical practice reflects current standards and the evidence base. However, physicians often encounter barriers that hinder their participation in CPD programs, such as time constraints, a lack of centralized coordination, and limited opportunities for self-assessment. The literature has highlighted the strength of question-based learning interventions in augmenting physician learning and enabling change in practice. CPD By the Minute (CPD-Min) is a smartphone-enabled web-based app developed to address self-assessment gaps and barriers to engagement in CPD activities.

Objective: This study aimed to assess the app against four objectives: (1) engagement and use of the app throughout the study, (2) effectiveness of the tool as a CPD activity, (3) relevance of the disseminated information to physicians' practice, and (4) acceptability to physicians of this novel tool as an educational initiative.

Methods: The CPD-Min app disseminated 2 multiple-choice questions (1 minute each) per week, with feedback and references. Participants included licensed staff physicians, fellows, and residents across Canada. A concurrent multimethods study was conducted, consisting of preintervention and postintervention surveys, semistructured interviews, and app analytics. Guided by the Reach, Effectiveness, Adoption, Implementation, and Maintenance framework, the qualitative data were analyzed deductively and inductively.

Results: Of the 105 Canadian anesthesiologists participating in the study, 89 (84.8%) were staff physicians, 12 (11.4%) were fellows, and 4 (3.8%) were residents. Participants completed 110 questions each over the course of 52 weeks, with an average completion rate of 75% (SD 33%). In total, 40.9% (43/105) of participants answered >90% of the questions, including 15.2% (16/105) who completed all questions. Moreover, 69% (52/75) of participants reported the app to be an effective and valuable resource for their practice and for enhancing continuous learning. Most participants (63/75, 84%) who completed the postsurveys reported that they would likely continue using the app as a CPD tool. These findings were further supported by the interview data. Three key themes were identified: the practical design of the novel educational app facilitates its adoption by clinicians, the app was perceived as a useful knowledge tool for continuous learning, and the app's low-stakes testing environment cultivated independent learning attitudes.

Conclusions: The findings suggest the potential of the app to improve longitudinal assessments that promote lifelong learning among clinicians. The positive feedback and increased acceptance of the app supports …