Latest Articles in JMIR Medical Education

Practical Recommendations for Navigating Digital Tools in Hospitals: Qualitative Interview Study.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-27 DOI: 10.2196/60031
Marie Wosny, Livia Maria Strasser, Simone Kraehenmann, Janna Hastings

Background: The digitalization of health care organizations is an integral part of a clinician's daily life, making it vital for health care professionals (HCPs) to understand and effectively use digital tools in hospital settings. However, clinicians often express a lack of preparedness for their digital work environments. In particular, new clinical end users (medical and nursing students, seasoned professionals transitioning to new health care environments, and experienced practitioners encountering new health care technologies) face critically intense learning periods, often without adequate time to learn digital tools, and therefore have difficulty integrating and adopting these tools into clinical practice.

Objective: This study aims to comprehensively collect advice from experienced HCPs in Switzerland to guide new clinical end users on how to initiate their engagement with health IT in hospital settings.

Methods: We conducted qualitative interviews with 52 HCPs across Switzerland, representing 24 medical specialties from 14 hospitals. The interviews were transcribed verbatim and analyzed through inductive thematic analysis. Codes were developed iteratively, and themes and aggregated dimensions were refined through collaborative discussions.

Results: Ten themes emerged from the interview data: (1) digital tool understanding, (2) peer-based learning strategies, (3) experimental learning approaches, (4) knowledge exchange and support, (5) training approaches, (6) proactive innovation, (7) an adaptive technology mindset, (8) critical thinking approaches, (9) dealing with emotions, and (10) empathy and human factors. Consequently, we devised 10 recommendations with specific advice for new clinical end users on how to approach new health care technologies: take time to get to know and understand the tools you are working with; proactively ask experienced colleagues; simply try it out and practice; know where to get help and information; take sufficient training; embrace curiosity and pursue innovation; maintain an open and adaptable mindset; keep thinking critically and use your knowledge base; overcome your fears; and never lose the human and patient focus.

Conclusions: Our study emphasized the importance of comprehensive training and learning approaches for health care technologies, based on the advice and recommendations of experienced HCPs in Swiss hospitals. Moreover, these recommendations have implications for medical educators and clinical instructors, providing advice on effective methods to instruct and support new end users so that they can use novel technologies proficiently. Therefore, we advocate for new clinical end users, health care institutions and clinical instructors, academic institutions and medical educators, and regulatory bodies to prior…

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11635325/pdf/
Citations: 0
Performance Comparison of Junior Residents and ChatGPT in the Objective Structured Clinical Examination (OSCE) for Medical History Taking and Documentation of Medical Records: Development and Usability Study.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-21 DOI: 10.2196/59902
Ting-Yun Huang, Pei Hsing Hsieh, Yung-Chun Chang

Background: This study explores the cutting-edge abilities of large language models (LLMs) such as ChatGPT in medical history taking and medical record documentation, with a focus on their practical effectiveness in clinical settings, an area vital for the progress of medical artificial intelligence.

Objective: Our aim was to assess the capability of ChatGPT versions 3.5 and 4.0 in performing medical history taking and medical record documentation in simulated clinical environments. The study compared the performance of nonmedical individuals using ChatGPT with that of junior medical residents.

Methods: A simulation involving standardized patients was designed to mimic authentic medical history-taking interactions. Five nonmedical participants used ChatGPT versions 3.5 and 4.0 to conduct medical histories and document medical records, mirroring the tasks performed by 5 junior residents in identical scenarios. A total of 10 diverse scenarios were examined.

Results: The medical documentation created by laypersons with ChatGPT assistance and by junior residents was evaluated by 2 senior emergency physicians using audio recordings and the final medical records. The assessment used the Objective Structured Clinical Examination benchmarks in Taiwan as a reference. ChatGPT-4.0 exhibited substantial enhancements over its predecessor and met or exceeded the performance of human counterparts in both checklist and global assessment scores. Although the overall quality of human consultations remained higher, ChatGPT-4.0's proficiency in medical documentation was notably promising.

Conclusions: The performance of ChatGPT-4.0 was on par with that of human participants in Objective Structured Clinical Examination evaluations, signifying its potential in medical history taking and medical record documentation. Despite this, the superiority of human consultations in terms of quality was evident. The study underscores both the promise and the current limitations of LLMs in clinical practice.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612517/pdf/
Citations: 0
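To make the study setup concrete, here is a minimal sketch of how a layperson-facing history-taking assistant could be wired up with the OpenAI chat API. The model name, system prompt, and interview loop are illustrative assumptions; the paper does not publish its prompts or tooling.

```python
# Hypothetical sketch: an assistant that suggests the next history-taking
# question and maintains a draft record, in the spirit of the study's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{
    "role": "system",
    "content": ("You help a nonmedical interviewer take a medical history. "
                "After each patient reply, suggest the next question to ask "
                "the standardized patient and update a draft medical record."),
}]

while True:
    patient_reply = input("Patient says (blank to finish): ").strip()
    if not patient_reply:
        break
    history.append({"role": "user", "content": patient_reply})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    suggestion = response.choices[0].message.content
    history.append({"role": "assistant", "content": suggestion})
    print(suggestion)
```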
Leveraging Open-Source Large Language Models for Data Augmentation in Hospital Staff Surveys: Mixed Methods Study.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-19 DOI: 10.2196/51433
Carl Ehrett, Sudeep Hegde, Kwame Andre, Dixizi Liu, Timothy Wilson

Background: Generative large language models (LLMs) have the potential to revolutionize medical education by generating tailored learning materials, enhancing teaching efficiency, and improving learner engagement. However, the application of LLMs in health care settings, particularly for augmenting small datasets in text classification tasks, remains underexplored, especially for cost- and privacy-conscious applications that do not permit the use of third-party services such as OpenAI's ChatGPT.

Objective: This study aims to explore the use of open-source LLMs, such as the Large Language Model Meta AI (LLaMA) and Alpaca models, for data augmentation in a specific text classification task related to hospital staff surveys.

Methods: The surveys were designed to elicit narratives of everyday adaptation by frontline radiology staff during the initial phase of the COVID-19 pandemic. A 2-step process of data augmentation and text classification was conducted. The study used 4 generative LLMs to produce synthetic data similar to the survey reports. A different set of 3 classifier LLMs was then used to classify the augmented text into thematic categories, and performance on the classification task was evaluated.

Results: The overall best-performing combination of LLM, temperature, classifier, and number of synthetic cases was augmentation with LLaMA 7B at temperature 0.7 with 100 augments, using Robustly Optimized BERT Pretraining Approach (RoBERTa) for the classification task, achieving an average area under the receiver operating characteristic curve (AUC) of 0.87 (SD 0.02). The results demonstrate that open-source LLMs can enhance text classifiers' performance on small datasets in health care contexts, providing promising pathways for improving medical education processes and patient care practices.

Conclusions: The study demonstrates the value of data augmentation with open-source LLMs, highlights the importance of privacy and ethical considerations when using LLMs, and suggests future directions for research in this field.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11590755/pdf/
Citations: 0
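The 2-step pipeline described in the Methods maps naturally to a short script. The sketch below is a minimal illustration under stated assumptions: a LLaMA 7B checkpoint served through the Hugging Face transformers library, an invented theme label set, illustrative prompts, and a toy stand-in for the real survey data; none of these come from the paper.

```python
# Step 1: augment a small labeled survey dataset with synthetic narratives
# from an open-source LLM; step 2: train a RoBERTa classifier on the union.
from transformers import (pipeline, AutoTokenizer,
                          AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

LABELS = ["workflow_adaptation", "communication", "ppe_and_safety"]  # assumed themes

generator = pipeline("text-generation", model="huggyllama/llama-7b")  # assumed checkpoint

def augment(theme, n=100):
    # Sample n synthetic reports per theme at the paper's best temperature, 0.7.
    prompt = (f"Write a short report by a frontline radiology worker during "
              f"COVID-19 describing: {theme}.\nReport:")
    outputs = generator(prompt, do_sample=True, temperature=0.7,
                        max_new_tokens=120, num_return_sequences=n)
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

synthetic = {"text": [], "label": []}
for i, theme in enumerate(LABELS):
    for text in augment(theme):
        synthetic["text"].append(text)
        synthetic["label"].append(i)

real = {"text": ["We rearranged the reading room to keep distance..."],  # toy stand-in
        "label": [0]}
train = Dataset.from_dict({"text": real["text"] + synthetic["text"],
                           "label": real["label"] + synthetic["label"]})
tok = AutoTokenizer.from_pretrained("roberta-base")
train = train.map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                                max_length=256),
                  batched=True, remove_columns=["text"])
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))
trainer = Trainer(model=model,
                  args=TrainingArguments("out", num_train_epochs=3,
                                         per_device_train_batch_size=8),
                  train_dataset=train)
trainer.train()
```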
Virtual Reality Simulation in Undergraduate Health Care Education Programs: Usability Study.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-19 DOI: 10.2196/56844
Gry Mørk, Tore Bonsaksen, Ole Sønnik Larsen, Hans Martin Kunnikoff, Silje Stangeland Lie

Background: Virtual reality (VR) is increasingly being used in higher education for clinical skills training and role-playing among health care students. Using 360° videos in VR headsets, followed by peer debriefing and group discussions, may strengthen students' social and emotional learning.

Objective: This study aimed to explore student-perceived usability of VR simulation in three health care education programs in Norway.

Methods: Students from one university participated in a VR simulation program. Of these, students in social education (n=74), nursing (n=45), and occupational therapy (n=27) completed a questionnaire about their perceptions of the usability of the VR simulation and the related learning activities. Differences between groups of students were examined with Pearson chi-square tests and 1-way ANOVA. Qualitative content analysis was used to analyze data from open-ended questions.

Results: The nursing students were most satisfied with the usability of the VR simulation, while the occupational therapy students were least satisfied. The nursing students more often had prior experience with VR technology (60%), while the occupational therapy students less often had prior experience (37%). Nevertheless, high mean scores indicated that the students experienced the VR simulation and the related learning activities as very useful. The results also showed that using realistic scenarios in VR simulation can prepare health care students for complex clinical situations in a safe environment. Group debriefing sessions are also a vital part of the learning process, enhancing active involvement with peers.

Conclusions: VR simulation has promise and potential as a pedagogical tool in health care education, especially for training soft skills relevant to clinical practice, such as communication, decision-making, time management, and critical thinking.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11615562/pdf/
Citations: 0
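For readers unfamiliar with the two tests named in the Methods, here is a minimal sketch of both comparisons in Python with scipy. Beyond the reported group sizes and the 60%/37% prior-experience rates, the contingency counts and usability scores below are invented for demonstration, not the study's data.

```python
import numpy as np
from scipy import stats

# Pearson chi-square on prior VR experience by program (cols: yes, no).
# Nursing: 60% of 45 -> 27 yes; occupational therapy: 37% of 27 -> 10 yes;
# the social education split is assumed.
table = np.array([[27, 18],   # nursing
                  [10, 17],   # occupational therapy
                  [30, 44]])  # social education (assumed)
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square={chi2:.2f}, df={dof}, p={p_chi:.3f}")

# 1-way ANOVA on per-student usability scores (simulated here).
rng = np.random.default_rng(0)
nursing = rng.normal(4.4, 0.5, 45)
occupational = rng.normal(3.9, 0.5, 27)
social = rng.normal(4.2, 0.5, 74)
f_stat, p_anova = stats.f_oneway(nursing, occupational, social)
print(f"F={f_stat:.2f}, p={p_anova:.3f}")
```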
Using ChatGPT in Nursing: Scoping Review of Current Opinions.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-19 DOI: 10.2196/54297
You Zhou, Si-Jia Li, Xing-Yi Tang, Yi-Chen He, Hao-Ming Ma, Ao-Qi Wang, Run-Yuan Pei, Mei-Hua Piao

Background: Since the release of ChatGPT in November 2022, this emerging technology has garnered considerable attention in various fields, and nursing is no exception. However, to date, no study has comprehensively summarized the status and opinions of using ChatGPT across different nursing fields.

Objective: We aim to synthesize the status and opinions of using ChatGPT in different nursing fields and to assess ChatGPT's strengths, weaknesses, and potential impacts.

Methods: This scoping review was conducted following the framework of Arksey and O'Malley and guided by the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews). A comprehensive literature search was conducted in 4 web-based databases (PubMed, Embase, Web of Science, and CINAHL) to identify studies reporting opinions on the use of ChatGPT in nursing fields from 2022 to September 3, 2023. The references of the included studies were screened manually to identify further relevant studies. Two authors independently conducted study screening, eligibility assessment, and data extraction.

Results: A total of 30 studies were included. The United States (7 studies), Canada (5 studies), and China (4 studies) were the countries with the most publications. In terms of fields of concern, studies mainly focused on "ChatGPT and nursing education" (20 studies), "ChatGPT and nursing practice" (10 studies), and "ChatGPT and nursing research, writing, and examination" (6 studies). Six studies addressed the use of ChatGPT in multiple nursing fields.

Conclusions: As an emerging artificial intelligence technology, ChatGPT has great potential to revolutionize nursing education, nursing practice, and nursing research. However, researchers, institutions, and administrations still need to critically examine its accuracy, safety, and privacy, as well as the academic misconduct and potential ethical issues it may lead to, before applying ChatGPT in practice.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11611787/pdf/
Citations: 0
Correction: Psychological Safety Competency Training During the Clinical Internship From the Perspective of Health Care Trainee Mentors in 11 Pan-European Countries: Mixed Methods Observational Study.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-15 DOI: 10.2196/68503
Irene Carrillo, Ivana Skoumalová, Ireen Bruus, Victoria Klemm, Sofia Guerra-Paiva, Bojana Knežević, Augustina Jankauskiene, Dragana Jocic, Susanna Tella, Sandra C Buttigieg, Einav Srulovici, Andrea Madarasová Gecková, Kaja Põlluste, Reinhard Strametz, Paulo Sousa, Marina Odalovic, José Joaquín Mira

[This corrects the article DOI: 10.2196/64125.]

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11632886/pdf/
Citations: 0
Evaluating AI Competence in Specialized Medicine: Comparative Analysis of ChatGPT and Neurologists in a Neurology Specialist Examination in Spain.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-14 DOI: 10.2196/56762
Pablo Ros-Arlanzón, Angel Perez-Sempere

Background: With the rapid advancement of artificial intelligence (AI) in various fields, evaluating its application in specialized medical contexts becomes crucial. ChatGPT, a large language model developed by OpenAI, has shown potential in diverse applications, including medicine.

Objective: This study aims to compare the performance of ChatGPT with that of attending neurologists in a real neurology specialist examination conducted in the Valencian Community, Spain, assessing the AI's capabilities and limitations in medical knowledge.

Methods: We conducted a comparative analysis using the 2022 neurology specialist examination results from 120 neurologists and responses generated by ChatGPT versions 3.5 and 4. The examination consisted of 80 multiple-choice questions, with a focus on clinical neurology and health legislation. Questions were classified according to Bloom's taxonomy. Statistical analysis of performance, including the κ coefficient for response consistency, was performed.

Results: Human participants exhibited a median score of 5.91 (IQR 4.93-6.76), with 32 neurologists failing to pass. ChatGPT-3.5 ranked 116th out of 122, answering 54.5% of questions correctly (score 3.94). ChatGPT-4 showed marked improvement, ranking 17th with 81.8% correct answers (score 7.57), surpassing several human specialists. No significant differences were observed in performance on lower-order versus higher-order questions. Additionally, ChatGPT-4 demonstrated greater response consistency, with a κ coefficient of 0.73 compared with ChatGPT-3.5's 0.69.

Conclusions: This study underscores the evolving capabilities of AI in medical knowledge assessment, particularly in specialized fields. ChatGPT-4's performance, outperforming the median score of human participants in a rigorous neurology examination, represents a significant milestone in AI development, suggesting its potential as an effective tool in specialized medical education and assessment.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11611784/pdf/
Citations: 0
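The κ coefficient reported above can be computed directly from two answer sets. Below is a minimal sketch of Cohen's κ, assuming consistency is measured between two runs of the same model over the same multiple-choice items; the answer strings are invented for demonstration.

```python
from collections import Counter

def cohen_kappa(a, b):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from marginal frequencies."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2
    return (p_o - p_e) / (1 - p_e)

run1 = list("ABCDABCDABCDABCD")  # hypothetical answers, run 1
run2 = list("ABCDABCDABDDABCA")  # hypothetical answers, run 2
print(f"kappa = {cohen_kappa(run1, run2):.2f}")  # 1 = perfect, 0 = chance
```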
Impact of Ophthalmic Knowledge Assessment Program Scores and Surgical Volume on Subspecialty Fellowship Application in Ophthalmology Residency: Retrospective Cohort Study.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-13 DOI: 10.2196/60940
Amanda Kay Hertel, Radwan S Ajlan

Background: Ophthalmology residents take the Ophthalmic Knowledge Assessment Program (OKAP) examination annually, which provides percentile ranks for multiple categories and the total score. In addition, ophthalmology residency training programs include multiple subspecialty rotations with defined minimum procedure requirements. However, residents' surgical volumes vary, with some residents exceeding their peers in specific subspecialty rotations.

Objective: This study aims to identify whether OKAP examination scores and surgical volume exposure during ophthalmology residency training differ between nonfellowship and fellowship applicants and among various subspecialties.

Methods: A retrospective review of OKAP scores and surgical procedure numbers of graduating residents in an accredited academic ophthalmology residency program in the Midwest United States was conducted. Data were collected from 2012 to 2022.

Results: A total of 31 residents were identified. Most decided to pursue fellowship training upon graduation (20/31, 65%), and the rest chose to practice comprehensive ophthalmology (11/31, 35%). OKAP score reports were available for 18 of the 31 residents. The fellowship group outperformed the nonfellowship group on multiple subsections and the total exam (P=.04). Residents pursuing fellowship training in glaucoma scored higher on the Glaucoma section (P=.004) and the total exam (P=.005). Residents pursuing cornea scored higher on nearly all subsections, including External Disease and Cornea (P=.02), and the total exam (P=.007). Most surgical volume exposure was identical between the fellowship and nonfellowship groups. Those who pursued a glaucoma fellowship performed more glaucoma filtering and shunting procedures (P=.03). Residents going into a pediatrics fellowship were primary surgeons in more strabismus cases (P=.01), assisted in fewer strabismus cases (P<.001), and showed no difference in the total number of strabismus surgeries.

Conclusions: In our program, residents pursuing fellowship training had higher OKAP scores on multiple sections and the total exam. There was no significant difference in overall surgical volume averages between the fellowship and nonfellowship groups, but a few differences existed in subspecialty procedures among fellowship applicants. Larger multicenter studies are needed to clarify the relationship between OKAP scores and ophthalmology fellowship decisions nationwide.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11611791/pdf/
Citations: 0
Leveraging the Electronic Health Record to Measure Resident Clinical Experiences and Identify Training Gaps: Development and Usability Study.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-06 DOI: 10.2196/53337
Vasudha L Bhavaraju, Sarada Panchanathan, Brigham C Willis, Pamela Garcia-Filion

Background: Competence-based medical education requires robust data to link competence with clinical experiences. The SARS-CoV-2 (COVID-19) pandemic abruptly altered the standard trajectory of clinical exposure in medical training programs. Residency program directors were tasked with identifying and addressing the resultant gaps in each trainee's experiences using existing tools.

Objective: This study aims to demonstrate a feasible and efficient method to capture electronic health record (EHR) data that measure the volume and variety of pediatric residents' clinical experiences from a continuity clinic; generate individual-, class-, and graduate-level benchmark data; and create a visualization that lets learners quickly identify gaps in their clinical experiences.

Methods: This pilot was conducted in a large, urban pediatric residency program from 2016 to 2022. Through consensus, 5 pediatric faculty identified diagnostic groups that pediatric residents should see to be competent in outpatient pediatrics. Information technology consultants used International Classification of Diseases, Tenth Revision (ICD-10) codes corresponding to each diagnostic group to extract EHR patient encounter data as an indicator of exposure to the specific diagnosis. The frequency (volume) and diagnosis types (variety) seen by active residents (classes of 2020-2022) were compared with class and graduated-resident (classes of 2016-2019) averages. These data were converted to percentages and translated into a radar chart visualization so that residents could quickly compare their current clinical experiences with those of peers and graduates. Residents were surveyed on the use of these data and the visualization to identify training gaps.

Results: Patient encounter data about clinical experiences for 102 residents (52 graduates) were extracted. Active residents (n=50) received data reports with radar graphs biannually: 3 for the classes of 2020 and 2021 and 2 for the class of 2022. Radar charts distinctly demonstrated gaps in diagnosis exposure compared with classmates and graduates. Residents found the visualization useful in setting clinical and learning goals.

Conclusions: This pilot describes an innovative method of capturing and presenting data about residents' clinical experiences, compared with peer and graduate benchmarks, to identify learning gaps that may result from disruptions or modifications in medical training. This methodology can be aggregated across specialties and institutions and could potentially inform competence-based medical education.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11559912/pdf/
Citations: 0
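The reporting step the Methods describe (map encounters to diagnostic groups via ICD-10 codes, convert counts to percentages of a benchmark, render a radar chart) can be sketched in a few lines. The diagnostic groups, code prefixes, encounter list, and benchmark numbers below are illustrative assumptions, not the study's data.

```python
import numpy as np
import matplotlib.pyplot as plt

GROUPS = {           # diagnostic group -> assumed ICD-10 code prefixes
    "Asthma": ("J45",),
    "Otitis media": ("H65", "H66"),
    "ADHD": ("F90",),
    "Obesity": ("E66",),
    "Eczema": ("L20",),
}

# One resident's encounter diagnoses (toy data) and the graduate-average benchmark.
encounters = ["J45.20", "H66.90", "H65.01", "F90.0", "J45.40", "E66.9"]
benchmark = {"Asthma": 8, "Otitis media": 10, "ADHD": 5, "Obesity": 6, "Eczema": 4}

counts = {g: sum(code.startswith(p) for code in encounters for p in prefixes)
          for g, prefixes in GROUPS.items()}
pct = [100 * counts[g] / benchmark[g] for g in GROUPS]  # % of benchmark exposure

# Close the polygon and plot on a polar axis.
labels = list(GROUPS)
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
pct += pct[:1]
angles += angles[:1]
ax = plt.subplot(polar=True)
ax.plot(angles, pct, "o-")
ax.fill(angles, pct, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_title("Exposure vs graduate benchmark (%)")
plt.show()
```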
ChatGPT-4 Omni Performance in USMLE Disciplines and Clinical Skills: Comparative Analysis.
IF 3.2
JMIR Medical Education Pub Date: 2024-11-06 DOI: 10.2196/63430
Brenton T Bicknell, Danner Butler, Sydney Whalen, James Ricks, Cory J Dixon, Abigail B Clark, Olivia Spaedy, Adam Skelton, Neel Edupuganti, Lance Dzubinski, Hudson Tate, Garrett Dyess, Brenessa Lindeman, Lisa Soleymani Lehmann

Background: Recent studies, including those by the National Board of Medical Examiners, have highlighted the remarkable capabilities of recent large language models (LLMs) such as ChatGPT in passing the United States Medical Licensing Examination (USMLE). However, there is a gap in detailed analysis of LLM performance in specific medical content areas, limiting assessment of their potential utility in medical education.

Objective: This study aimed to assess and compare the accuracy of successive ChatGPT versions (GPT-3.5, GPT-4, and GPT-4 Omni) in USMLE disciplines, clinical clerkships, and the clinical skills of diagnostics and management.

Methods: This study used 750 clinical vignette-based multiple-choice questions to characterize the performance of successive ChatGPT versions (ChatGPT 3.5 [GPT-3.5], ChatGPT 4 [GPT-4], and ChatGPT 4 Omni [GPT-4o]) across USMLE disciplines, clinical clerkships, and clinical skills (diagnostics and management). Accuracy was assessed using a standardized protocol, with statistical analyses conducted to compare the models' performances.

Results: GPT-4o achieved the highest accuracy across the 750 multiple-choice questions, at 90.4%, outperforming GPT-4 and GPT-3.5, which scored 81.1% and 60.0%, respectively. GPT-4o performed best in social sciences (95.5%), behavioral science and neuroscience (94.2%), and pharmacology (93.2%). In clinical skills, GPT-4o's diagnostic accuracy was 92.7% and its management accuracy was 88.8%, significantly higher than its predecessors'. Notably, both GPT-4o and GPT-4 significantly outperformed the medical student average accuracy of 59.3% (95% CI 58.3-60.3).

Conclusions: GPT-4o's performance in USMLE disciplines, clinical clerkships, and clinical skills indicates substantial improvements over its predecessors, suggesting significant potential for the use of this technology as an educational aid for medical students. These findings underscore the need for careful consideration when integrating LLMs into medical education, emphasizing the importance of structured curricula to guide their appropriate use and the need for ongoing critical analyses to ensure their reliability and effectiveness.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11611793/pdf/
Citations: 0
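One comparison implied by these Results can be made concrete with a two-proportion z-test of GPT-4o's accuracy (90.4% of 750, ie, 678 correct) against GPT-4's (81.1% of 750, ie, about 608 correct). The test choice and the derived correct counts are assumptions for illustration; the abstract reports only percentages, not the paper's exact statistical procedure.

```python
from math import sqrt
from scipy.stats import norm

def two_prop_ztest(x1, n1, x2, n2):
    # Pooled two-proportion z-test: H0 is that both accuracies are equal.
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))               # two-sided p value

z, p = two_prop_ztest(678, 750, 608, 750)       # GPT-4o vs GPT-4 (assumed counts)
print(f"z = {z:.2f}, p = {p:.4g}")
```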