JMIR Medical Education: Latest Articles

A Virtual Simulator to Improve Weight-Related Communication Skills for Health Care Professionals: Mixed Methods Pre-Post Pilot Feasibility Study.
IF 3.2
JMIR Medical Education Pub Date : 2025-08-15 DOI: 10.2196/65949
Fiona Quigley, Leona Ryan, Raymond Bond, Toni McAloon, Huiru Zheng, Anne Moorhead
Background: Discussing weight remains a sensitive and often avoided topic in health care, despite the rising prevalence of obesity and calls for earlier, more compassionate intervention. Many health care professionals report inadequate training and low confidence in discussing weight, while patients often describe feeling stigmatized or dismissed. Digital simulation offers a promising route to building communication skills by supporting repeatable, reflective practice in a safe space. VITAL-COMS (Virtual Training and Assessment for Communication Skills) is a novel simulation tool designed to support health care professionals in navigating weight-related conversations with greater understanding and skill.

Objective: This study aimed to assess the potential of VITAL-COMS as a digital simulation training tool to improve weight-related communication skills among health care professionals.

Methods: A mixed methods feasibility study was conducted online via Zoom (Zoom Video Communications) between January and July 2021 with UK-based nurses, doctors, and dietitians. The intervention comprised educational videos and 2 simulated patient scenarios with real-time verbal interaction. Pre- and posttraining self-assessments of communication skills and conversation length were collected, and participants also completed a feasibility questionnaire. Descriptive statistics were used to analyze the feasibility questionnaire, open-ended feedback was analyzed using content analysis, and paired-samples t tests were used to assess changes in communication skills and conversation length before and after training.

Results: In total, 31 participants completed the study. There was a statistically significant improvement in self-assessed communication skills following training (mean difference 3.9; 95% CI 2.54-5.26; t30=-5.76; P=.001; Cohen d=1.03). Mean conversation length increased significantly in both scenarios: in the female patient scenario, from 3.73 (SD 1.36) to 6.08 (SD 2.26) minutes, a mean difference of 2.35 minutes (95% CI 1.71-2.99; t30=7.49; P=.001; Cohen d=1.34), and in the male patient scenario, from 3.61 (SD 1.12) to 5.65 (SD 1.76) minutes, a mean difference of 2.03 minutes (95% CI 1.51-2.55; t30=8.03; P=.001; Cohen d=1.44). Participants rated the simulation positively, with 97% (95% CI 90%-100%) supporting wider use in health care and 84% (95% CI 71%-97%) reporting emotional engagement. Content analysis of feedback generated two themes: (1) adapting to this form of learning and (2) recognizing the potential of simulation to support reflective, skills-based training. A minority (13%, 95% CI 1%-25%) expressed a preference for alternative learning methods.

Conclusions: VITAL-COMS was feasible to implement and acceptable to a diverse group of health care professionals. Participants demonstrated significant improvements in self-assessed communication skills …

JMIR Medical Education. 2025;11:e65949. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12356524/pdf/
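The pre-post comparison above rests on a paired-samples t test and Cohen's d for paired data. As a minimal sketch of how those two statistics are conventionally computed (the pre/post ratings below are invented for illustration, not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_and_d(pre, post):
    """Paired-samples t statistic (df = n - 1) and Cohen's d for the change."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    md = mean(diffs)            # mean pre-post difference
    sd = stdev(diffs)           # sample SD of the differences
    t = md / (sd / sqrt(n))     # t statistic
    d = md / sd                 # Cohen's d for paired designs
    return md, t, d

# Invented self-ratings for 5 participants, before and after training
pre = [4.0, 5.0, 3.5, 6.0, 4.5]
post = [7.0, 8.0, 6.0, 8.5, 7.5]
md, t, d = paired_t_and_d(pre, post)
```

A positive t here indicates post scores exceeding pre scores; the sign convention in published tables can differ depending on the order of subtraction.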
Citations: 0
Game-Based Assessment of Cognitive Abilities and Personality Characteristics for Surgical Resident Selection: A Preliminary Validation Study.
IF 3.2
JMIR Medical Education Pub Date : 2025-08-15 DOI: 10.2196/72264
Noa Gazit, Gilad Ben-Gal, Ron Eliashar
Background: Assessment of nontechnical attributes is important in selecting candidates for surgical training. Currently, these assessments typically rely on ineffective methods that have been shown to correlate poorly with later performance.

Objective: The study aimed to examine preliminary evidence regarding the use of game-based assessment (GBA) for assessing cognitive abilities and personality characteristics in candidates for surgical residencies.

Methods: The study had 2 phases. In the first phase, a gamified test was developed to assess competencies relevant to surgical residents. Three games were chosen, assessing 14 competencies: planning, problem-solving, ingenuity, goal orientation, self-reflection, endurance, analytical thinking, learning ability, flexibility, concentration, conformity, multitasking, working memory, and precision. In the second phase, we collected data from 152 medical interns and 30 expert surgeons to evaluate the test's feasibility, acceptability, and validity for candidate selection.

Results: Feedback from the interns and surgeons supported the relevance of the test for the selection of surgical residents. In addition, analyses of the interns' performance data supported the appropriateness of the score calculation process and the internal structure of the test. Based on these data, the test showed good psychometric properties, including reliability (α=0.76) and discrimination (mean discrimination 0.39, SD 0.18). Correlations between test scores and background variables indicated significant correlations with gender, video game experience, and technical aptitude test scores (all P<.001).

Conclusions: This study presents an innovative GBA testing cognitive abilities and personality characteristics. Preliminary evidence supports the validity, feasibility, and acceptability of the test for the selection of surgical residents. However, evidence for test-criterion relationships, particularly the GBA's ability to predict future surgical performance, remains to be established. Future longitudinal studies are necessary to confirm its utility as a selection tool.

JMIR Medical Education. 2025;11:e72264. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12356604/pdf/
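The reliability figure (α=0.76) is a Cronbach's alpha. A minimal sketch of the standard formula, using per-item score columns and population variances (the toy data are invented, not the study's):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists of equal length."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-person total score
    item_var = sum(pvariance(col) for col in items)    # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Two perfectly correlated items: alpha reaches its maximum of 1.0
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```

Values near 0.7-0.8, like the 0.76 reported, are generally read as acceptable internal consistency for a selection instrument.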
Citations: 0
Quo Vadis, AI-Empowered Doctor?
IF 3.2
JMIR Medical Education Pub Date : 2025-08-15 DOI: 10.2196/70079
Gary Takahashi, Laurentius von Liechti, Ebrahim Tarshizi
In the first decade of this century, physicians maintained considerable professional autonomy, enabling discretionary evaluation and implementation of new technologies according to individual practice requirements. The past decade, however, has witnessed significant restructuring of medical practice patterns in the United States, with most physicians transitioning to employed status. Concurrently, technological advances and other incentives drove the implementation of electronic systems into the clinic, which these physicians were compelled to integrate. Health care practitioners have now been introduced to applications based on large language models, largely driven by artificial intelligence (AI) developers as well as established electronic health record vendors eager to incorporate these innovations. Although generative AI assistance promises enhanced clinical efficiency and diagnostic precision, its rapid advancement may redefine clinical provider roles and transform workflows: it has already altered expectations of physician productivity and introduced unprecedented liability considerations. Recognizing the input of physicians and other clinical stakeholders at this nascent stage of AI integration is essential, and doing so requires a more comprehensive understanding of AI as a sophisticated clinical tool. Accordingly, we advocate for its systematic incorporation into standard medical curricula.

JMIR Medical Education. 2025;11:e70079. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12356520/pdf/
Citations: 0
Guidelines for Rapport-Building in Telehealth Videoconferencing: Interprofessional e-Delphi Study.
IF 3.2
JMIR Medical Education Pub Date : 2025-08-07 DOI: 10.2196/76260
Paula D Koppel, Jennie C De Gagne, Michelle Webb, Denise M Nepveux, Janelle Bludorn, Aviva Emmons, Paige S Randall, Neil S Prose
Background: Telehealth training is increasingly incorporated into educational programs for health professions students and practicing clinicians. However, existing competencies and standards primarily address videoconferencing visit logistics, diagnostic modifications, and etiquette, often lacking comprehensive guidance on adapting interpersonal skills to convey empathy, cultural humility, and trust in web-based settings.

Objective: This study aimed to establish consensus on the knowledge, skills, and attitudes required for health professions students and clinicians to build rapport with patients in telehealth videoconferencing visits and to identify teaching strategies that best support these educational goals.

Methods: An e-Delphi study was conducted with a panel of 12 interprofessional experts in telehealth and telehealth education. Round 1 involved interviews, followed by anonymous surveys in rounds 2-4 to build consensus.

Results: All 12 experts participated in rounds 1-3. In total, 19 themes related to rapport-building and 77 specific curriculum items were identified, all achieving the established level of consensus.

Conclusions: Using a competency-based education framework, this study provides guidance for health professions educators, teaching clinicians, and students on how to adapt interpersonal skills for telehealth, including detailed content related to knowledge, skills, attitudes, and teaching strategies. Future research is needed to test the feasibility, acceptability, and effectiveness of curricula based on these competencies and teaching strategies.

JMIR Medical Education. 2025;11:e76260. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12331129/pdf/
Citations: 0
Impact of Prompt Engineering on the Performance of ChatGPT Variants Across Different Question Types in Medical Student Examinations.
IF 3.2
JMIR Medical Education Pub Date : 2025-08-06 DOI: 10.2196/78320
Ming Yu Hsieh, Tzu-Ling Wang, Pen-Hua Su, Ming-Chih Chou
Background: Large language models (LLMs) such as ChatGPT have shown promise in medical education assessments, but the comparative effects of prompt engineering across optimized variants, and their performance relative to medical students, remain unclear.

Objective: To systematically evaluate the impact of prompt engineering on five ChatGPT variants (GPT-3.5, GPT-4.0, GPT-4o, GPT-4o1mini, GPT-4o1) and benchmark their performance against fourth-year medical students in midterm and final examinations.

Methods: A 100-item examination dataset covering multiple-choice, short-answer, clinical case analysis, and image-based questions was administered to each model under no-prompt and prompt-engineered conditions over five independent runs. Student cohort scores (n=143) were collected for comparison. Responses were scored using standardized rubrics, converted to percentages, and analyzed in SPSS Statistics 29 with paired t tests and Cohen d (P<.05).

Results: Baseline midterm scores ranged from 59.2% (GPT-3.5) to 94.1% (GPT-4o1); final examination scores ranged from 55.0% to 92.4%. Fourth-year students averaged 89.4% (midterm) and 80.2% (final). Prompt engineering significantly improved GPT-3.5 (+10.6%, P<.001) and GPT-4.0 (+3.2%, P=.002) but yielded negligible gains for the optimized variants (P=.066-.94). Optimized models matched or exceeded student performance on both examinations.

Conclusions: Prompt engineering enhances early-generation model performance, whereas advanced variants inherently achieve near-ceiling accuracy, surpassing medical students. As LLMs mature, emphasis should shift from prompt design to model selection, multimodal integration, and critical use of AI as a learning companion.
Citations: 0
Gamified Learning in a Virtual World for Undergraduate Emergency Radiology Education: Quasi-Experimental Study.
IF 3.2
JMIR Medical Education Pub Date : 2025-08-05 DOI: 10.2196/68518
Alba Virtudes Pérez-Baena, Teodoro Rudolphi-Solero, Rocío Lorenzo-Álvarez, Miguel José Ruiz-Gómez, Francisco Sendra-Portero
Background: Emergency radiology is essential for future doctors, who will face urgent cases requiring radiologic diagnosis. Virtual simulations, gamified clinical scenarios, and case-based learning enhance practical understanding, develop technical and communication skills, and foster educational innovation.

Objective: This study aimed to assess the feasibility of learning emergency radiology in the virtual world Second Life (Linden Lab) through a gamified experience by evaluating team performance in clinical case resolution, individual performance on seminar assessments, and students' perceptions of the activity.

Methods: Teams of 3-4 final-year medical students, during a 2-week radiology clerkship, had access to 7 clinical cases in virtual clinical stations and were randomly assigned 2 to solve and submit. They later discussed the cases in a synchronous virtual meeting and attended an emergency radiology seminar. The experience was repeated over 2 consecutive years to assess reproducibility through comparison of learning outcomes and students' perceptions. Learning outcomes were evaluated through team-based case resolution and individual seminar assessments. Students' perceptions were gathered via a voluntary questionnaire including 5-point Likert scale items, cognitive load ratings, 10-point evaluations, and open-ended comments.

Results: In total, 182 students participated in 2020-2021 and 170 in 2021-2022, demonstrating strong team-based case resolution skills with mean scores of 7.36 (SD 1.35) and 8.41 (SD 0.99), respectively (P<.001). The perception questionnaire had a 90.6% response rate. The highest cognitive load was observed in avatar editing (median 7, 95% CI 6.56-6.96). Case-solving cognitive load was significantly lower in 2021-2022 than in 2020-2021 (median 6, 95% CI 5.69-6.21 vs 5.10-5.66; P<.001). Students rated the experience highly, with average scores exceeding 8.0 out of 10 across various aspects. Notably, the highest-rated aspects were the teaching staff (9.13, SD 1.15), cases (8.60, SD 1.31), project organization (8.42, SD 1.67), and virtual rooms (8.36, SD 1.62). The lowest-rated aspect was internet connectivity (6.68, SD 2.53). Despite the positive scores, all aspects were rated significantly lower in 2021-2022 than in 2020-2021. These year-to-year comparisons in performance and perception support the reproducibility of the experience.

Conclusions: This study demonstrates that a game-based learning experience in the Second Life virtual world, combining virtual clinical scenarios and team-based tasks, is feasible and reproducible within a radiology clerkship. Students showed strong performance in case resolution and rated the experience highly, within a playful context that integrated asynchronous and synchronous activities. Lower ratings in the second year may reflect contextual differences …

JMIR Medical Education. 2025;11:e68518. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12324901/pdf/
Citations: 0
Utility of Generative Artificial Intelligence for Japanese Medical Interview Training: Randomized Crossover Pilot Study.
IF 3.2
JMIR Medical Education Pub Date : 2025-08-01 DOI: 10.2196/77332
Takanobu Hirosawa, Masashi Yokose, Tetsu Sakamoto, Yukinori Harada, Kazuki Tokumasu, Kazuya Mizuta, Taro Shimizu
Background: The medical interview remains a cornerstone of clinical training. There is growing interest in applying generative artificial intelligence (AI) in medical education, including medical interview training. However, its utility in culturally and linguistically specific contexts, including Japanese, remains underexplored. This study investigated the utility of generative AI for Japanese medical interview training.

Objective: This pilot study aimed to evaluate the utility of generative AI as a tool for medical interview training by comparing its performance with that of traditional face-to-face training using a simulated patient.

Methods: We conducted a randomized crossover pilot study involving 20 postgraduate year 1-2 physicians from a university hospital. Participants were randomly allocated into 2 groups. Group A began with an AI-based station on a case involving abdominal pain, followed by a traditional station with a standardized patient presenting chest pain. Group B followed the reverse order, starting with the traditional station for abdominal pain and then the AI-based station for the chest pain scenario. In the AI-based stations, participants interacted with a GPT-configured platform that simulated patient behaviors; GPTs are customizable versions of ChatGPT adapted for specific purposes. The traditional stations involved face-to-face interviews with a simulated patient. Both groups used identical, standardized case scenarios to ensure uniformity. Two independent evaluators, blinded to the study conditions, assessed participants' performances using 6 defined metrics: patient care and communication, history taking, physical examination, accuracy and clarity of transcription, clinical reasoning, and patient management. A 6-point Likert scale was used for scoring, and discrepancies between the evaluators were resolved through discussion. To ensure cultural and linguistic authenticity, all interviews and evaluations were conducted in Japanese.

Results: AI-based stations scored lower than traditional stations across most categories, particularly in patient care and communication (4.48 vs 4.95; P=.009). However, AI-based stations demonstrated comparable performance in clinical reasoning, with a nonsignificant difference (4.43 vs 4.85; P=.10).

Conclusions: The comparable performance of generative AI in clinical reasoning highlights its potential as a complementary tool in medical interview training. One of its main advantages lies in enabling self-learning, allowing trainees to practice interviews independently without the need for simulated patients. Nonetheless, the lower scores in patient care and communication underline the importance of maintaining traditional methods that capture the nuances of human interaction. These findings support the adoption of hybrid training models that combine generative AI …

JMIR Medical Education. 2025;11:e77332. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12316404/pdf/
Citations: 0
The Evolution of Medical Student Competencies and Attitudes in Digital Health Between 2016 and 2022: Comparative Cross-Sectional Study.
IF 3.2
JMIR Medical Education Pub Date : 2025-07-31 DOI: 10.2196/67423
Paula Veikkolainen, Timo Tuovinen, Petri Kulmala, Erika Jarva, Jonna Juntunen, Anna-Maria Tuomikoski, Merja Männistö, Teemu Pihlajasalo, Jarmo Reponen
Background: Modern health care systems worldwide are facing challenges, and digitalization is viewed as a way to strengthen health care globally. As health care systems become more digital, it is essential to assess health care professionals' competencies and skills to ensure they can adapt effectively to new practices, policies, and workflows.

Objective: The aim of this study was to analyze how the attitudes, skills, and knowledge of medical students concerning digital health shifted from 2016 to 2022 in connection with the development of the national health care information system architecture, using the clinical adoption meta-model framework.

Methods: The study population consisted of 5th-year medical students from the University of Oulu in Finland during 2016, 2021, and 2022. A survey questionnaire was administered comprising 7 background questions and 16 statements rated on a 5-point Likert scale, assessing students' attitudes toward digital health and their self-perceived digital capabilities. The results were recategorized into a dichotomous scale. Statistical analysis used the Pearson χ2 test, with the Benjamini-Hochberg procedure applied for multiple comparison correction.

Results: The study included 215 medical students (n=45 in 2016, n=106 in 2021, and n=64 in 2022) with an overall response rate of 53% (43% in 2016, 74% in 2021, and 42% in 2022). Throughout 2016, 2021, and 2022, medical students maintained positive attitudes toward using patient-generated information and digital applications in patient care. Their self-perceived knowledge of the national patient portal significantly improved, with agreement increasing by 35 percentage points from 2016 to 2021 (P<.001), a trend that continued in 2022 (P<.001). However, their perceived skills in using electronic medical records did not show significant changes. Additionally, students' perceptions of the impact of digitalization on health promotion improved markedly from 2016 to 2021 (with agreement rising from 53% to 78%, P=.002) but declined notably again by 2022.

Conclusions: Medical students' attitudes and self-perceived competencies have shifted over the years, potentially influenced by developments in the national health information system architecture. However, these positive changes have not followed a completely linear trajectory. To address these gaps, educational institutions and policy makers should integrate more digital health topics into medical curricula and provide practical experience with digital technologies to keep professionals up-to-date with the evolving health care environment.

JMIR Medical Education. 2025;11:e67423. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12313084/pdf/
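The Benjamini-Hochberg correction used in the Methods is a step-up procedure: sort the p values, find the largest rank k with p_(k) ≤ (k/m)·α, and reject every hypothesis at rank ≤ k. A generic sketch of the procedure (the p values in the usage line are invented, not the study's):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Reject/accept flag per p value under the Benjamini-Hochberg step-up rule."""
    m = len(pvals)
    # Indices sorted by ascending p value
    order = sorted(range(m), key=lambda i: pvals[i])
    # Largest rank k with p_(k) <= (k / m) * alpha
    max_k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            max_k = rank
    # Reject every hypothesis whose sorted rank is <= max_k
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= max_k:
            reject[i] = True
    return reject

# Invented p values: the first two survive correction, the last two do not
flags = benjamini_hochberg([0.001, 0.02, 0.04, 0.30], alpha=0.05)
```

Note the step-up character: a p value can be rejected even when it exceeds its own threshold, provided a larger-ranked p value passes.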
Citations: 0
Leveraging Large Language Models for Simulated Psychotherapy Client Interactions: Development and Usability Study of Client101.
IF 3.2
JMIR Medical Education Pub Date : 2025-07-31 DOI: 10.2196/68056
Daniel Cabrera Lozoya, Mike Conway, Edoardo Sebastiano De Duro, Simon D'Alfonso
Background: In recent years, large language models (LLMs) have shown a remarkable ability to generate human-like text. One potential application of this capability is using LLMs to simulate clients in a mental health context. This research presents the development and evaluation of Client101, a web conversational platform featuring LLM-driven chatbots designed to simulate mental health clients.

Objective: We aim to develop and test a web-based conversational psychotherapy training tool designed to closely resemble clients with mental health issues.

Methods: We used GPT-4 and prompt engineering techniques to develop chatbots that simulate realistic client conversations. Two chatbots were created based on clinical vignette cases: one representing a person with depression and the other a person with generalized anxiety disorder. A total of 16 mental health professionals were instructed to conduct single sessions with the chatbots using a cognitive behavioral therapy framework; 15 sessions with the anxiety chatbot and 14 with the depression chatbot were completed. After each session, participants completed a 19-question survey assessing the chatbot's ability to simulate the mental health condition and its potential as a training tool. Additionally, we used the LIWC (Linguistic Inquiry and Word Count) tool to analyze the psycholinguistic features of the chatbot conversations related to anxiety and depression. These features were compared with those in a set of webchat psychotherapy sessions with human clients (42 sessions related to anxiety and 47 related to depression) using an independent samples t test.

Results: Participants' survey responses were predominantly positive regarding the chatbots' realism and portrayal of mental health conditions. For instance, 93% (14/15) considered that the chatbot provided a coherent and convincing narrative typical of someone with an anxiety condition. Statistical analysis of LIWC psycholinguistic features revealed significant differences between chatbot and human therapy transcripts for 3 of 8 anxiety-related features: negations (t56=4.03, P=.001), family (t56=-8.62, P=.001), and negative emotions (t56=-3.91, P=.002). The remaining 5 features (sadness, personal pronouns, present focus, social, and anger) did not show significant differences. For depression-related features, 4 of 9 showed significant differences: negative emotions (t60=-3.84, P=.003), feeling (t60=-6.40, P<.001), health (t60=-4.13, P=.001), and illness (t60=-5.52, P<.001). The other 5 features (sadness, anxiety, mental, first-person pronouns, and discrepancy) did not show statistically significant differences.

Conclusions: This research underscores both the strengths and limitations of using GPT-4-powered chatbots as tools for psychotherapy training. Participant feedback suggests that the chatbots effectively portray mental health …

JMIR Medical Education. 2025;11:e68056. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12312989/pdf/
Citations: 0
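The Client101 abstract above compares LIWC feature values between chatbot and human sessions with an independent samples t test. As a minimal illustration of that statistic only (not the authors' analysis code, and with invented sample values), a pooled-variance Student's t can be computed in pure Python:

```python
from statistics import mean, variance


def students_t(sample_a, sample_b):
    """Independent-samples Student's t test (pooled variance).

    Returns the t statistic and degrees of freedom (na + nb - 2).
    """
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    # Pooled sample variance across both groups (variance() uses n-1)
    sp2 = ((na - 1) * variance(sample_a) + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    t = (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2


# Hypothetical per-session LIWC feature scores, one list per group
t, df = students_t([1, 2, 3], [4, 5, 6])  # t ≈ -3.674, df = 4
```

In practice one would use `scipy.stats.ttest_ind`, which also returns the P value; the hand-rolled version above just makes the pooled-variance arithmetic behind the reported t values explicit.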
Resident Physician Recognition of Tachypnea in Clinical Simulation Videos in Japan: Cross-Sectional Study. 日本临床模拟视频中住院医师对呼吸急促的识别:横断面研究。
IF 3.2
JMIR Medical Education Pub Date : 2025-07-31 DOI: 10.2196/72640
Kiyoshi Shikino, Yuji Nishizaki, Sho Fukui, Koshi Kataoka, Daiki Yokokawa, Taro Shimizu, Yu Yamamoto, Kazuya Nagasaki, Hiroyuki Kobayashi, Yasuharu Tokuda
{"title":"Resident Physician Recognition of Tachypnea in Clinical Simulation Videos in Japan: Cross-Sectional Study.","authors":"Kiyoshi Shikino, Yuji Nishizaki, Sho Fukui, Koshi Kataoka, Daiki Yokokawa, Taro Shimizu, Yu Yamamoto, Kazuya Nagasaki, Hiroyuki Kobayashi, Yasuharu Tokuda","doi":"10.2196/72640","DOIUrl":"10.2196/72640","url":null,"abstract":"<p><strong>Background: </strong>Traditional assessments of clinical competence using multiple-choice questions (MCQs) have limitations in the evaluation of real-world diagnostic abilities. As such, recognizing non-verbal cues, like tachypnea, is crucial for accurate diagnosis and effective patient care.</p><p><strong>Objective: </strong>This study aimed to evaluate how detecting such cues impacts the clinical competence of resident physicians by using a clinical simulation video integrated into the General Medicine In-Training Examination (GM-ITE).</p><p><strong>Methods: </strong>This multicenter cross-sectional study enrolled first- and second-year resident physicians who participated in the GM-ITE 2022. Participants watched a 5-minute clinical simulation video depicting a patient with acute pulmonary thromboembolism, and subsequently answered diagnostic questions. Propensity score matching was applied to create balanced groups of resident physicians who detected tachypnea (ie, the detection group) and those who did not (ie, the non-detection group). After matching, we compared the GM-ITE scores and the proportion of correct clinical simulation video answers between the two groups. Subgroup analyses assessed the consistency between results.</p><p><strong>Results: </strong>In total, 5105 resident physicians were included, from which 959 propensity score-matched pairs were identified. Covariates were well balanced between the detection and non-detection groups (standardized mean difference <0.1 for all variables). 
Post-matching, the detection group achieved significantly higher GM-ITE scores (mean [SD], 47.6 [8.4]) than the non-detection group (mean [SD], 45.7 [8.1]; mean difference, 1.9; 95% CI, 1.1-2.6; P=.041). The proportion of correct clinical simulation video answers was also significantly higher in the detection group (39.2% vs 3.0%; mean difference, 36.2%; 95% CI, 32.8-39.4). Subgroup analyses confirmed consistent results across sex, postgraduate years, and age groups.</p><p><strong>Conclusions: </strong>Overall, this study revealed that detecting non-verbal cues like tachypnea significantly affects clinical competence, as evidenced by higher GM-ITE scores among resident physicians. Integrating video-based simulations into traditional MCQ examinations enhances the assessment of diagnostic skills by providing a more comprehensive evaluation of clinical abilities. Thus, recognizing non-verbal cues is crucial for clinical competence. Video-based simulations offer a valuable addition to traditional knowledge assessments by improving the diagnostic skills and preparedness of clinicians.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e72640"},"PeriodicalIF":3.2,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12313080/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144761576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
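The tachypnea study above builds its detection and non-detection groups by propensity score matching. One common way to form such 1:1 pairs is greedy nearest-neighbor matching without replacement; the sketch below is an assumed illustration of that general technique, not the authors' procedure — the `caliper` value, the greedy ordering, and all scores are hypothetical:

```python
def greedy_match(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    `treated` and `control` map subject IDs to propensity scores.
    Each treated subject is paired with the closest still-unmatched
    control whose score lies within `caliper`; controls are used
    without replacement. Returns a list of (treated_id, control_id).
    """
    pairs = []
    available = dict(control)  # copy: id -> score, shrinks as controls are used
    for tid, ts in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - ts))
        if abs(available[cid] - ts) <= caliper:
            pairs.append((tid, cid))
            del available[cid]
    return pairs


# Hypothetical propensity scores for two treated and three control subjects
pairs = greedy_match(
    {"t1": 0.30, "t2": 0.70},
    {"c1": 0.32, "c2": 0.69, "c3": 0.95},
)  # [("t1", "c1"), ("t2", "c2")]
```

Greedy matching is order-dependent (here, treated subjects are processed in ascending score order), which is why production analyses often prefer optimal matching or dedicated packages; the sketch only shows the pairing idea behind the study's 959 matched pairs.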