{"title":"Topics and Trends of Health Informatics Education Research: Scientometric Analysis.","authors":"Qing Han","doi":"10.2196/58165","DOIUrl":"10.2196/58165","url":null,"abstract":"<p><strong>Background: </strong>Academic and educational institutions are making significant contributions toward training health informatics professionals. As research in health informatics education (HIE) continues to grow, it is useful to have a clearer understanding of this research field.</p><p><strong>Objective: </strong>This study aims to comprehensively explore the research topics and trends of HIE from 2014 to 2023. Specifically, it aims to explore (1) the trends of annual articles, (2) the prolific countries/regions, institutions, and publication sources, (3) the scientific collaborations of countries/regions and institutions, and (4) the major research themes and their developmental tendencies.</p><p><strong>Methods: </strong>Using publications in Web of Science Core Collection, a scientometric analysis of 575 articles related to the field of HIE was conducted. The structural topic model was used to identify topics discussed in the literature and to reveal the topic structure and evolutionary trends of HIE research.</p><p><strong>Results: </strong>Research interest in HIE has clearly increased from 2014 to 2023, and is continually expanding. The United States was found to be the most prolific country in this field. Harvard University was found to be the leading institution with the highest publication productivity. Journal of Medical Internet Research, Journal of The American Medical Informatics Association, and Applied Clinical Informatics were the top 3 journals with the highest articles in this field. Countries/regions and institutions having higher levels of international collaboration were more impactful. Research on HIE could be modeled into 7 topics related to the following areas: clinical (130/575, 22.6%), mobile application (123/575, 21.4%), consumer (99/575, 17.2%), teaching (61/575, 10.6%), public health (56/575, 9.7%), discipline (55/575, 9.6%), and nursing (51/575, 8.9%). The results clearly indicate the unique foci for each year, depicting the process of development for health informatics research.</p><p><strong>Conclusions: </strong>This is believed to be the first scientometric analysis exploring the research topics and trends in HIE. This study provides useful insights and implications, and the findings could be used as a guide for HIE contributors.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e58165"},"PeriodicalIF":3.2,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11669873/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ChatGPT May Improve Access to Language-Concordant Care for Patients With Non-English Language Preferences.","authors":"Fiatsogbe Dzuali, Kira Seiger, Roberto Novoa, Maria Aleshin, Joyce Teng, Jenna Lester, Roxana Daneshjou","doi":"10.2196/51435","DOIUrl":"10.2196/51435","url":null,"abstract":"<p><strong>Unlabelled: </strong>This study evaluated the accuracy of ChatGPT in translating English patient education materials into Spanish, Mandarin, and Russian. While ChatGPT shows promise for translating Spanish and Russian medical information, Mandarin translations require further refinement, highlighting the need for careful review of AI-generated translations before clinical use.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e51435"},"PeriodicalIF":3.2,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11651640/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142829563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of a Computer-Based Morphological Analysis Method for Free-Text Responses in the General Medicine In-Training Examination: Algorithm Validation Study.","authors":"Daiki Yokokawa, Kiyoshi Shikino, Yuji Nishizaki, Sho Fukui, Yasuharu Tokuda","doi":"10.2196/52068","DOIUrl":"10.2196/52068","url":null,"abstract":"<p><strong>Background: </strong>The General Medicine In-Training Examination (GM-ITE) tests clinical knowledge in a 2-year postgraduate residency program in Japan. In the academic year 2021, as a domain of medical safety, the GM-ITE included questions regarding the diagnosis from medical history and physical findings through video viewing and the skills in presenting a case. Examinees watched a video or audio recording of a patient examination and provided free-text responses. However, the human cost of scoring free-text answers may limit the implementation of GM-ITE. A simple morphological analysis and word-matching model, thus, can be used to score free-text responses.</p><p><strong>Objective: </strong>This study aimed to compare human versus computer scoring of free-text responses and qualitatively evaluate the discrepancies between human- and machine-generated scores to assess the efficacy of machine scoring.</p><p><strong>Methods: </strong>After obtaining consent for participation in the study, the authors used text data from residents who voluntarily answered the GM-ITE patient reproduction video-based questions involving simulated patients. The GM-ITE used video-based questions to simulate a patient's consultation in the emergency room with a diagnosis of pulmonary embolism following a fracture. Residents provided statements for the case presentation. We obtained human-generated scores by collating the results of 2 independent scorers and machine-generated scores by converting the free-text responses into a word sequence through segmentation and morphological analysis and matching them with a prepared list of correct answers in 2022.</p><p><strong>Results: </strong>Of the 104 responses collected-63 for postgraduate year 1 and 41 for postgraduate year 2-39 cases remained for final analysis after excluding invalid responses. The authors found discrepancies between human and machine scoring in 14 questions (7.2%); some were due to shortcomings in machine scoring that could be resolved by maintaining a list of correct words and dictionaries, whereas others were due to human error.</p><p><strong>Conclusions: </strong>Machine scoring is comparable to human scoring. It requires a simple program and calibration but can potentially reduce the cost of scoring free-text responses.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e52068"},"PeriodicalIF":3.2,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637224/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of GPT-3.5 and GPT-4 on the Korean Pharmacist Licensing Examination: Comparison Study.","authors":"Hye Kyung Jin, EunYoung Kim","doi":"10.2196/57451","DOIUrl":"10.2196/57451","url":null,"abstract":"<p><strong>Background: </strong>ChatGPT, a recently developed artificial intelligence chatbot and a notable large language model, has demonstrated improved performance on medical field examinations. However, there is currently little research on its efficacy in languages other than English or in pharmacy-related examinations.</p><p><strong>Objective: </strong>This study aimed to evaluate the performance of GPT models on the Korean Pharmacist Licensing Examination (KPLE).</p><p><strong>Methods: </strong>We evaluated the percentage of correct answers provided by 2 different versions of ChatGPT (GPT-3.5 and GPT-4) for all multiple-choice single-answer KPLE questions, excluding image-based questions. In total, 320, 317, and 323 questions from the 2021, 2022, and 2023 KPLEs, respectively, were included in the final analysis, which consisted of 4 units: Biopharmacy, Industrial Pharmacy, Clinical and Practical Pharmacy, and Medical Health Legislation.</p><p><strong>Results: </strong>The 3-year average percentage of correct answers was 86.5% (830/960) for GPT-4 and 60.7% (583/960) for GPT-3.5. GPT model accuracy was highest in Biopharmacy (GPT-3.5 77/96, 80.2% in 2022; GPT-4 87/90, 96.7% in 2021) and lowest in Medical Health Legislation (GPT-3.5 8/20, 40% in 2022; GPT-4 12/20, 60% in 2022). Additionally, when comparing the performance of artificial intelligence with that of human participants, pharmacy students outperformed GPT-3.5 but not GPT-4.</p><p><strong>Conclusions: </strong>In the last 3 years, GPT models have performed very close to or exceeded the passing threshold for the KPLE. This study demonstrates the potential of large language models in the pharmacy domain; however, extensive research is needed to evaluate their reliability and ensure their secure application in pharmacy contexts due to several inherent challenges. Addressing these limitations could make GPT models more effective auxiliary tools for pharmacy education.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e57451"},"PeriodicalIF":3.2,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142773237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Practical Recommendations for Navigating Digital Tools in Hospitals: Qualitative Interview Study.","authors":"Marie Wosny, Livia Maria Strasser, Simone Kraehenmann, Janna Hastings","doi":"10.2196/60031","DOIUrl":"10.2196/60031","url":null,"abstract":"<p><strong>Background: </strong>The digitalization of health care organizations is an integral part of a clinician's daily life, making it vital for health care professionals (HCPs) to understand and effectively use digital tools in hospital settings. However, clinicians often express a lack of preparedness for their digital work environments. Particularly, new clinical end users, encompassing medical and nursing students, seasoned professionals transitioning to new health care environments, and experienced practitioners encountering new health care technologies, face critically intense learning periods, often with a lack of adequate time for learning digital tools, resulting in difficulties in integrating and adopting these digital tools into clinical practice.</p><p><strong>Objective: </strong>This study aims to comprehensively collect advice from experienced HCPs in Switzerland to guide new clinical end users on how to initiate their engagement with health ITs within hospital settings.</p><p><strong>Methods: </strong>We conducted qualitative interviews with 52 HCPs across Switzerland, representing 24 medical specialties from 14 hospitals. The interviews were transcribed verbatim and analyzed through inductive thematic analysis. Codes were developed iteratively, and themes and aggregated dimensions were refined through collaborative discussions.</p><p><strong>Results: </strong>Ten themes emerged from the interview data, namely (1) digital tool understanding, (2) peer-based learning strategies, (3) experimental learning approaches, (4) knowledge exchange and support, (5) training approaches, (6) proactive innovation, (7) an adaptive technology mindset, (8) critical thinking approaches, (9) dealing with emotions, and (10) empathy and human factors. Consequently, we devised 10 recommendations with specific advice to new clinical end users on how to approach new health care technologies, encompassing the following: take time to get to know and understand the tools you are working with; proactively ask experienced colleagues; simply try it out and practice; know where to get help and information; take sufficient training; embrace curiosity and pursue innovation; maintain an open and adaptable mindset; keep thinking critically and use your knowledge base; overcome your fears, and never lose the human and patient focus.</p><p><strong>Conclusions: </strong>Our study emphasized the importance of comprehensive training and learning approaches for health care technologies based on the advice and recommendations of experienced HCPs based in Swiss hospitals. Moreover, these recommendations have implications for medical educators and clinical instructors, providing advice on effective methods to instruct and support new end users, enabling them to use novel technologies proficiently. 
Therefore, we advocate for new clinical end users, health care institutions and clinical instructors, academic institutions and medical educators, and regulatory bodies to prior","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e60031"},"PeriodicalIF":3.2,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11635325/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142733224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Comparison of Junior Residents and ChatGPT in the Objective Structured Clinical Examination (OSCE) for Medical History Taking and Documentation of Medical Records: Development and Usability Study.","authors":"Ting-Yun Huang, Pei Hsing Hsieh, Yung-Chun Chang","doi":"10.2196/59902","DOIUrl":"10.2196/59902","url":null,"abstract":"<p><strong>Background: </strong>This study explores the cutting-edge abilities of large language models (LLMs) such as ChatGPT in medical history taking and medical record documentation, with a focus on their practical effectiveness in clinical settings-an area vital for the progress of medical artificial intelligence.</p><p><strong>Objective: </strong>Our aim was to assess the capability of ChatGPT versions 3.5 and 4.0 in performing medical history taking and medical record documentation in simulated clinical environments. The study compared the performance of nonmedical individuals using ChatGPT with that of junior medical residents.</p><p><strong>Methods: </strong>A simulation involving standardized patients was designed to mimic authentic medical history-taking interactions. Five nonmedical participants used ChatGPT versions 3.5 and 4.0 to conduct medical histories and document medical records, mirroring the tasks performed by 5 junior residents in identical scenarios. A total of 10 diverse scenarios were examined.</p><p><strong>Results: </strong>Evaluation of the medical documentation created by laypersons with ChatGPT assistance and those created by junior residents was conducted by 2 senior emergency physicians using audio recordings and the final medical records. The assessment used the Objective Structured Clinical Examination benchmarks in Taiwan as a reference. ChatGPT-4.0 exhibited substantial enhancements over its predecessor and met or exceeded the performance of human counterparts in terms of both checklist and global assessment scores. Although the overall quality of human consultations remained higher, ChatGPT-4.0's proficiency in medical documentation was notably promising.</p><p><strong>Conclusions: </strong>The performance of ChatGPT 4.0 was on par with that of human participants in Objective Structured Clinical Examination evaluations, signifying its potential in medical history and medical record documentation. Despite this, the superiority of human consultations in terms of quality was evident. The study underscores both the promise and the current limitations of LLMs in the realm of clinical practice.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e59902"},"PeriodicalIF":3.2,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612517/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142773235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging Open-Source Large Language Models for Data Augmentation in Hospital Staff Surveys: Mixed Methods Study.","authors":"Carl Ehrett, Sudeep Hegde, Kwame Andre, Dixizi Liu, Timothy Wilson","doi":"10.2196/51433","DOIUrl":"10.2196/51433","url":null,"abstract":"<p><strong>Background: </strong>Generative large language models (LLMs) have the potential to revolutionize medical education by generating tailored learning materials, enhancing teaching efficiency, and improving learner engagement. However, the application of LLMs in health care settings, particularly for augmenting small datasets in text classification tasks, remains underexplored, particularly for cost- and privacy-conscious applications that do not permit the use of third-party services such as OpenAI's ChatGPT.</p><p><strong>Objective: </strong>This study aims to explore the use of open-source LLMs, such as Large Language Model Meta AI (LLaMA) and Alpaca models, for data augmentation in a specific text classification task related to hospital staff surveys.</p><p><strong>Methods: </strong>The surveys were designed to elicit narratives of everyday adaptation by frontline radiology staff during the initial phase of the COVID-19 pandemic. A 2-step process of data augmentation and text classification was conducted. The study generated synthetic data similar to the survey reports using 4 generative LLMs for data augmentation. A different set of 3 classifier LLMs was then used to classify the augmented text for thematic categories. The study evaluated performance on the classification task.</p><p><strong>Results: </strong>The overall best-performing combination of LLMs, temperature, classifier, and number of synthetic data cases is via augmentation with LLaMA 7B at temperature 0.7 with 100 augments, using Robustly Optimized BERT Pretraining Approach (RoBERTa) for the classification task, achieving an average area under the receiver operating characteristic (AUC) curve of 0.87 (SD 0.02; ie, 1 SD). The results demonstrate that open-source LLMs can enhance text classifiers' performance for small datasets in health care contexts, providing promising pathways for improving medical education processes and patient care practices.</p><p><strong>Conclusions: </strong>The study demonstrates the value of data augmentation with open-source LLMs, highlights the importance of privacy and ethical considerations when using LLMs, and suggests future directions for research in this field.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e51433"},"PeriodicalIF":3.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11590755/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Reality Simulation in Undergraduate Health Care Education Programs: Usability Study.","authors":"Gry Mørk, Tore Bonsaksen, Ole Sønnik Larsen, Hans Martin Kunnikoff, Silje Stangeland Lie","doi":"10.2196/56844","DOIUrl":"10.2196/56844","url":null,"abstract":"<p><strong>Background: </strong>Virtual reality (VR) is increasingly being used in higher education for clinical skills training and role-playing among health care students. Using 360° videos in VR headsets, followed by peer debrief and group discussions, may strengthen students' social and emotional learning.</p><p><strong>Objective: </strong>This study aimed to explore student-perceived usability of VR simulation in three health care education programs in Norway.</p><p><strong>Methods: </strong>Students from one university participated in a VR simulation program. Of these, students in social education (n=74), nursing (n=45), and occupational therapy (n=27) completed a questionnaire asking about their perceptions of the usability of the VR simulation and the related learning activities. Differences between groups of students were examined with Pearson chi-square tests and with 1-way ANOVA. Qualitative content analysis was used to analyze data from open-ended questions.</p><p><strong>Results: </strong>The nursing students were most satisfied with the usability of the VR simulation, while the occupational therapy students were least satisfied. The nursing students had more often prior experience from using VR technology (60%), while occupational therapy students less often had prior experience (37%). Nevertheless, high mean scores indicated that the students experienced the VR simulation and the related learning activities as very useful. The results also showed that by using realistic scenarios in VR simulation, health care students can be prepared for complex clinical situations in a safe environment. Also, group debriefing sessions are a vital part of the learning process that enhance active involvement with peers.</p><p><strong>Conclusions: </strong>VR simulation has promise and potential as a pedagogical tool in health care education, especially for training soft skills relevant for clinical practice, such as communication, decision-making, time management, and critical thinking.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e56844"},"PeriodicalIF":3.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11615562/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using ChatGPT in Nursing: Scoping Review of Current Opinions.","authors":"You Zhou, Si-Jia Li, Xing-Yi Tang, Yi-Chen He, Hao-Ming Ma, Ao-Qi Wang, Run-Yuan Pei, Mei-Hua Piao","doi":"10.2196/54297","DOIUrl":"10.2196/54297","url":null,"abstract":"<p><strong>Background: </strong>Since the release of ChatGPT in November 2022, this emerging technology has garnered a lot of attention in various fields, and nursing is no exception. However, to date, no study has comprehensively summarized the status and opinions of using ChatGPT across different nursing fields.</p><p><strong>Objective: </strong>We aim to synthesize the status and opinions of using ChatGPT according to different nursing fields, as well as assess ChatGPT's strengths, weaknesses, and the potential impacts it may cause.</p><p><strong>Methods: </strong>This scoping review was conducted following the framework of Arksey and O'Malley and guided by the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews). A comprehensive literature research was conducted in 4 web-based databases (PubMed, Embase, Web of Science, and CINHAL) to identify studies reporting the opinions of using ChatGPT in nursing fields from 2022 to September 3, 2023. The references of the included studies were screened manually to further identify relevant studies. Two authors conducted studies screening, eligibility assessments, and data extraction independently.</p><p><strong>Results: </strong>A total of 30 studies were included. The United States (7 studies), Canada (5 studies), and China (4 studies) were countries with the most publications. In terms of fields of concern, studies mainly focused on \"ChatGPT and nursing education\" (20 studies), \"ChatGPT and nursing practice\" (10 studies), and \"ChatGPT and nursing research, writing, and examination\" (6 studies). Six studies addressed the use of ChatGPT in multiple nursing fields.</p><p><strong>Conclusions: </strong>As an emerging artificial intelligence technology, ChatGPT has great potential to revolutionize nursing education, nursing practice, and nursing research. However, researchers, institutions, and administrations still need to critically examine its accuracy, safety, and privacy, as well as academic misconduct and potential ethical issues that it may lead to before applying ChatGPT to practice.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e54297"},"PeriodicalIF":3.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11611787/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142773238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction: Psychological Safety Competency Training During the Clinical Internship From the Perspective of Health Care Trainee Mentors in 11 Pan-European Countries: Mixed Methods Observational Study.","authors":"Irene Carrillo, Ivana Skoumalová, Ireen Bruus, Victoria Klemm, Sofia Guerra-Paiva, Bojana Knežević, Augustina Jankauskiene, Dragana Jocic, Susanna Tella, Sandra C Buttigieg, Einav Srulovici, Andrea Madarasová Gecková, Kaja Põlluste, Reinhard Strametz, Paulo Sousa, Marina Odalovic, José Joaquín Mira","doi":"10.2196/68503","DOIUrl":"10.2196/68503","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.2196/64125.].</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e68503"},"PeriodicalIF":3.2,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11632886/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142639999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}