{"title":"Training satisfaction and future employment consideration among physician and nursing trainees at rural Veterans Affairs facilities in the United States during COVID-19: a time-series before and after study","authors":"Heather Northcraft, Tiffany Radcliff, Anne Reid Griffin, Jia Bai, Aram Dobalian","doi":"10.3352/jeehp.2024.21.25","DOIUrl":"10.3352/jeehp.2024.21.25","url":null,"abstract":"<p><strong>Purpose: </strong>The coronavirus disease 2019 (COVID-19) pandemic limited healthcare professional education and training opportunities in rural communities. Because the US Department of Veterans Affairs (VA) has robust programs to train clinicians in the United States, this study examined VA trainee perspectives regarding pandemic-related training in rural and urban areas and interest in future employment with the VA.</p><p><strong>Methods: </strong>Survey responses were collected nationally from VA physicians and nursing trainees before and after COVID-19 (2018 to 2021). Logistic regression models were used to test the association between pandemic timing (pre-pandemic or pandemic), trainee program (physician or nurse), and the interaction of trainee pandemic timing and program on VA trainee satisfaction and trainee likelihood to consider future VA employment in rural and urban areas.</p><p><strong>Results: </strong>While physician trainees at urban facilities reported decreases in overall training satisfaction and corresponding decreases in the likelihood of considering future VA employment from pre-pandemic to pandemic, rural physician trainees showed no changes in either outcome. 
In contrast, while nursing trainees at both urban and rural sites had decreases in training satisfaction associated with the pandemic, there was no corresponding effect on the likelihood of future employment by nurses at either urban or rural VA sites.</p><p><strong>Conclusion: </strong>The study’s findings suggest differences in the training experiences of physicians and nurses at rural sites, as well as between physician trainees at urban and rural sites. Understanding these nuances can inform the development of targeted approaches to address the ongoing provider shortages that rural communities in the United States are facing.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"25"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11528153/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142356105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effectiveness of ChatGPT-4o in developing continuing professional development plans for graduate radiographers: a descriptive study","authors":"Minh Chau, Elio Stefan Arruzza, Kelly Spuur","doi":"10.3352/jeehp.2024.21.34","DOIUrl":"10.3352/jeehp.2024.21.34","url":null,"abstract":"<p><strong>Purpose: </strong>This study evaluates the use of ChatGPT-4o in creating tailored continuing professional development (CPD) plans for radiography students, addressing the challenge of aligning CPD with Medical Radiation Practice Board of Australia (MRPBA) requirements. We hypothesized that ChatGPT-4o could support students in CPD planning while meeting regulatory standards.</p><p><strong>Methods: </strong>A descriptive, experimental design was used to generate 3 unique CPD plans using ChatGPT-4o, each tailored to hypothetical graduate radiographers in varied clinical settings. Each plan followed MRPBA guidelines, focusing on computed tomography specialization by the second year. Three MRPBA-registered academics assessed the plans using criteria of appropriateness, timeliness, relevance, reflection, and completeness from October 2024 to November 2024. Ratings underwent analysis using the Friedman test and intraclass correlation coefficient (ICC) to measure consistency among evaluators.</p><p><strong>Results: </strong>ChatGPT-4o generated CPD plans generally adhered to regulatory standards across scenarios. The Friedman test indicated no significant differences among raters (P=0.420, 0.761, and 0.807 for each scenario), suggesting consistent scores within scenarios. 
However, ICC values were low (–0.96, 0.41, and 0.058 for scenarios 1, 2, and 3), revealing variability among raters, particularly in the timeliness and completeness criteria, suggesting limitations in ChatGPT-4o’s ability to address individualized and context-specific needs.</p><p><strong>Conclusion: </strong>ChatGPT-4o demonstrates the potential to ease the cognitive demands of CPD planning, offering structured support in CPD development. However, human oversight remains essential to ensure plans are contextually relevant and deeply reflective. Future research should focus on enhancing artificial intelligence’s personalization for CPD evaluation, highlighting ChatGPT-4o’s potential and limitations as a tool in professional education.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"34"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637979/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142648699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new performance evaluation indicator for the LEE Jong-wook Fellowship Program of Korea Foundation for International Healthcare to better assess its long-term educational impacts: a Delphi study.","authors":"Minkyung Oh, Bo Young Yoon","doi":"10.3352/jeehp.2024.21.27","DOIUrl":"10.3352/jeehp.2024.21.27","url":null,"abstract":"<p><strong>Purpose: </strong>The Dr. LEE Jong-wook Fellowship Program, established by the Korea Foundation for International Healthcare (KOFIH), aims to strengthen healthcare capacity in partner countries. The aim of the study was to develop new performance evaluation indicators for the program to better assess long-term educational impact across various courses and professional roles.</p><p><strong>Methods: </strong>A 3-stage process was employed. First, a literature review of established evaluation models (Kirkpatrick’s 4 levels, context/input/process/product evaluation model, Organization for Economic Cooperation and Development Assistance Committee criteria) was conducted to devise evaluation criteria. Second, these criteria were validated via a 2-round Delphi survey with 18 experts in training projects from May 2021 to June 2021. Third, the relative importance of the evaluation criteria was determined using the analytic hierarchy process (AHP), calculating weights and ensuring consistency through the consistency index and consistency ratio (CR), with CR values below 0.1 indicating acceptable consistency.</p><p><strong>Results: </strong>The literature review led to a combined evaluation model, resulting in 4 evaluation areas, 20 items, and 92 indicators. The Delphi surveys confirmed the validity of these indicators, with content validity ratio values exceeding 0.444. The AHP analysis assigned weights to each indicator, and CR values below 0.1 indicated consistency. 
The final set of evaluation indicators was confirmed through a workshop with KOFIH and adopted as the new evaluation tool.</p><p><strong>Conclusion: </strong>The developed evaluation framework provides a comprehensive tool for assessing the long-term outcomes of the Dr. LEE Jong-wook Fellowship Program. It enhances evaluation capabilities and supports improvements in the training program’s effectiveness and international healthcare collaboration.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"27"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11535579/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142366885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study","authors":"Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee","doi":"10.3352/jeehp.2024.21.8","DOIUrl":"10.3352/jeehp.2024.21.8","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to identify challenges and potential improvements in Korea’s medical education accreditation process according to the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019). Meta-evaluation was conducted to survey the experiences and perceptions of stakeholders, including self-assessment committee members, site visit committee members, administrative staff, and medical school professors.</p><p><strong>Methods: </strong>A cross-sectional study was conducted using surveys sent to 40 medical schools. The 332 participants included self-assessment committee members, site visit team members, administrative staff, and medical school professors. The t-test, one-way analysis of variance and the chi-square test were used to analyze and compare opinions on medical education accreditation between the categories of participants.</p><p><strong>Results: </strong>Site visit committee members placed greater importance on the necessity of accreditation than faculty members. A shared positive view on accreditation’s role in improving educational quality was seen among self-evaluation committee members and professors. Administrative staff highly regarded the Korean Institute of Medical Education and Evaluation’s reliability and objectivity, unlike the self-evaluation committee members. Site visit committee members positively perceived the clarity of accreditation standards, differing from self-assessment committee members. 
Administrative staff were most optimistic about implementing the standards. However, the accreditation process encountered challenges, especially content duplication in preparing self-evaluation reports. Finally, perceptions regarding the accuracy of final site visit reports varied significantly between the self-evaluation committee members and the site visit committee members.</p><p><strong>Conclusion: </strong>This study revealed diverse views on medical education accreditation, highlighting the need for improved communication, expectation alignment, and stakeholder collaboration to refine the accreditation process and quality.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"8"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11108703/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140337062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study.","authors":"Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman","doi":"10.3352/jeehp.2024.21.17","DOIUrl":"https://doi.org/10.3352/jeehp.2024.21.17","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT) with respect to standardized urology multiple-choice items in the United States.</p><p><strong>Methods: </strong>In total, 700 multiple-choice urology board exam-style items were submitted to GPT-3.5 and GPT-4, and responses were recorded. Items were categorized based on topic and question complexity (recall, interpretation, and problem-solving). The accuracy of GPT-3.5 and GPT-4 was compared across item types in February 2024.</p><p><strong>Results: </strong>GPT-4 answered 44.4% of items correctly compared to 30.9% for GPT-3.5 (P<0.0001). GPT-4 (vs. GPT-3.5) had higher accuracy with urologic oncology (43.8% vs. 33.9%, P=0.03), sexual medicine (44.3% vs. 27.8%, P=0.046), and pediatric urology (47.1% vs. 27.1%, P=0.012) items. Endourology (38.0% vs. 25.7%, P=0.15), reconstruction and trauma (29.0% vs. 21.0%, P=0.41), and neurourology (49.0% vs. 33.3%, P=0.11) items did not show significant differences in performance across versions. GPT-4 also outperformed GPT-3.5 with respect to recall (45.9% vs. 27.4%, P<0.00001) and interpretation (45.6% vs. 31.5%, P=0.0005) type items, but not problem-solving items (41.8% vs. 34.5%, P=0.56); the difference was not significant for these higher-complexity items.</p><p><strong>Conclusion: </strong>ChatGPT performs relatively poorly on standardized multiple-choice urology board exam-style items, with GPT-4 outperforming GPT-3.5. 
The accuracy was below the proposed minimum passing standards for the American Board of Urology's Continuing Urologic Certification knowledge reinforcement activity (60%). As artificial intelligence progresses in complexity, ChatGPT may become more capable and accurate with respect to board examination items. For now, its responses should be scrutinized.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"17"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141560038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study.","authors":"Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen","doi":"10.3352/jeehp.2024.21.24","DOIUrl":"10.3352/jeehp.2024.21.24","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the effect of simulation-based training on nursing students’ problem-solving skills, critical thinking skills, and self-efficacy.</p><p><strong>Methods: </strong>A single-group pretest and posttest study was conducted among 173 second-year nursing students at a public university in Vietnam from May 2021 to July 2022. Each student participated in the adult nursing preclinical practice course, which utilized a moderate-fidelity simulation teaching approach. Instruments including the Personal Problem-Solving Inventory Scale, Critical Thinking Skills Questionnaire, and General Self-Efficacy Questionnaire were employed to measure participants’ problem-solving skills, critical thinking skills, and self-efficacy. Data were analyzed using descriptive statistics and the paired-sample t-test with the significance level set at P<0.05.</p><p><strong>Results: </strong>The mean score of the Personal Problem-Solving Inventory posttest (127.24±12.11) was lower than the pretest score (131.42±16.95); because lower scores on this inventory indicate stronger perceived problem-solving, this suggests an improvement in the problem-solving skills of the participants (t172=2.55, P=0.011). There was no statistically significant difference in critical thinking skills between the pretest and posttest (P=0.854). Self-efficacy among nursing students showed a substantial increase from the pretest (27.91±5.26) to the posttest (28.71±3.81), with t172=-2.26 and P=0.025.</p><p><strong>Conclusion: </strong>The results suggest that simulation-based training can improve problem-solving skills and increase self-efficacy among nursing students. 
Therefore, the integration of simulation-based training in nursing education is recommended.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"24"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11480641/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review","authors":"Xiaojun Xu, Yixiao Chen, Jing Miao","doi":"10.3352/jeehp.2024.21.6","DOIUrl":"10.3352/jeehp.2024.21.6","url":null,"abstract":"<p><strong>Background: </strong>ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.</p><p><strong>Methods: </strong>A scoping review was conducted for English articles discussing ChatGPT in the context of medical education published after 2022. A literature search was performed using PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.</p><p><strong>Results: </strong>ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.</p><p><strong>Conclusion: </strong>ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. 
To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"6"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11035906/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140132845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis.","authors":"Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos","doi":"10.3352/jeehp.2024.21.5","DOIUrl":"10.3352/jeehp.2024.21.5","url":null,"abstract":"<p><strong>Purpose: </strong>We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.</p><p><strong>Methods: </strong>This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.</p><p><strong>Results: </strong>Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.</p><p><strong>Conclusion: </strong>Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. 
Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"5"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948917/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139984162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of virtual and in-person simulations for sepsis and trauma resuscitation training in Singapore: a randomized controlled trial","authors":"Matthew Jian Wen Low, Gene Wai Han Chan, Zisheng Li, Yiwen Koh, Chi Loong Jen, Zi Yao Lee, Lenard Tai Win Cheng","doi":"10.3352/jeehp.2024.21.33","DOIUrl":"10.3352/jeehp.2024.21.33","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to compare cognitive, non-cognitive, and overall learning outcomes for sepsis and trauma resuscitation skills in novices with virtual patient simulation (VPS) versus in-person simulation (IPS).</p><p><strong>Methods: </strong>A randomized controlled trial was conducted on junior doctors in 1 emergency department from January to December 2022, comparing 70 minutes of VPS (n=19) versus IPS (n=21) in sepsis and trauma resuscitation. Using the nominal group technique, we created skills assessment checklists and determined Bloom’s taxonomy domains for each checklist item. Two blinded raters observed participants leading 1 sepsis and 1 trauma resuscitation simulation. Satisfaction was measured using the Student Satisfaction with Learning Scale (SSLS). The SSLS and checklist scores were analyzed using the Wilcoxon rank sum test and 2-tailed t-test respectively.</p><p><strong>Results: </strong>For sepsis, there was no significant difference between VPS and IPS in overall scores (2.0; 95% confidence interval [CI], -1.4 to 5.4; Cohen’s d=0.38), as well as in items that were cognitive (1.1; 95% CI, -1.5 to 3.7) and not only cognitive (0.9; 95% CI, -0.4 to 2.2). Likewise, for trauma, there was no significant difference in overall scores (-0.9; 95% CI, -4.1 to 2.3; Cohen’s d=0.19), as well as in items that were cognitive (-0.3; 95% CI, -2.8 to 2.1) and not only cognitive (-0.6; 95% CI, -2.4 to 1.3). 
The median SSLS scores were lower with VPS than with IPS (-3.0; 95% CI, -5.0 to -1.0).</p><p><strong>Conclusion: </strong>For novices, there were no major differences in overall and non-cognitive learning outcomes for sepsis and trauma resuscitation between VPS and IPS. Learners were more satisfied with IPS than with VPS (clinicaltrials.gov identifier: NCT05201950).</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"33"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142648693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Presidential address 2024: the expansion of computer-based testing to numerous health professions licensing examinations in Korea, preparation of computer-based practical tests, and adoption of the medical metaverse.","authors":"Hyunjoo Pai","doi":"10.3352/jeehp.2024.21.2","DOIUrl":"10.3352/jeehp.2024.21.2","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"2"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948918/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139906639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}