Journal of Educational Evaluation for Health Professions: latest articles

Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-04-02 DOI: 10.3352/jeehp.2024.21.8
Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee
{"title":"Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study","authors":"Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee","doi":"10.3352/jeehp.2024.21.8","DOIUrl":"10.3352/jeehp.2024.21.8","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to identify challenges and potential improvements in Korea’s medical education accreditation process according to the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019). Meta-evaluation was conducted to survey the experiences and perceptions of stakeholders, including self-assessment committee members, site visit committee members, administrative staff, and medical school professors.</p><p><strong>Methods: </strong>A cross-sectional study was conducted using surveys sent to 40 medical schools. The 332 participants included self-assessment committee members, site visit team members, administrative staff, and medical school professors. The t-test, one-way analysis of variance and the chi-square test were used to analyze and compare opinions on medical education accreditation between the categories of participants.</p><p><strong>Results: </strong>Site visit committee members placed greater importance on the necessity of accreditation than faculty members. A shared positive view on accreditation’s role in improving educational quality was seen among self-evaluation committee members and professors. Administrative staff highly regarded the Korean Institute of Medical Education and Evaluation’s reliability and objectivity, unlike the self-evaluation committee members. Site visit committee members positively perceived the clarity of accreditation standards, differing from self-assessment committee members. Administrative staff were most optimistic about implementing standards. However, the accreditation process encountered challenges, especially in duplicating content and preparing self-evaluation reports. Finally, perceptions regarding the accuracy of final site visit reports varied significantly between the self-evaluation committee members and the site visit committee members.</p><p><strong>Conclusion: </strong>This study revealed diverse views on medical education accreditation, highlighting the need for improved communication, expectation alignment, and stakeholder collaboration to refine the accreditation process and quality.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11108703/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140337062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
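A minimal sketch of the kind of group comparison this abstract describes: a one-way ANOVA on ratings across stakeholder categories and a chi-square test on categorical responses. The variable names and all data below are invented for illustration; they are not the study's data.

```python
# Illustrative group comparisons (one-way ANOVA, chi-square test).
import numpy as np
from scipy import stats

# Hypothetical 5-point ratings of "necessity of accreditation" by group.
self_assessment = np.array([4, 5, 4, 3, 5, 4])
site_visit = np.array([5, 5, 4, 5, 4, 5])
professors = np.array([3, 4, 3, 4, 3, 4])

# One-way ANOVA: do mean ratings differ across the three groups?
f_stat, p_anova = stats.f_oneway(self_assessment, site_visit, professors)
print(f"ANOVA: F={f_stat:.2f}, P={p_anova:.3f}")

# Chi-square test on a hypothetical 2x3 contingency table
# (rows: agree/disagree; columns: stakeholder group).
table = np.array([[20, 28, 15],
                  [10, 4, 17]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
print(f"Chi-square: chi2={chi2:.2f}, df={dof}, P={p_chi2:.3f}")
```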
Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-07-08 DOI: 10.3352/jeehp.2024.21.17
Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman
{"title":"Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study.","authors":"Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman","doi":"10.3352/jeehp.2024.21.17","DOIUrl":"https://doi.org/10.3352/jeehp.2024.21.17","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT) with respect to standardized urology multiple-choice items in the United States.</p><p><strong>Methods: </strong>In total, 700 multiple-choice urology board exam-style items were submitted to GPT-3.5 and GPT-4, and responses were recorded. Items were categorized based on topic and question complexity (recall, interpretation, and problem-solving). The accuracy of GPT-3.5 and GPT-4 was compared across item types in February 2024.</p><p><strong>Results: </strong>GPT-4 answered 44.4% of items correctly compared to 30.9% for GPT-3.5 (P>0.0001). GPT-4 (vs. GPT-3.5) had higher accuracy with urologic oncology (43.8% vs. 33.9%, P=0.03), sexual medicine (44.3% vs. 27.8%, P=0.046), and pediatric urology (47.1% vs. 27.1%, P=0.012) items. Endourology (38.0% vs. 25.7%, P=0.15), reconstruction and trauma (29.0% vs. 21.0%, P=0.41), and neurourology (49.0% vs. 33.3%, P=0.11) items did not show significant differences in performance across versions. GPT-4 also outperformed GPT-3.5 with respect to recall (45.9% vs. 27.4%, P<0.00001), interpretation (45.6% vs. 31.5%, P=0.0005), and problem-solving (41.8% vs. 34.5%, P=0.56) type items. This difference was not significant for the higher-complexity items.</p><p><strong>Conclusion: </strong>s: ChatGPT performs relatively poorly on standardized multiple-choice urology board exam-style items, with GPT-4 outperforming GPT-3.5. The accuracy was below the proposed minimum passing standards for the American Board of Urology's Continuing Urologic Certification knowledge reinforcement activity (60%). As artificial intelligence progresses in complexity, ChatGPT may become more capable and accurate with respect to board examination items. For now, its responses should be scrutinized.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141560038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
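A hedged re-creation of the headline comparison: a chi-square test on correct/incorrect counts for the two models across 700 items each. The counts are back-calculated from the reported percentages, and the choice of test is an assumption, since the abstract does not name the exact procedure used.

```python
# Compare GPT-4 vs. GPT-3.5 accuracy as a 2x2 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

n_items = 700
correct_gpt4 = round(0.444 * n_items)   # 44.4% reported accuracy
correct_gpt35 = round(0.309 * n_items)  # 30.9% reported accuracy

table = np.array([
    [correct_gpt4, n_items - correct_gpt4],
    [correct_gpt35, n_items - correct_gpt35],
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, df={dof}, P={p:.2g}")  # small P: gap unlikely by chance
```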
The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-09-23 DOI: 10.3352/jeehp.2024.21.24
Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen
{"title":"The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study.","authors":"Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen","doi":"10.3352/jeehp.2024.21.24","DOIUrl":"https://doi.org/10.3352/jeehp.2024.21.24","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the effect of simulation-based training on nursing students' problem-solving skills, critical thinking skills, and self-efficacy.</p><p><strong>Methods: </strong>A single-group pretest and posttest study was conducted among 173 second-year nursing students at a public university in Vietnam from May 2021 to July 2022. Each student participated in the adult nursing preclinical practice course, which utilized a moderate-fidelity simulation teaching approach. Instruments including the Personal Problem-Solving Inventory Scale, Critical Thinking Skills Questionnaire, and General Self-Efficacy Questionnaire were employed to measure participants' problem-solving skills, critical thinking skills, and self-efficacy. Data were analyzed using descriptive statistics and the paired-sample t-test with the significance level set at P<0.05.</p><p><strong>Results: </strong>The mean score of the Personal Problem-Solving Inventory posttest (127.24±12.11) was lower than the pretest score (131.42±16.95), suggesting an improvement in the problem-solving skills of the participants (t172=2.55, P=0.011). There was no statistically significant difference in critical thinking skills between the pretest and posttest (P=0.854). Self-efficacy among nursing students showed a substantial increase from the pretest (27.91±5.26) to the posttest (28.71±3.81), with t172=-2.26 and P=0.025.</p><p><strong>Conclusion: </strong>The results suggest that simulation-based training can improve problem-solving skills and increase self-efficacy among nursing students. Therefore, the integration of simulation-based training in nursing education is recommended.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
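A minimal sketch of the pretest/posttest analysis named in the abstract, a paired-sample t-test. The scores are simulated stand-ins matching the reported sample size (n=173), not the study's data.

```python
# Paired-sample t-test on simulated pre/post scores.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 173
pretest = rng.normal(131.4, 17.0, n)            # hypothetical pretest scores
posttest = pretest + rng.normal(-4.2, 10.0, n)  # hypothetical mean drop of ~4.2

# Positive t here means posttest scores fell, which on this inventory
# would indicate improvement (lower = better perceived problem-solving).
t_stat, p_value = ttest_rel(pretest, posttest)
print(f"t({n - 1})={t_stat:.2f}, P={p_value:.3f}")
```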
A new performance evaluation indicator for the LEE Jong-wook Fellowship Program of Korea Foundation for International Healthcare to better assess its long-term educational impacts: a Delphi study.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-10-02 DOI: 10.3352/jeehp.2024.21.27
Minkyung Oh, Bo Young Yoon
{"title":"A new performance evaluation indicator for the LEE Jong-wook Fellowship Program of Korea Foundation for International Healthcare to better assess its long-term educational impacts: a Delphi study.","authors":"Minkyung Oh, Bo Young Yoon","doi":"10.3352/jeehp.2024.21.27","DOIUrl":"https://doi.org/10.3352/jeehp.2024.21.27","url":null,"abstract":"<p><strong>Purpose: </strong>The Dr. LEE Jong-wook Fellowship Program, established by the Korea Foundation for International Healthcare (KOFIH), aims to strengthen healthcare capacity in partner countries. The aim of the study was to develop new performance evaluation indicators for the program to better assess long-term educational impact across various courses and professional roles.</p><p><strong>Methods: </strong>A 3-stage process was employed. First, a literature review of established evaluation models (Kirkpatrick's 4 levels, CIPP model, OECD DAC criteria) was conducted to devise evaluation criteria. Second, these criteria were validated via a 2-round Delphi survey with 18 experts in training projects from May 2021 to June 2021. Third, the relative importance of the evaluation criteria was determined using the analytic hierarchy process (AHP), calculating weights and ensuring consistency through the consistency index (CI) and consistency ratio (CR), with CR values below 0.1 indicating acceptable consistency.</p><p><strong>Results: </strong>The literature review led to a combined evaluation model, resulting in 4 evaluation areas, 20 items, and 92 indicators. The Delphi surveys confirmed the validity of these indicators, with content validity ratio values exceeding 0.444. The AHP analysis assigned weights to each indicator, and CR values below 0.1 indicated consistency. The final set of evaluation indicators was confirmed through a workshop with KFIH and adopted as the new evaluation tool.</p><p><strong>Conclusion: </strong>The developed evaluation framework provides a comprehensive tool for assessing the long-term outcomes of the Dr. LEE Jong-wook Fellowship Program. It enhances evaluation capabilities and supports improvements in the training program's effectiveness and international healthcare collaboration.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142366885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
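A hedged sketch of two computations the abstract names: Lawshe's content validity ratio (CVR), whose 0.444 threshold corresponds to 13 of 18 panelists rating an item essential, and AHP weights with a consistency ratio. The pairwise comparison matrix below is invented for illustration.

```python
# CVR and AHP weight/consistency computation (illustrative values).
import numpy as np

def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR = (ne - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

print(f"CVR={cvr(13, 18):.3f}")  # ~0.444, the threshold for an 18-expert panel

# AHP: weights are the normalized principal eigenvector of the pairwise
# matrix; CI = (lambda_max - n) / (n - 1); CR = CI / RI (Saaty's random
# index, 0.90 for n=4).
A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]], dtype=float)
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.90  # RI for n=4
print(weights.round(3), f"CR={CR:.3f}")  # CR < 0.1 => acceptably consistent
```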
Discovering social learning ecosystems during clinical clerkship from United States medical students' feedback encounters: a content analysis.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-02-28 DOI: 10.3352/jeehp.2024.21.5
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos
{"title":"Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis.","authors":"Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos","doi":"10.3352/jeehp.2024.21.5","DOIUrl":"10.3352/jeehp.2024.21.5","url":null,"abstract":"<p><strong>Purpose: </strong>We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.</p><p><strong>Methods: </strong>This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.</p><p><strong>Results: </strong>Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.</p><p><strong>Conclusion: </strong>Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948917/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139984162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-03-15 DOI: 10.3352/jeehp.2024.21.6
Xiaojun Xu, Yixiao Chen, Jing Miao
{"title":"Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review","authors":"Xiaojun Xu, Yixiao Chen, Jing Miao","doi":"10.3352/jeehp.2024.21.6","DOIUrl":"10.3352/jeehp.2024.21.6","url":null,"abstract":"<p><strong>Background: </strong>ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.</p><p><strong>Methods: </strong>A scoping review was conducted for English articles discussing ChatGPT in the context of medical education published after 2022. A literature search was performed using PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.</p><p><strong>Results: </strong>ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.</p><p><strong>Conclusion: </strong>ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11035906/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140132845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Events related to medication errors and related factors involving nurses' behavior to reduce medication errors in Japan: a Bayesian network modeling-based factor analysis and scenario analysis.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-06-11 DOI: 10.3352/jeehp.2024.21.12
Naotaka Sugimura, Katsuhiko Ogasawara
{"title":"Events related to medication errors and related factors involving nurses’ behavior to reduce medication errors in Japan: a Bayesian network modeling-based factor analysis and scenario analysis.","authors":"Naotaka Sugimura, Katsuhiko Ogasawara","doi":"10.3352/jeehp.2024.21.12","DOIUrl":"10.3352/jeehp.2024.21.12","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to identify the relationships between medication errors and the factors affecting nurses’ knowledge and behavior in Japan using Bayesian network modeling. It also aimed to identify important factors through scenario analysis with consideration of nursing students’ and nurses’ education regarding patient safety and medications.</p><p><strong>Methods: </strong>We used mixed methods. First, error events related to medications and related factors were qualitatively extracted from 119 actual incident reports in 2022 from the database of the Japan Council for Quality Health Care. These events and factors were then quantitatively evaluated in a flow model using Bayesian network, and a scenario analysis was conducted to estimate the posterior probabilities of events when the prior probabilities of some factors were 0%.</p><p><strong>Results: </strong>There were 10 types of events related to medication errors. A 5-layer flow model was created using Bayesian network analysis. The scenario analysis revealed that “failure to confirm the 5 rights,” “unfamiliarity with operations of medications,” “insufficient knowledge of medications,” and “assumptions and forgetfulness” were factors that were significantly associated with the occurrence of medical errors.</p><p><strong>Conclusion: </strong>This study provided an estimate of the effects of mitigating nurses’ behavioral factors that trigger medication errors. The flow model itself can also be used as an educational tool to reflect on behavior when incidents occur. It is expected that patient safety education will be recognized as a major element of nursing education worldwide and that an integrated curriculum will be developed.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11223988/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141301850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
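An illustrative Bayesian-network scenario analysis in the spirit of this abstract, written with the pgmpy library; the study does not name its software, and the single factor, event, and all probabilities below are invented. Setting a factor's prior to 0% is modeled here by conditioning on the factor being absent and re-querying the event's posterior.

```python
# Toy Bayesian network: one behavioral factor -> one medication-error event.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("SkipFiveRights", "WrongDoseEvent")])
cpd_factor = TabularCPD("SkipFiveRights", 2, [[0.8], [0.2]])  # P(absent), P(present)
cpd_event = TabularCPD(
    "WrongDoseEvent", 2,
    [[0.99, 0.70],   # P(no event | factor absent, present)
     [0.01, 0.30]],  # P(event    | factor absent, present)
    evidence=["SkipFiveRights"], evidence_card=[2],
)
model.add_cpds(cpd_factor, cpd_event)

infer = VariableElimination(model)
baseline = infer.query(["WrongDoseEvent"])                              # prior scenario
scenario = infer.query(["WrongDoseEvent"], evidence={"SkipFiveRights": 0})  # factor at 0%
print(baseline)
print(scenario)
```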
Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-02-23 DOI: 10.3352/jeehp.2024.21.3
Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-Pyng Chung
{"title":"Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study.","authors":"Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-Pyng Chung","doi":"10.3352/jeehp.2024.21.3","DOIUrl":"10.3352/jeehp.2024.21.3","url":null,"abstract":"<p><strong>Purpose: </strong>Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into six elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure.</p><p><strong>Methods: </strong>Messick's unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from three pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018-2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Validity evidence sources include: rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables).</p><p><strong>Results: </strong>Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone, r = 0.34, P = .019.</p><p><strong>Conclusion: </strong>Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948941/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139933504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
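A sketch of two measurements of the kind this abstract reports: agreement between two trained raters (here via Cohen's kappa, one common choice; the abstract does not specify its reliability statistic) and the correlation between instrument scores and Milestone ratings. All data are fabricated for illustration.

```python
# Inter-rater agreement and score-to-Milestone correlation (toy data).
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

rater_a = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3]  # hypothetical item ratings, rater A
rater_b = [3, 4, 3, 5, 4, 3, 4, 2, 4, 3]  # hypothetical item ratings, rater B
print(f"kappa={cohen_kappa_score(rater_a, rater_b):.2f}")

relate_scores = [70, 82, 65, 90, 77, 73, 85, 60, 88, 75]          # hypothetical totals
milestones = [3.0, 3.5, 2.5, 4.0, 3.0, 3.5, 3.5, 2.5, 4.0, 3.0]   # hypothetical ratings
r, p = pearsonr(relate_scores, milestones)
print(f"r={r:.2f}, P={p:.3f}")
```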
ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-02-28 DOI: 10.3352/jeehp.2024.21.4
Hiroyasu Sato, Katsuhiko Ogasawara
{"title":"ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study.","authors":"Hiroyasu Sato, Katsuhiko Ogasawara","doi":"10.3352/jeehp.2024.21.4","DOIUrl":"10.3352/jeehp.2024.21.4","url":null,"abstract":"<p><strong>Purpose: </strong>The objective of this study was to assess the performance of ChatGPT (GPT-4) on all items, including those with diagrams, in the Japanese National License Examination for Pharmacists (JNLEP) and compare it with the previous GPT-3.5 model’s performance.</p><p><strong>Methods: </strong>The 107th JNLEP, conducted in 2022, with 344 items input into the GPT-4 model, was targeted for this study. Separately, 284 items, excluding those with diagrams, were entered into the GPT-3.5 model. The answers were categorized and analyzed to determine accuracy rates based on categories, subjects, and presence or absence of diagrams. The accuracy rates were compared to the main passing criteria (overall accuracy rate ≥62.9%).</p><p><strong>Results: </strong>The overall accuracy rate for all items in the 107th JNLEP in GPT-4 was 72.5%, successfully meeting all the passing criteria. For the set of items without diagrams, the accuracy rate was 80.0%, which was significantly higher than that of the GPT-3.5 model (43.5%). The GPT-4 model demonstrated an accuracy rate of 36.1% for items that included diagrams.</p><p><strong>Conclusion: </strong>Advancements that allow GPT-4 to process images have made it possible for LLMs to answer all items in medical-related license examinations. This study’s findings confirm that ChatGPT (GPT-4) possesses sufficient knowledge to meet the passing criteria.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948916/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139984149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
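A hedged sketch of checking the reported overall accuracy against the exam's passing criterion (≥62.9%) with a one-sided binomial test. The abstract reports the rates but not this particular test; the correct-item count below is back-calculated from the reported total and percentage.

```python
# One-sided binomial test: is 72.5% accuracy above the 62.9% passing bar?
from scipy.stats import binomtest

n_items = 344
n_correct = round(0.725 * n_items)  # 72.5% reported overall accuracy

result = binomtest(n_correct, n_items, p=0.629, alternative="greater")
print(f"P={result.pvalue:.4f}")  # small P: accuracy exceeds the 62.9% criterion
```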
Presidential address 2024: the expansion of computer-based testing to numerous health professions licensing examinations in Korea, preparation of computer-based practical tests, and adoption of the medical metaverse.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date : 2024-01-01 Epub Date: 2024-02-20 DOI: 10.3352/jeehp.2024.21.2
Hyunjoo Pai
{"title":"Presidential address 2024: the expansion of computer-based testing to numerous health professions licensing examinations in Korea, preparation of computer-based practical tests, and adoption of the medical metaverse.","authors":"Hyunjoo Pai","doi":"10.3352/jeehp.2024.21.2","DOIUrl":"10.3352/jeehp.2024.21.2","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948918/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139906639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0