{"title":"Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study.","authors":"Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-Pyng Chung","doi":"10.3352/jeehp.2024.21.3","DOIUrl":"10.3352/jeehp.2024.21.3","url":null,"abstract":"<p><strong>Purpose: </strong>Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into six elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure.</p><p><strong>Methods: </strong>Messick's unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from three pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018-2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Validity evidence sources include: rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables).</p><p><strong>Results: </strong>Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone, r = 0.34, P = .019.</p><p><strong>Conclusion: </strong>Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"3"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948941/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139933504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study.","authors":"Hiroyasu Sato, Katsuhiko Ogasawara","doi":"10.3352/jeehp.2024.21.4","DOIUrl":"10.3352/jeehp.2024.21.4","url":null,"abstract":"<p><strong>Purpose: </strong>The objective of this study was to assess the performance of ChatGPT (GPT-4) on all items, including those with diagrams, in the Japanese National License Examination for Pharmacists (JNLEP) and compare it with the previous GPT-3.5 model’s performance.</p><p><strong>Methods: </strong>The 107th JNLEP, conducted in 2022, with 344 items input into the GPT-4 model, was targeted for this study. Separately, 284 items, excluding those with diagrams, were entered into the GPT-3.5 model. The answers were categorized and analyzed to determine accuracy rates based on categories, subjects, and presence or absence of diagrams. The accuracy rates were compared to the main passing criteria (overall accuracy rate ≥62.9%).</p><p><strong>Results: </strong>The overall accuracy rate for all items in the 107th JNLEP in GPT-4 was 72.5%, successfully meeting all the passing criteria. For the set of items without diagrams, the accuracy rate was 80.0%, which was significantly higher than that of the GPT-3.5 model (43.5%). The GPT-4 model demonstrated an accuracy rate of 36.1% for items that included diagrams.</p><p><strong>Conclusion: </strong>Advancements that allow GPT-4 to process images have made it possible for LLMs to answer all items in medical-related license examinations. This study’s findings confirm that ChatGPT (GPT-4) possesses sufficient knowledge to meet the passing criteria.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"4"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948916/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139984149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Events related to medication errors and related factors involving nurses’ behavior to reduce medication errors in Japan: a Bayesian network modeling-based factor analysis and scenario analysis.","authors":"Naotaka Sugimura, Katsuhiko Ogasawara","doi":"10.3352/jeehp.2024.21.12","DOIUrl":"10.3352/jeehp.2024.21.12","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to identify the relationships between medication errors and the factors affecting nurses’ knowledge and behavior in Japan using Bayesian network modeling. It also aimed to identify important factors through scenario analysis with consideration of nursing students’ and nurses’ education regarding patient safety and medications.</p><p><strong>Methods: </strong>We used mixed methods. First, error events related to medications and related factors were qualitatively extracted from 119 actual incident reports in 2022 from the database of the Japan Council for Quality Health Care. These events and factors were then quantitatively evaluated in a flow model using Bayesian network, and a scenario analysis was conducted to estimate the posterior probabilities of events when the prior probabilities of some factors were 0%.</p><p><strong>Results: </strong>There were 10 types of events related to medication errors. A 5-layer flow model was created using Bayesian network analysis. The scenario analysis revealed that “failure to confirm the 5 rights,” “unfamiliarity with operations of medications,” “insufficient knowledge of medications,” and “assumptions and forgetfulness” were factors that were significantly associated with the occurrence of medical errors.</p><p><strong>Conclusion: </strong>This study provided an estimate of the effects of mitigating nurses’ behavioral factors that trigger medication errors. The flow model itself can also be used as an educational tool to reflect on behavior when incidents occur. It is expected that patient safety education will be recognized as a major element of nursing education worldwide and that an integrated curriculum will be developed.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"12"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11223988/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141301850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Erratum: Impact of a change from A-F grading to honors/pass/fail grading on academic performance at Yonsei University College of Medicine in Korea: a cross-sectional serial mediation analysis.","authors":"","doi":"10.3352/jeehp.2024.21.35","DOIUrl":"10.3352/jeehp.2024.21.35","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"35"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637594/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142717569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Increased accessibility of computer-based testing for residency application to a hospital in Brazil with item characteristics comparable to paper-based testing: a psychometric study","authors":"Marcos Carvalho Borges, Luciane Loures Santos, Paulo Henrique Manso, Elaine Christine Dantas Moisés, Pedro Soler Coltro, Priscilla Costa Fonseca, Paulo Roberto Alves Gentil, Rodrigo de Carvalho Santana, Lucas Faria Rodrigues, Benedito Carlos Maciel, Hilton Marcos Alves Ricz","doi":"10.3352/jeehp.2024.21.32","DOIUrl":"10.3352/jeehp.2024.21.32","url":null,"abstract":"<p><strong>Purpose: </strong>With the coronavirus disease 2019 pandemic, online high-stakes exams have become a viable alternative. This study evaluated the feasibility of computer-based testing (CBT) for medical residency applications in Brazil and its impacts on item quality and applicants’ access compared to paper-based testing.</p><p><strong>Methods: </strong>In 2020, an online CBT was conducted in a Ribeirao Preto Clinical Hospital in Brazil. In total, 120 multiple-choice question items were constructed. Two years later, the exam was performed as paper-based testing. Item construction processes were similar for both exams. Difficulty and discrimination indexes, point-biserial coefficient, difficulty, discrimination, guessing parameters, and Cronbach’s α coefficient were measured based on the item response and classical test theories. Internet stability for applicants was monitored.</p><p><strong>Results: </strong>In 2020, 4,846 individuals (57.1% female, mean age of 26.64±3.37 years) applied to the residency program, versus 2,196 individuals (55.2% female, mean age of 26.47±3.20 years) in 2022. For CBT, there was an increase of 2,650 applicants (120.7%), albeit with significant differences in demographic characteristics. There was a significant increase in applicants from more distant and lower-income Brazilian regions, such as the North (5.6% vs. 2.7%) and Northeast (16.9% vs. 9.0%). No significant differences were found in difficulty and discrimination indexes, point-biserial coefficients, and Cronbach’s α coefficients between the 2 exams.</p><p><strong>Conclusion: </strong>Online CBT with multiple-choice questions was a viable format for a residency application exam, improving accessibility without compromising exam integrity and quality.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"32"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637595/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Journal of Educational Evaluation for Health Professions received the top-ranking Journal Impact Factor—9.3—in the category of Education, Scientific Disciplines in the 2023 Journal Citation Ranking by Clarivate","authors":"Sun Huh","doi":"10.3352/jeehp.2024.21.16","DOIUrl":"10.3352/jeehp.2024.21.16","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"16"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11255473/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141444115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feasibility of utilizing functional near-infrared spectroscopy to measure the cognitive load of paramedicine students undertaking high-acuity clinical simulations in Australia: a case study.","authors":"Jason Betson, Erich Christian Fein, David Long, Peter Horrocks","doi":"10.3352/jeehp.2024.21.38","DOIUrl":"10.3352/jeehp.2024.21.38","url":null,"abstract":"<p><strong>Purpose: </strong>Paramedicine education often uses high-fidelity simulations that mimic real-life emergencies. These experiences can trigger stress responses characterized by physiological changes, including alterations in cerebral blood flow and oxygenation. Functional near-infrared spectroscopy (fNIRS) is emerging as a promising tool for assessing cognitive stress in educational settings.</p><p><strong>Methods: </strong>Eight final-year undergraduate paramedicine students completed 2 high-acuity scenarios 7 days apart. Real-time continuous recording of cerebral blood flow and oxygenation levels in the prefrontal cortex was undertaken via fNIRS as a means of assessing neural activity during stressful scenarios.</p><p><strong>Results: </strong>fNIRS accurately determined periods of increased cerebral oxygenation when participants were undertaking highly technical skills or making significant clinical decisions.</p><p><strong>Conclusion: </strong>fNIRS holds potential for objectively measuring the cognitive load in undergraduate paramedicine students. By providing real-time insights into neurophysiological responses, fNIRS may enhance training outcomes in paramedicine programs and improve student well-being (Australian New Zealand Clinical Trials Registry: ACTRN12623001214628).</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"38"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11717433/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142802675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"History of the medical licensure system in Korea from the late 1800s to 1992.","authors":"Sang-Ik Hwang","doi":"10.3352/jeehp.2024.21.36","DOIUrl":"10.3352/jeehp.2024.21.36","url":null,"abstract":"<p><p>The introduction of modern Western medicine in the late 19th century, notably through vaccination initiatives, marked the beginning of governmental involvement in medical licensure, with the licensing of doctors who performed vaccinations. The establishment of the national medical school \"Euihakkyo\" in 1899 further formalized medical education and licensure, granting graduates the privilege to practice medicine without additional examinations. The enactment of the Regulations on Doctors in 1900 by the Joseon government aimed to define doctor qualifications, including modern and traditional practitioners, comprehensively. However, resistance from the traditional medical community hindered its full implementation. During the Japanese colonial occupation of the Korean Peninsula from 1910 to 1945, the medical licensure system was controlled by colonial authorities, leading to the marginalization of traditional Korean medicine and the imposition of imperial hierarchical structures. Following liberation in 1945 from Japanese colonial rule, the Korean government undertook significant reforms, culminating in the National Medical Law, which was enacted in 1951. This law redefined doctor qualifications and reinstated the status of traditional Korean medicine. The introduction of national examinations for physicians increased state involvement in ensuring medical competence. The privatization of the Korean Medical Licensing Examination led to the establishment of the Korea Health Personnel Licensing Examination Institute in 1992, which assumed responsibility for administering licensing examinations for all healthcare workers. This shift reflected a move towards specialized management of professional standards. The evolution of the medical licensure system in Korea illustrates a dynamic process shaped by the historical context, balancing the protection of public health with the rights of medical practitioners.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"36"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11894032/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142956539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The 6 degrees of curriculum integration in medical education in the United States","authors":"Julie Youm, Jennifer Christner, Kevin Hittle, Paul Ko, Cinda Stone, Angela D Blood, Samara Ginzburg","doi":"10.3352/jeehp.2024.21.15","DOIUrl":"10.3352/jeehp.2024.21.15","url":null,"abstract":"<p><p>Despite explicit expectations and accreditation requirements for integrated curriculum, there needs to be more clarity around an accepted common definition, best practices for implementation, and criteria for successful curriculum integration. To address the lack of consensus surrounding integration, we reviewed the literature and herein propose a definition for curriculum integration for the medical education audience. We further believe that medical education is ready to move beyond “horizontal” (1-dimensional) and “vertical” (2-dimensional) integration and propose a model of “6 degrees of curriculum integration” to expand the 2-dimensional concept for future designs of medical education programs and best prepare learners to meet the needs of patients. These 6 degrees include: interdisciplinary, timing and sequencing, instruction and assessment, incorporation of basic and clinical sciences, knowledge and skills-based competency progression, and graduated responsibilities in patient care. We encourage medical educators to look beyond 2-dimensional integration to this holistic and interconnected representation of curriculum integration.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"15"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11261157/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141318490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The performance of ChatGPT-4.0o in medical imaging evaluation: a cross-sectional study","authors":"Elio Stefan Arruzza, Carla Marie Evangelista, Minh Chau","doi":"10.3352/jeehp.2024.21.29","DOIUrl":"10.3352/jeehp.2024.21.29","url":null,"abstract":"<p><p>This study investigated the performance of ChatGPT-4.0o in evaluating the quality of positioning in radiographic images. Thirty radiographs depicting a variety of knee, elbow, ankle, hand, pelvis, and shoulder projections were produced using anthropomorphic phantoms and uploaded to ChatGPT-4.0o. The model was prompted to provide a solution to identify any positioning errors with justification and offer improvements. A panel of radiographers assessed the solutions for radiographic quality based on established positioning criteria, with a grading scale of 1–5. In only 20% of projections, ChatGPT-4.0o correctly recognized all errors with justifications and offered correct suggestions for improvement. The most commonly occurring score was 3 (9 cases, 30%), wherein the model recognized at least 1 specific error and provided a correct improvement. The mean score was 2.9. Overall, low accuracy was demonstrated, with most projections receiving only partially correct solutions. The findings reinforce the importance of robust radiography education and clinical experience.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"29"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11586623/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}