Journal of Educational Evaluation for Health Professions: Latest Articles

The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-09-23 DOI: 10.3352/jeehp.2024.21.24
Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen
{"title":"The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study.","authors":"Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen","doi":"10.3352/jeehp.2024.21.24","DOIUrl":"10.3352/jeehp.2024.21.24","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the effect of simulation-based training on nursing students’ problem-solving skills, critical thinking skills, and self-efficacy.</p><p><strong>Methods: </strong>A single-group pretest and posttest study was conducted among 173 second-year nursing students at a public university in Vietnam from May 2021 to July 2022. Each student participated in the adult nursing preclinical practice course, which utilized a moderate-fidelity simulation teaching approach. Instruments including the Personal Problem-Solving Inventory Scale, Critical Thinking Skills Questionnaire, and General Self-Efficacy Questionnaire were employed to measure participants’ problem-solving skills, critical thinking skills, and self-efficacy. Data were analyzed using descriptive statistics and the paired-sample t-test with the significance level set at P<0.05.</p><p><strong>Results: </strong>The mean score of the Personal Problem-Solving Inventory posttest (127.24±12.11) was lower than the pretest score (131.42±16.95), suggesting an improvement in the problem-solving skills of the participants (t172 =2.55, P=0.011). There was no statistically significant difference in critical thinking skills between the pretest and posttest (P=0.854). Self-efficacy among nursing students showed a substantial increase from the pretest (27.91±5.26) to the posttest (28.71±3.81), with t172 =-2.26 and P=0.025.</p><p><strong>Conclusion: </strong>The results suggest that simulation-based training can improve problem-solving skills and increase self-efficacy among nursing students. Therefore, the integration of simulation-based training in nursing education is recommended.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"24"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11480641/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-07-08 DOI: 10.3352/jeehp.2024.21.17
Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman
{"title":"Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study.","authors":"Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman","doi":"10.3352/jeehp.2024.21.17","DOIUrl":"https://doi.org/10.3352/jeehp.2024.21.17","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT) with respect to standardized urology multiple-choice items in the United States.</p><p><strong>Methods: </strong>In total, 700 multiple-choice urology board exam-style items were submitted to GPT-3.5 and GPT-4, and responses were recorded. Items were categorized based on topic and question complexity (recall, interpretation, and problem-solving). The accuracy of GPT-3.5 and GPT-4 was compared across item types in February 2024.</p><p><strong>Results: </strong>GPT-4 answered 44.4% of items correctly compared to 30.9% for GPT-3.5 (P>0.0001). GPT-4 (vs. GPT-3.5) had higher accuracy with urologic oncology (43.8% vs. 33.9%, P=0.03), sexual medicine (44.3% vs. 27.8%, P=0.046), and pediatric urology (47.1% vs. 27.1%, P=0.012) items. Endourology (38.0% vs. 25.7%, P=0.15), reconstruction and trauma (29.0% vs. 21.0%, P=0.41), and neurourology (49.0% vs. 33.3%, P=0.11) items did not show significant differences in performance across versions. GPT-4 also outperformed GPT-3.5 with respect to recall (45.9% vs. 27.4%, P<0.00001), interpretation (45.6% vs. 31.5%, P=0.0005), and problem-solving (41.8% vs. 34.5%, P=0.56) type items. This difference was not significant for the higher-complexity items.</p><p><strong>Conclusion: </strong>s: ChatGPT performs relatively poorly on standardized multiple-choice urology board exam-style items, with GPT-4 outperforming GPT-3.5. The accuracy was below the proposed minimum passing standards for the American Board of Urology's Continuing Urologic Certification knowledge reinforcement activity (60%). As artificial intelligence progresses in complexity, ChatGPT may become more capable and accurate with respect to board examination items. For now, its responses should be scrutinized.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"17"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141560038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Discovering social learning ecosystems during clinical clerkship from United States medical students' feedback encounters: a content analysis.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-02-28 DOI: 10.3352/jeehp.2024.21.5
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos
{"title":"Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis.","authors":"Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos","doi":"10.3352/jeehp.2024.21.5","DOIUrl":"10.3352/jeehp.2024.21.5","url":null,"abstract":"<p><strong>Purpose: </strong>We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.</p><p><strong>Methods: </strong>This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.</p><p><strong>Results: </strong>Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.</p><p><strong>Conclusion: </strong>Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"5"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948917/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139984162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of virtual and in-person simulations for sepsis and trauma resuscitation training in Singapore: a randomized controlled trial.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-11-18 DOI: 10.3352/jeehp.2024.21.33
Matthew Jian Wen Low, Gene Wai Han Chan, Zisheng Li, Yiwen Koh, Chi Loong Jen, Zi Yao Lee, Lenard Tai Win Cheng
{"title":"Comparison of virtual and in-person simulations for sepsis and trauma resuscitation training in Singapore: a randomized controlled trial","authors":"Matthew Jian Wen Low, Gene Wai Han Chan, Zisheng Li, Yiwen Koh, Chi Loong Jen, Zi Yao Lee, Lenard Tai Win Cheng","doi":"10.3352/jeehp.2024.21.33","DOIUrl":"10.3352/jeehp.2024.21.33","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to compare cognitive, non-cognitive, and overall learning outcomes for sepsis and trauma resuscitation skills in novices with virtual patient simulation (VPS) versus in-person simulation (IPS).</p><p><strong>Methods: </strong>A randomized controlled trial was conducted on junior doctors in 1 emergency department from January to December 2022, comparing 70 minutes of VPS (n=19) versus IPS (n=21) in sepsis and trauma resuscitation. Using the nominal group technique, we created skills assessment checklists and determined Bloom’s taxonomy domains for each checklist item. Two blinded raters observed participants leading 1 sepsis and 1 trauma resuscitation simulation. Satisfaction was measured using the Student Satisfaction with Learning Scale (SSLS). The SSLS and checklist scores were analyzed using the Wilcoxon rank sum test and 2-tailed t-test respectively.</p><p><strong>Results: </strong>For sepsis, there was no significant difference between VPS and IPS in overall scores (2.0; 95% confidence interval [CI], -1.4 to 5.4; Cohen’s d=0.38), as well as in items that were cognitive (1.1; 95% CI, -1.5 to 3.7) and not only cognitive (0.9; 95% CI, -0.4 to 2.2). Likewise, for trauma, there was no significant difference in overall scores (-0.9; 95% CI, -4.1 to 2.3; Cohen’s d=0.19), as well as in items that were cognitive (-0.3; 95% CI, -2.8 to 2.1) and not only cognitive (-0.6; 95% CI, -2.4 to 1.3). The median SSLS scores were lower with VPS than with IPS (-3.0; 95% CI, -1.0 to -5.0).</p><p><strong>Conclusion: </strong>For novices, there were no major differences in overall and non-cognitive learning outcomes for sepsis and trauma resuscitation between VPS and IPS. Learners were more satisfied with IPS than with VPS (clinicaltrials.gov identifier: NCT05201950).</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"33"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11647267/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142648693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Opportunities, challenges, and future directions of large language models, including ChatGPT, in medical education: a systematic scoping review.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-03-15 DOI: 10.3352/jeehp.2024.21.6
Xiaojun Xu, Yixiao Chen, Jing Miao
{"title":"Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review","authors":"Xiaojun Xu, Yixiao Chen, Jing Miao","doi":"10.3352/jeehp.2024.21.6","DOIUrl":"10.3352/jeehp.2024.21.6","url":null,"abstract":"<p><strong>Background: </strong>ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.</p><p><strong>Methods: </strong>A scoping review was conducted for English articles discussing ChatGPT in the context of medical education published after 2022. A literature search was performed using PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.</p><p><strong>Results: </strong>ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.</p><p><strong>Conclusion: </strong>ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"6"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11035906/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140132845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Presidential address 2024: the expansion of computer-based testing to numerous health professions licensing examinations in Korea, preparation of computer-based practical tests, and adoption of the medical metaverse.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-02-20 DOI: 10.3352/jeehp.2024.21.2
Hyunjoo Pai
{"title":"Presidential address 2024: the expansion of computer-based testing to numerous health professions licensing examinations in Korea, preparation of computer-based practical tests, and adoption of the medical metaverse.","authors":"Hyunjoo Pai","doi":"10.3352/jeehp.2024.21.2","DOIUrl":"10.3352/jeehp.2024.21.2","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"2"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948918/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139906639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-02-23 DOI: 10.3352/jeehp.2024.21.3
Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-Pyng Chung
{"title":"Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study.","authors":"Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-Pyng Chung","doi":"10.3352/jeehp.2024.21.3","DOIUrl":"10.3352/jeehp.2024.21.3","url":null,"abstract":"<p><strong>Purpose: </strong>Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into six elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure.</p><p><strong>Methods: </strong>Messick's unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from three pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018-2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Validity evidence sources include: rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables).</p><p><strong>Results: </strong>Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone, r = 0.34, P = .019.</p><p><strong>Conclusion: </strong>Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"3"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948941/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139933504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study.
IF 4.4
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-02-28 DOI: 10.3352/jeehp.2024.21.4
Hiroyasu Sato, Katsuhiko Ogasawara
{"title":"ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study.","authors":"Hiroyasu Sato, Katsuhiko Ogasawara","doi":"10.3352/jeehp.2024.21.4","DOIUrl":"10.3352/jeehp.2024.21.4","url":null,"abstract":"<p><strong>Purpose: </strong>The objective of this study was to assess the performance of ChatGPT (GPT-4) on all items, including those with diagrams, in the Japanese National License Examination for Pharmacists (JNLEP) and compare it with the previous GPT-3.5 model’s performance.</p><p><strong>Methods: </strong>The 107th JNLEP, conducted in 2022, with 344 items input into the GPT-4 model, was targeted for this study. Separately, 284 items, excluding those with diagrams, were entered into the GPT-3.5 model. The answers were categorized and analyzed to determine accuracy rates based on categories, subjects, and presence or absence of diagrams. The accuracy rates were compared to the main passing criteria (overall accuracy rate ≥62.9%).</p><p><strong>Results: </strong>The overall accuracy rate for all items in the 107th JNLEP in GPT-4 was 72.5%, successfully meeting all the passing criteria. For the set of items without diagrams, the accuracy rate was 80.0%, which was significantly higher than that of the GPT-3.5 model (43.5%). The GPT-4 model demonstrated an accuracy rate of 36.1% for items that included diagrams.</p><p><strong>Conclusion: </strong>Advancements that allow GPT-4 to process images have made it possible for LLMs to answer all items in medical-related license examinations. This study’s findings confirm that ChatGPT (GPT-4) possesses sufficient knowledge to meet the passing criteria.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"4"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948916/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139984149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Events related to medication errors and related factors involving nurses' behavior to reduce medication errors in Japan: a Bayesian network modeling-based factor analysis and scenario analysis.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-06-11 DOI: 10.3352/jeehp.2024.21.12
Naotaka Sugimura, Katsuhiko Ogasawara
{"title":"Events related to medication errors and related factors involving nurses’ behavior to reduce medication errors in Japan: a Bayesian network modeling-based factor analysis and scenario analysis.","authors":"Naotaka Sugimura, Katsuhiko Ogasawara","doi":"10.3352/jeehp.2024.21.12","DOIUrl":"10.3352/jeehp.2024.21.12","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to identify the relationships between medication errors and the factors affecting nurses’ knowledge and behavior in Japan using Bayesian network modeling. It also aimed to identify important factors through scenario analysis with consideration of nursing students’ and nurses’ education regarding patient safety and medications.</p><p><strong>Methods: </strong>We used mixed methods. First, error events related to medications and related factors were qualitatively extracted from 119 actual incident reports in 2022 from the database of the Japan Council for Quality Health Care. These events and factors were then quantitatively evaluated in a flow model using Bayesian network, and a scenario analysis was conducted to estimate the posterior probabilities of events when the prior probabilities of some factors were 0%.</p><p><strong>Results: </strong>There were 10 types of events related to medication errors. A 5-layer flow model was created using Bayesian network analysis. The scenario analysis revealed that “failure to confirm the 5 rights,” “unfamiliarity with operations of medications,” “insufficient knowledge of medications,” and “assumptions and forgetfulness” were factors that were significantly associated with the occurrence of medical errors.</p><p><strong>Conclusion: </strong>This study provided an estimate of the effects of mitigating nurses’ behavioral factors that trigger medication errors. The flow model itself can also be used as an educational tool to reflect on behavior when incidents occur. It is expected that patient safety education will be recognized as a major element of nursing education worldwide and that an integrated curriculum will be developed.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"12"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11223988/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141301850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Erratum: Impact of a change from A-F grading to honors/pass/fail grading on academic performance at Yonsei University College of Medicine in Korea: a cross-sectional serial mediation analysis.
IF 9.3
Journal of Educational Evaluation for Health Professions Pub Date: 2024-01-01 Epub Date: 2024-11-26 DOI: 10.3352/jeehp.2024.21.35
{"title":"Erratum: Impact of a change from A-F grading to honors/pass/fail grading on academic performance at Yonsei University College of Medicine in Korea: a cross-sectional serial mediation analysis.","authors":"","doi":"10.3352/jeehp.2024.21.35","DOIUrl":"10.3352/jeehp.2024.21.35","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"35"},"PeriodicalIF":9.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637594/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142717569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0