A Pilot Study Using Artificial Intelligence to Enhance Efficiency, Accuracy, and Objectivity in Grading Pharmacy Objective Structured Clinical Examinations.

Impact factor 3.8 · CAS Region 4 (Education) · JCR Q1 (Education, Scientific Disciplines)
Mariette Sourial, Jeremy C Hagler
DOI: 10.1016/j.ajpe.2025.101455 · American Journal of Pharmaceutical Education, article 101455 · Published 2025-07-02 · Journal Article
Citations: 0

Abstract

Objective: The goal of this project was to evaluate the feasibility of using artificial intelligence (AI) in grading pharmacy Objective Structured Clinical Examination (OSCE) analytical checklists in terms of accuracy, objectivity and consistency, and efficiency when compared to faculty evaluators.

Methods: Third-year pharmacy students (n = 39) enrolled at a private Christian university completed a five-station OSCE as part of the Advanced Pharmacy Practice Experience (APPE)-readiness plan. Audio recordings from two of the interactive stations were de-identified and fed into two customized language models: a speech-to-text model and a tailored transformer model trained on the analytical checklist. A validation set using the analytical checklist was completed by the study investigator. AI scoring of the analytical checklist was then retrospectively compared against the validation set and the faculty evaluators' scores.
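The two-stage pipeline described above (speech-to-text, then checklist scoring) can be sketched in miniature. Everything below is a hypothetical illustration, not the study's models: a hard-coded transcript stands in for the speech-to-text model, and simple keyword matching stands in for the tailored transformer; the checklist items and keywords are invented.

```python
# Hypothetical sketch of the two-stage OSCE grading pipeline:
# (1) transcribe the de-identified station audio, (2) score each
# analytical-checklist item from the transcript.
from dataclasses import dataclass


@dataclass
class ChecklistItem:
    label: str
    keywords: tuple  # phrases that count as covering this item (illustrative)


def transcribe(audio_path: str) -> str:
    """Stand-in for the speech-to-text model: returns a fixed transcript."""
    return ("I verified the patient's allergies and counseled on taking "
            "the medication with food.")


def score_checklist(transcript: str, items: list) -> dict:
    """Score each item 1/0 by keyword presence (stand-in for the transformer)."""
    text = transcript.lower()
    return {it.label: int(any(k in text for k in it.keywords)) for it in items}


checklist = [
    ChecklistItem("allergy check", ("allergies", "allergic")),
    ChecklistItem("counsel on administration", ("with food", "take with")),
    ChecklistItem("follow-up plan", ("follow up", "return visit")),
]

scores = score_checklist(transcribe("station_a/student_01.wav"), checklist)
```

In the study, the second stage is a trained transformer rather than keyword rules; the sketch only shows the shape of the data flow from audio to per-item checklist scores.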

Results: The customized AI model demonstrated greater than 95% and 93% accuracy for stations A and B, respectively. There was statistically significant inter-rater variability among the faculty evaluators: one evaluator scored on average four points higher at one station, and another scored on average one point higher at the second station. For efficiency, the AI model graded all 39 students in under five minutes, saving faculty grading time and enabling timely feedback to help improve future student performance.
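The per-rater deviations reported above amount to a mean signed difference between each evaluator's checklist totals and the validation-set totals. A small illustration with made-up numbers (the student totals below are invented; only the roughly +4 and +1 point deviations echo the reported pattern):

```python
# Illustrative arithmetic for per-rater deviation from a validation reference.
# All score values are hypothetical, not the study's data.
from statistics import mean


def mean_deviation(rater, validation):
    """Average signed difference between a rater's totals and the reference."""
    return mean(r - v for r, v in zip(rater, validation))


validation = [20, 18, 22, 19]   # hypothetical reference totals per student
rater_a    = [24, 22, 26, 23]   # consistently ~4 points above the reference
rater_b    = [21, 19, 23, 20]   # consistently ~1 point above

dev_a = mean_deviation(rater_a, validation)
dev_b = mean_deviation(rater_b, validation)
```

A signed mean (rather than absolute error) preserves the direction of the bias, which is what distinguishes one rater grading systematically high from random disagreement.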

Conclusion: The customized AI model outperformed faculty scoring on the pharmacy OSCE analytical checklists of two stations in accuracy, objectivity and consistency, and efficiency.

Source journal: American Journal of Pharmaceutical Education
CiteScore: 4.30
Self-citation rate: 15.20%
Articles per year: 114
Journal introduction: The Journal accepts unsolicited manuscripts that have not been published and are not under consideration for publication elsewhere. The Journal only considers material related to pharmaceutical education for publication. Authors must prepare manuscripts to conform to the Journal style (Author Instructions). All manuscripts are subject to peer review and approval by the editor prior to acceptance for publication. Reviewers are assigned by the editor with the advice of the editorial board as needed. Manuscripts are submitted and processed online (Submit a Manuscript) using Editorial Manager, an online manuscript tracking system that facilitates communication between the editorial office, editor, associate editors, reviewers, and authors. After a manuscript is accepted, it is scheduled for publication in an upcoming issue of the Journal. All manuscripts are formatted, copyedited, and returned to the author for review and approval of the changes. Approximately two weeks prior to publication, the author receives an electronic proof of the article for final review and approval. Authors are not assessed page charges for publication.