Authors: Mariette Sourial, Jeremy C Hagler
Journal: American Journal of Pharmaceutical Education (JCR Q1, Education, Scientific Disciplines; Impact Factor 3.8)
DOI: 10.1016/j.ajpe.2025.101455
Published: 2025-07-02 (Journal Article, article no. 101455)
A Pilot Study Using Artificial Intelligence to Enhance Efficiency, Accuracy, and Objectivity in Grading Pharmacy Objective Structured Clinical Examinations.
Objective: The goal of this project was to evaluate the feasibility of using artificial intelligence (AI) in grading pharmacy Objective Structured Clinical Examination (OSCE) analytical checklists in terms of accuracy, objectivity and consistency, and efficiency when compared to faculty evaluators.
Methods: Third-year pharmacy students (n=39) enrolled at a private Christian university completed a five-station OSCE as part of the Advanced Pharmacy Practice Experience (APPE) readiness plan. Audio recordings from two of the interactive stations were de-identified and fed into two customized models: a speech-to-text model and a transformer model trained on the analytical checklist. A validation set based on the analytical checklist was completed by the study investigator. AI scores on the analytical checklist were retrospectively compared against the validation set and the faculty evaluators' scores.
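The two-stage pipeline described above (transcribe a station recording, then score the transcript against the analytical checklist) can be sketched as follows. This is a minimal illustration only: the function names, the canned transcript, and the checklist items are hypothetical and do not come from the study, and the keyword-matching scorer stands in for the tailored transformer model.

```python
# Hypothetical sketch of the two-stage OSCE grading pipeline.
# Names, checklist items, and the transcript are illustrative only.

def transcribe(audio_path: str) -> str:
    """Stand-in for the speech-to-text stage (an ASR model in the study)."""
    # The study fed de-identified station recordings into a speech-to-text
    # model; here we return a canned transcript for demonstration.
    return ("the patient should take lisinopril once daily "
            "and monitor blood pressure at home")

# Hypothetical analytical checklist: item -> acceptable key phrases.
CHECKLIST = {
    "states drug name": ["lisinopril"],
    "states dosing frequency": ["once daily", "twice daily"],
    "recommends monitoring": ["monitor blood pressure"],
}

def score_transcript(transcript: str, checklist: dict) -> dict:
    """Stand-in for the tailored model: 1 point per checklist item met."""
    return {item: int(any(phrase in transcript for phrase in phrases))
            for item, phrases in checklist.items()}

scores = score_transcript(transcribe("station_a_student_01.wav"), CHECKLIST)
total = sum(scores.values())  # checklist points earned by this student
```

In practice the scoring stage would be a trained model rather than phrase matching, but the data flow (audio in, per-item checklist scores out) is the same.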
Results: The customized AI model demonstrated greater than 95% and 93% accuracy for stations A and B, respectively. Statistically significant inter-rater variability was observed among the faculty evaluators: one evaluator scored, on average, four points higher in one station, and another scored, on average, one point higher in the second station. For efficiency, the AI model graded all 39 students in less than five minutes, saving faculty grading time and enabling timely feedback to help improve future student performance.
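The two comparisons reported above, item-level agreement between the AI and the validation set, and the mean point difference between a faculty evaluator and the validation scores, can be computed as in the following sketch. All numbers here are made up for demonstration; the study's actual totals and maximum points are not reproduced.

```python
# Illustrative calculation of the two reported comparisons.
# Station totals are hypothetical, not the study's data.

validation = [18, 20, 17, 19]   # investigator's validation totals
ai_scores  = [18, 19, 17, 19]   # AI model totals for the same students
faculty    = [22, 24, 21, 23]   # one faculty evaluator's totals

def accuracy(pred, truth, max_points=20):
    """Fraction of available checklist points on which pred matches truth."""
    matched = sum(max_points - abs(p - t) for p, t in zip(pred, truth))
    return matched / (max_points * len(truth))

def mean_difference(rater, truth):
    """Average points by which a rater scores above (+) or below (-) truth."""
    return sum(r - t for r, t in zip(rater, truth)) / len(truth)

ai_accuracy = accuracy(ai_scores, validation)       # e.g., >95% in station A
rater_bias = mean_difference(faculty, validation)   # e.g., ~4 points higher
```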
Conclusion: The customized AI model outperformed faculty scoring on the pharmacy OSCE analytical checklists of two stations in accuracy, objectivity and consistency, and efficiency.
Journal Description:
The Journal accepts unsolicited manuscripts that have not been published and are not under consideration for publication elsewhere. The Journal only considers material related to pharmaceutical education for publication. Authors must prepare manuscripts to conform to the Journal style (Author Instructions). All manuscripts are subject to peer review and approval by the editor prior to acceptance for publication. Reviewers are assigned by the editor with the advice of the editorial board as needed. Manuscripts are submitted and processed online (Submit a Manuscript) using Editorial Manager, an online manuscript tracking system that facilitates communication between the editorial office, editor, associate editors, reviewers, and authors.
After a manuscript is accepted, it is scheduled for publication in an upcoming issue of the Journal. All manuscripts are formatted and copyedited, and returned to the author for review and approval of the changes. Approximately 2 weeks prior to publication, the author receives an electronic proof of the article for final review and approval. Authors are not assessed page charges for publication.