A comparison of drug information question responses by a drug information center and by ChatGPT.

IF 2.1 | CAS Medicine Tier 4 | JCR Q3 PHARMACOLOGY & PHARMACY
Samantha Triplett, Genevieve Lynn Ness Engle, Erin M Behnen
Citations: 0

Abstract


Disclaimer: In an effort to expedite the publication of articles, AJHP is posting manuscripts online as soon as possible after acceptance. Accepted manuscripts have been peer-reviewed and copyedited, but are posted online before technical formatting and author proofing. These manuscripts are not the final version of record and will be replaced with the final article (formatted per AJHP style and proofed by the authors) at a later time.

Purpose: A study was conducted to assess the accuracy and ability of Chat Generative Pre-trained Transformer (ChatGPT) to systematically respond to drug information inquiries relative to responses of a drug information center (DIC).

Methods: Ten drug information questions answered by the DIC in 2022 or 2023 were selected for analysis. Three pharmacists created new ChatGPT accounts and submitted each question to ChatGPT at the same time. Each question was submitted twice to identify consistency in responses. Two days later, the same process was conducted by a fourth pharmacist. Phase 1 of data analysis consisted of a drug information pharmacist assessing all 84 ChatGPT responses for accuracy relative to the DIC responses. In phase 2, 10 ChatGPT responses were selected to be assessed by 3 blinded reviewers. Reviewers utilized an 8-question predetermined rubric to evaluate the ChatGPT and DIC responses.
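The submission protocol described above was carried out manually through newly created ChatGPT web accounts. Purely as an illustration of how the duplicate-submission step could be scripted, the minimal Python sketch below submits each question twice to the OpenAI API and logs the paired responses for later review; the model name, example question, and output file are assumptions for illustration and are not details reported in the study.

```python
# Illustrative sketch only: the study used the ChatGPT web interface, not the API.
# Model name, example question, and output path are hypothetical.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [
    "What is the maximum daily dose of ibuprofen for an adult?",  # hypothetical example
    # ...the 10 drug information questions answered by the DIC would go here
]

rows = []
for q in questions:
    for attempt in (1, 2):  # each question is submitted twice to check consistency
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed; the paper does not specify a model version
            messages=[{"role": "user", "content": q}],
        )
        rows.append({"question": q, "attempt": attempt,
                     "answer": resp.choices[0].message.content})

# Save the paired responses for accuracy assessment and blinded rubric review
with open("chatgpt_responses.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["question", "attempt", "answer"])
    writer.writeheader()
    writer.writerows(rows)
```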

Results: When the ChatGPT responses (n = 84) were compared to the DIC responses, ChatGPT had an overall accuracy rate of 50%. Accuracy varied across the different question types. With regard to the overall blinded score, ChatGPT responses scored higher than the DIC responses according to the rubric (overall scores of 67.5% and 55.0%, respectively). The DIC responses scored higher in the categories of references mentioned and references identified.
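As context for how an overall rubric percentage such as the 67.5% and 55.0% figures above could be derived, the short sketch below shows one plausible calculation: points earned across the 8 rubric items divided by points available. The point scale and item scores are hypothetical; the paper's rubric scoring scale is not reproduced here.

```python
# Illustrative arithmetic only; rubric items and point scale are assumptions.
def overall_score(item_scores, max_per_item=2):
    """Return total points earned as a percentage of points available."""
    earned = sum(item_scores)
    available = max_per_item * len(item_scores)
    return 100 * earned / available

# Hypothetical scores for one response on an 8-item rubric (0 = no, 1 = partial, 2 = yes)
chatgpt_items = [2, 2, 1, 2, 1, 1, 1, 1]
print(f"{overall_score(chatgpt_items):.1f}%")  # prints 68.8%
```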

Conclusion: Responses generated by ChatGPT have been found to be better than those created by a DIC in clarity and readability; however, the accuracy of ChatGPT responses was lacking. ChatGPT responses to drug information questions would need to be carefully reviewed for accuracy and completeness.

Source journal: American Journal of Health-System Pharmacy
CiteScore: 2.90
Self-citation rate: 18.50%
Articles published: 341
Review turnaround: 3-8 weeks
About the journal: The American Journal of Health-System Pharmacy (AJHP) is the official publication of the American Society of Health-System Pharmacists (ASHP). It publishes peer-reviewed scientific papers on contemporary drug therapy and pharmacy practice innovations in hospitals and health systems. With a circulation of more than 43,000, AJHP is the most widely recognized and respected clinical pharmacy journal in the world.