Evaluation of ChatGPT as a Tool for Answering Clinical Questions in Pharmacy Practice.
Authors: Faria Munir, Anna Gehres, David Wai, Leah Song
DOI: 10.1177/08971900241256731
Journal of Pharmacy Practice, pages 1303-1310; Epub 2024-05-22; published 2024-12-01.
Background: In the healthcare field, there has been growing interest in using artificial intelligence (AI)-powered tools to assist healthcare professionals, including pharmacists, in their daily tasks. Objectives: To provide commentary and insight into the potential of generative AI language models such as ChatGPT as a tool for answering practice-based clinical questions, and into the challenges that must be addressed before implementation in pharmacy practice settings. Methods: To assess ChatGPT, pharmacy-based questions were submitted to ChatGPT (Version 3.5; free version) and the responses were recorded. Question types included 6 drug information questions, 6 enhanced-prompt drug information questions, 5 patient case questions, 5 calculation questions, and 10 drug knowledge questions (e.g., top 200 drugs). After all responses were collected, they were assessed for appropriateness. Results: ChatGPT responses were generated for 32 questions in 5 categories and evaluated on a total of 44 possible points. Across all responses and categories, the overall score was 21 of 44 points (47.73%). ChatGPT scored higher in the pharmacy calculation (100%), drug information (83%), and top 200 drugs (80%) categories and lower in the enhanced-prompt drug information (33%) and patient case (20%) categories. Conclusion: This study suggests that ChatGPT has limited success as a tool for answering pharmacy-based questions. ChatGPT scored higher on calculation and multiple-choice questions but lower on drug information and patient case questions, generating misleading or fictional answers and citations.
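The overall score reported in the abstract can be recomputed directly. The per-category point breakdown is not given, so this minimal sketch checks only the aggregate figure (21 of 44 points); the variable names are illustrative, not from the study:

```python
# Recompute the overall percentage reported in the abstract (21 of 44 points).
# Only the aggregate is verifiable here; category point totals are not published.
earned_points = 21
possible_points = 44

overall_pct = round(100 * earned_points / possible_points, 2)
print(overall_pct)  # 47.73, matching the reported 47.73%
```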
Journal introduction:
The Journal of Pharmacy Practice offers the practicing pharmacist topical, important, and useful information to support pharmacy practice and pharmaceutical care and to expand the pharmacist's professional horizons. The journal is presented in a single-topic, scholarly review format. Guest editors, selected for their expertise in the subject area, recruit contributors from that practice or topic area.