Title: ChatGPT and Factual Knowledge Questions Regarding Clinical Pharmacy: Response to Letter to the Editor
Author: Merel van Nuland, PharmD, PhD
Journal: The Journal of Clinical Pharmacology, 64(9), p. 1186
DOI: 10.1002/jcph.2481
Publication date: 2024-06-04
URL: https://onlinelibrary.wiley.com/doi/10.1002/jcph.2481
Abstract
Dear Editor,
The discourse surrounding the article titled “Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy” warrants further examination. The study evaluated ChatGPT's efficacy in responding to factual knowledge questions concerning clinical pharmacy. Across a series of 264 questions, ChatGPT's responses were analyzed for accuracy, consistency, quality of substantiation, and reproducibility, yielding notable results: ChatGPT answered 79% of questions correctly, surpassing the 66% accuracy rate of pharmacists.
As acknowledged in the discussion section, it is important to note that this study focused solely on factual knowledge questions. The primary objective was to determine ChatGPT's performance in responding to factual knowledge questions rather than its proficiency in clinical reasoning. Consequently, the study refrained from drawing conclusions regarding ChatGPT's impact on clinical decision-making, as this aspect falls within the scope of separate research endeavors.[1]
Regarding the stated limitations that a scale of 264 questions is too small and that the questions lack variety: the number of questions aligns with that of similar studies, such as the USMLE Step 1, comprising 280 questions,[2] and the Taiwanese pharmacist licensing examination, consisting of 431 questions.[3] Additionally, we consider the span of topics covered in our questions representative of a pharmacist's factual knowledge base within clinical pharmacy.
The authors acknowledge the need for further investigation into ChatGPT's clinical applicability, for example, through longitudinal studies. Furthermore, exploring ChatGPT's capacity to provide justifications and explanations for its responses could augment its efficacy in aiding pharmacist decision-making. Continuous refinement and augmentation of ChatGPT are essential to strengthen its functionality as a tool for pharmacists in the clinic. Still, the indispensable expertise and interpretive skills of clinical pharmacists are pivotal to applying this information in practice. The factual information produced by ChatGPT holds potential as a valuable resource; however, it is imperative that its responses undergo rigorous assessment for accuracy and clinical applicability under the scrutiny of clinical pharmacists.

Sincerely,

Merel van Nuland