Mikaila T Lane, Toluwalase A Ajayi, Kyle P Edmonds, Rabia S Atayee
{"title":"Evaluating the Clinical Reasoning of Generative AI in Palliative Care: A Comparison with Five Years of Pharmacy Learners.","authors":"Mikaila T Lane, Toluwalase A Ajayi, Kyle P Edmonds, Rabia S Atayee","doi":"10.1177/10966218251376436","DOIUrl":null,"url":null,"abstract":"<p><p><b><i>Context:</i></b> Artificial intelligence (AI), particularly large language models (LLMs), offers the potential to augment clinical decision-making, including in palliative care pharmacy, where personalized treatment and assessments are important. Despite the growing interest in AI, its role in clinical reasoning within specialized fields such as palliative care remains uncertain. <b><i>Objectives:</i></b> This study examines the performance of four commercial-grade LLMs on a Script Concordance Test (SCT) designed for pharmacy students in a pain and palliative care elective, comparing AI outputs with human learners' performance at baseline. <b><i>Methods:</i></b> Pharmacy students from 2018 to 2023 completed an SCT consisting of 16 clinical questions. Four LLMs (ChatGPT 3.5, ChatGPT 4.0, Gemini, and Gemini Advanced) were tested using the same SCT, with their responses compared to student performance. <b><i>Results:</i></b> The average score for LLMs (0.43) was slightly lower than that of students (0.47), but this difference was not statistically significant (<i>p</i> = 0.55). ChatGPT 4.0 achieved the highest score (0.57). <b><i>Conclusions:</i></b> While LLMs show potential for augmenting clinical decision-making, their limitations in patient-centered care highlight the necessity of human oversight and reinforce that they cannot replace human expertise in palliative care. This study was conducted in a controlled research setting, where LLMs were prompted to answer clinical reasoning questions despite default safety restrictions. However, this does not imply that such prompts should be used in practice. Future research should explore alternative methods for assessing AI decision-making without overriding safety mechanisms and focus on refining AI to better align with complex clinical reasoning. In addition, further studies are needed to confirm AI's comparative effectiveness, given the sample size limitations.</p>","PeriodicalId":16656,"journal":{"name":"Journal of palliative medicine","volume":" ","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of palliative medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/10966218251376436","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0
Abstract
Context: Artificial intelligence (AI), particularly large language models (LLMs), offers the potential to augment clinical decision-making, including in palliative care pharmacy, where personalized treatment and assessments are important. Despite the growing interest in AI, its role in clinical reasoning within specialized fields such as palliative care remains uncertain.

Objectives: This study examines the performance of four commercial-grade LLMs on a Script Concordance Test (SCT) designed for pharmacy students in a pain and palliative care elective, comparing AI outputs with human learners' performance at baseline.

Methods: Pharmacy students from 2018 to 2023 completed an SCT consisting of 16 clinical questions. Four LLMs (ChatGPT 3.5, ChatGPT 4.0, Gemini, and Gemini Advanced) were tested using the same SCT, with their responses compared to student performance.

Results: The average score for LLMs (0.43) was slightly lower than that of students (0.47), but this difference was not statistically significant (p = 0.55). ChatGPT 4.0 achieved the highest score (0.57).

Conclusions: While LLMs show potential for augmenting clinical decision-making, their limitations in patient-centered care highlight the necessity of human oversight and reinforce that they cannot replace human expertise in palliative care. This study was conducted in a controlled research setting, where LLMs were prompted to answer clinical reasoning questions despite default safety restrictions. However, this does not imply that such prompts should be used in practice. Future research should explore alternative methods for assessing AI decision-making without overriding safety mechanisms and focus on refining AI to better align with complex clinical reasoning. In addition, further studies are needed to confirm AI's comparative effectiveness, given the sample size limitations.
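For illustration only, here is a minimal sketch of the kind of mean-score comparison the Results describe, assuming a two-sample Welch's t-test; the abstract does not specify which statistical test the authors used, and all score values below are hypothetical placeholders, not data from the study.

```python
# Hedged sketch: compare mean SCT scores between LLMs and students,
# assuming a two-sample Welch's t-test (the abstract does not name the test).
# The scores below are hypothetical placeholders, not study data.
from scipy import stats

llm_scores = [0.38, 0.57, 0.35, 0.42]      # hypothetical per-model SCT scores
student_scores = [0.44, 0.51, 0.47, 0.46]  # hypothetical per-cohort SCT scores

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(llm_scores, student_scores, equal_var=False)

print(f"LLM mean: {sum(llm_scores) / len(llm_scores):.2f}")
print(f"Student mean: {sum(student_scores) / len(student_scores):.2f}")
print(f"p = {p_value:.2f}")  # p > 0.05 would indicate no significant difference
```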
Journal Description:
Journal of Palliative Medicine is the premier peer-reviewed journal covering medical, psychosocial, policy, and legal issues in end-of-life care and relief of suffering for patients with intractable pain. The Journal presents essential information for professionals in hospice/palliative medicine, focusing on improving quality of life for patients and their families and on the latest developments in drug and non-drug treatments.
The companion biweekly eNewsletter, Briefings in Palliative Medicine, delivers the latest breaking news and information to keep clinicians and health care providers continuously updated.