Large language models management of complex medication regimens: a case-based evaluation

Amoreena Most, Aaron Chase, Steven Xu, Tanner Hedrick, Brian Murray, Kelli Keats, Susan Smith, Erin Barreto, Tianming Liu, Andrea Sikora

medRxiv - Pharmacology and Therapeutics | Published: 2024-07-08 | DOI: 10.1101/2024.07.03.24309889
Background: Large language models (LLMs) have shown capability in diagnosing complex medical cases and passing medical licensing exams, but to date, only limited evaluations have studied how LLMs interpret, analyze, and optimize complex medication regimens. The purpose of this evaluation was to test the ability of four LLMs to identify medication errors and recommend appropriate medication interventions for complex patient cases from the intensive care unit (ICU).

Methods: A series of eight patient cases was developed by critical care pharmacists, each including history of present illness, laboratory values, vital signs, and a medication regimen. Four LLMs (ChatGPT (GPT-3.5), ChatGPT (GPT-4), Claude2, and Llama2-7b) were then prompted to develop a medication regimen for each patient. The LLM-generated medication regimens were reviewed by a panel of seven critical care pharmacists to assess for the presence of medication errors and clinical relevance. For each regimen recommended by an LLM, clinicians were asked to indicate whether they would continue each medication, identify perceived medication errors among the recommended medications, identify any life-threatening medication choices, and rate overall agreement on a 5-point Likert scale.

Results: The clinician panel indicated they would continue the therapies recommended by the LLMs 55.8% to 67.9% of the time. Clinicians perceived between 1.57 and 4.29 medication errors per recommended regimen, and life-threatening recommendations were present in 15.0% to 55.3% of regimens. Overall agreement on the 5-point Likert scale ranged from 1.85 to 2.67 across the four LLMs.

Conclusions: LLMs demonstrated potential to serve as clinical decision support for the management of complex medication regimens with further domain-specific training; however, given their present capabilities, caution should be used when employing LLMs for medication management.
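To make the evaluation workflow concrete, the sketch below illustrates the general shape of the study design: each case is sent to a model, and the panel's reviews are aggregated into the summary metrics reported above. This is a minimal illustration, not the authors' code; the `query_llm` function and the rating fields are hypothetical stand-ins for the study's actual prompting and review process.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class PanelRating:
    """One pharmacist's review of a single LLM-generated regimen."""
    medications_continued: int  # medications the reviewer would keep
    medications_total: int      # medications in the recommended regimen
    perceived_errors: int       # medication errors the reviewer identified
    life_threatening: bool      # any life-threatening choice present?
    likert_agreement: int       # overall agreement, 1 (low) to 5 (high)

def query_llm(model: str, case_description: str) -> str:
    """Hypothetical stand-in for prompting one of the four LLMs
    (GPT-3.5, GPT-4, Claude2, Llama2-7b) with a patient case,
    returning its recommended medication regimen as text."""
    raise NotImplementedError("replace with a real model API call")

def summarize(ratings: list[PanelRating]) -> dict[str, float]:
    """Aggregate panel ratings into the metrics reported in the abstract."""
    return {
        # percent of recommended medications clinicians would continue
        "pct_continued": 100 * sum(r.medications_continued for r in ratings)
                             / sum(r.medications_total for r in ratings),
        # mean perceived medication errors per regimen
        "mean_errors_per_regimen": mean(r.perceived_errors for r in ratings),
        # percent of reviews flagging a life-threatening choice
        "pct_life_threatening": 100 * mean(r.life_threatening for r in ratings),
        # mean overall agreement on the 5-point Likert scale
        "mean_likert": mean(r.likert_agreement for r in ratings),
    }
```

For example, `summarize([PanelRating(5, 8, 2, False, 3), PanelRating(4, 8, 3, True, 2)])` would report 56.25% of medications continued, 2.5 perceived errors per regimen, life-threatening choices in 50% of reviews, and a mean Likert agreement of 2.5. Keeping the aggregation separate from the (hypothetical) model call mirrors the study's blinded structure, in which pharmacists reviewed regimens independently of how they were generated.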