Comparing Large Language Models for antibiotic prescribing in different clinical scenarios: which perform better?

IF 10.9 · CAS Tier 1 (Medicine) · Q1 INFECTIOUS DISEASES
Andrea De Vito, Nicholas Geremia, Davide Fiore Bavaro, Susan K Seo, Justin Laracy, Maria Mazzitelli, Andrea Marino, Alberto Enrico Maraolo, Antonio Russo, Agnese Colpani, Michele Bartoletti, Anna Maria Cattelan, Cristina Mussini, Saverio Giuseppe Parisi, Luigi Angelo Vaira, Giuseppe Nunnari, Giordano Madeddu
{"title":"Comparing Large Language Models for antibiotic prescribing in different clinical scenarios: which perform better?","authors":"Andrea De Vito, Nicholas Geremia, Davide Fiore Bavaro, Susan K Seo, Justin Laracy, Maria Mazzitelli, Andrea Marino, Alberto Enrico Maraolo, Antonio Russo, Agnese Colpani, Michele Bartoletti, Anna Maria Cattelan, Cristina Mussini, Saverio Giuseppe Parisi, Luigi Angelo Vaira, Giuseppe Nunnari, Giordano Madeddu","doi":"10.1016/j.cmi.2025.03.002","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Large language models (LLMs) show promise in clinical decision-making, but comparative evaluations of their antibiotic prescribing accuracy are limited. This study assesses the performance of various LLMs in recommending antibiotic treatments across diverse clinical scenarios.</p><p><strong>Methods: </strong>Fourteen LLMs, including standard and premium versions of ChatGPT, Claude, Copilot, Gemini, Le Chat, Grok, Perplexity, and Pi.ai, were evaluated using 60 clinical cases with antibiograms covering ten infection types. A standardised prompt was used for antibiotic recommendations focusing on drug choice, dosage, and treatment duration. Responses were anonymised and reviewed by a blinded expert panel assessing antibiotic appropriateness, dosage correctness, and duration adequacy.</p><p><strong>Results: </strong>A total of 840 responses were collected and analysed. ChatGPT-o1 demonstrated the highest accuracy in antibiotic prescriptions, with 71.7%(43/60) of its recommendations classified as correct and only one (1.7%) incorrect. Gemini and Claude 3 Opus had the lowest accuracy. Dosage correctness was highest for ChatGPT-o1 (96.7%, 58/60), followed by Perplexity Pro (90.0%, 54/60) and Claude 3.5Sonnet (91.7%, 55/60). In treatment duration, Gemini provided the most appropriate recommendations (75.0%, 45/60), while Claude 3.5 Sonnet tended to over-prescribe duration. Performance declined with increasing case complexity, particularly for difficult-to-treat microorganisms.</p><p><strong>Conclusions: </strong>There is significant variability among LLMs in prescribing appropriate antibiotics, dosages, and treatment durations. ChatGPT-o1 outperformed other models, indicating the potential of advanced LLMs as decision-support tools in antibiotic prescribing. However, decreased accuracy in complex cases and inconsistencies among models highlight the need for careful validation before clinical utilisation.</p>","PeriodicalId":10444,"journal":{"name":"Clinical Microbiology and Infection","volume":" ","pages":""},"PeriodicalIF":10.9000,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Microbiology and Infection","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.cmi.2025.03.002","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFECTIOUS DISEASES","Score":null,"Total":0}
Citations: 0

Abstract

Objectives: Large language models (LLMs) show promise in clinical decision-making, but comparative evaluations of their antibiotic prescribing accuracy are limited. This study assesses the performance of various LLMs in recommending antibiotic treatments across diverse clinical scenarios.

Methods: Fourteen LLMs, including standard and premium versions of ChatGPT, Claude, Copilot, Gemini, Le Chat, Grok, Perplexity, and Pi.ai, were evaluated using 60 clinical cases with antibiograms covering ten infection types. A standardised prompt was used for antibiotic recommendations focusing on drug choice, dosage, and treatment duration. Responses were anonymised and reviewed by a blinded expert panel assessing antibiotic appropriateness, dosage correctness, and duration adequacy.
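
As a rough illustration of this setup, below is a minimal Python sketch of the standardised-prompt collection and blinding step. All identifiers, the prompt wording, and the blinding scheme are hypothetical illustrations under the stated assumptions, not the study's actual code.

```python
# Hypothetical sketch of the evaluation pipeline described in the Methods;
# query_model, the prompt text, and the blinding scheme are illustrative
# assumptions, not taken from the study.
import random

PROMPT = (
    "You are given a clinical case with its antibiogram. Recommend an "
    "antibiotic treatment, specifying drug choice, dosage, and treatment "
    "duration.\n\nCase:\n{case}"
)

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for a call to one of the 14 LLM interfaces."""
    raise NotImplementedError("wire this to the relevant chat client")

def collect_blinded_responses(models: list[str], cases: list[str]):
    """Query every model on every case, then anonymise for the expert panel."""
    responses = [
        {"model": m, "case": c, "answer": query_model(m, PROMPT.format(case=c))}
        for c in cases
        for m in models
    ]
    random.shuffle(responses)  # remove ordering cues about the source model
    key = {i: r["model"] for i, r in enumerate(responses)}  # held back from reviewers
    blinded = [{"id": i, "case": r["case"], "answer": r["answer"]}
               for i, r in enumerate(responses)]
    return blinded, key
```

With 14 models and 60 cases, this loop yields the 840 responses analysed in the Results.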

Results: A total of 840 responses were collected and analysed. ChatGPT-o1 demonstrated the highest accuracy in antibiotic prescriptions, with 71.7% (43/60) of its recommendations classified as correct and only one (1.7%) incorrect. Gemini and Claude 3 Opus had the lowest accuracy. Dosage correctness was highest for ChatGPT-o1 (96.7%, 58/60), followed by Claude 3.5 Sonnet (91.7%, 55/60) and Perplexity Pro (90.0%, 54/60). For treatment duration, Gemini provided the most appropriate recommendations (75.0%, 45/60), whereas Claude 3.5 Sonnet tended to recommend overly long durations. Performance declined with increasing case complexity, particularly for difficult-to-treat microorganisms.
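
As a quick arithmetic check, the counts quoted above reproduce the reported percentages; the Wilson 95% confidence intervals in this sketch are our addition for context, not figures reported in the study.

```python
# Sanity-check the reported proportions (counts taken from the abstract);
# the Wilson intervals are an illustrative addition, not study results.
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

for label, k in [("ChatGPT-o1 correct choice", 43),
                 ("ChatGPT-o1 correct dosage", 58),
                 ("Gemini appropriate duration", 45)]:
    lo, hi = wilson_ci(k, 60)
    print(f"{label}: {k}/60 = {k/60:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```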

Conclusions: There is significant variability among LLMs in prescribing appropriate antibiotics, dosages, and treatment durations. ChatGPT-o1 outperformed other models, indicating the potential of advanced LLMs as decision-support tools in antibiotic prescribing. However, decreased accuracy in complex cases and inconsistencies among models highlight the need for careful validation before clinical utilisation.

Source journal
CiteScore: 25.30
Self-citation rate: 2.10%
Articles published: 441
Review turnaround: 2-4 weeks
About the journal: Clinical Microbiology and Infection (CMI) is a monthly journal published by the European Society of Clinical Microbiology and Infectious Diseases. It focuses on peer-reviewed papers covering basic and applied research in microbiology, infectious diseases, virology, parasitology, immunology, and epidemiology as they relate to therapy and diagnostics.