Comparing large language models for antibiotic prescribing in different clinical scenarios: which performs better?

IF 10.9 · CAS Tier 1 (Medicine) · Q1 INFECTIOUS DISEASES
Andrea De Vito, Nicholas Geremia, Davide Fiore Bavaro, Susan K Seo, Justin Laracy, Maria Mazzitelli, Andrea Marino, Alberto Enrico Maraolo, Antonio Russo, Agnese Colpani, Michele Bartoletti, Anna Maria Cattelan, Cristina Mussini, Saverio Giuseppe Parisi, Luigi Angelo Vaira, Giuseppe Nunnari, Giordano Madeddu
{"title":"比较不同临床情况下抗生素处方的大语言模型:哪个表现更好?","authors":"Andrea De Vito, Nicholas Geremia, Davide Fiore Bavaro, Susan K Seo, Justin Laracy, Maria Mazzitelli, Andrea Marino, Alberto Enrico Maraolo, Antonio Russo, Agnese Colpani, Michele Bartoletti, Anna Maria Cattelan, Cristina Mussini, Saverio Giuseppe Parisi, Luigi Angelo Vaira, Giuseppe Nunnari, Giordano Madeddu","doi":"10.1016/j.cmi.2025.03.002","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Large language models (LLMs) show promise in clinical decision-making, but comparative evaluations of their antibiotic prescribing accuracy are limited. This study assesses the performance of various LLMs in recommending antibiotic treatments across diverse clinical scenarios.</p><p><strong>Methods: </strong>Fourteen LLMs, including standard and premium versions of ChatGPT, Claude, Copilot, Gemini, Le Chat, Grok, Perplexity, and Pi.ai, were evaluated using 60 clinical cases with antibiograms covering 10 infection types. A standardized prompt was used for antibiotic recommendations focusing on drug choice, dosage, and treatment duration. Responses were anonymized and reviewed by a blinded expert panel assessing antibiotic appropriateness, dosage correctness, and duration adequacy.</p><p><strong>Results: </strong>A total of 840 responses were collected and analysed. ChatGPT-o1 demonstrated the highest accuracy in antibiotic prescriptions, with 71.7% (43/60) of its recommendations classified as correct and only one (1.7%) incorrect. Gemini and Claude 3 Opus had the lowest accuracy. Dosage correctness was highest for ChatGPT-o1 (96.7%, 58/60), followed by Perplexity Pro (90.0%, 54/60) and Claude 3.5 Sonnet (91.7%, 55/60). In treatment duration, Gemini provided the most appropriate recommendations (75.0%, 45/60), whereas Claude 3.5 Sonnet tended to over-prescribe duration. Performance declined with increasing case complexity, particularly for difficult-to-treat microorganisms.</p><p><strong>Discussion: </strong>There is significant variability among LLMs in prescribing appropriate antibiotics, dosages, and treatment durations. ChatGPT-o1 outperformed other models, indicating the potential of advanced LLMs as decision-support tools in antibiotic prescribing. However, decreased accuracy in complex cases and inconsistencies among models highlight the need for careful validation before clinical utilization.</p>","PeriodicalId":10444,"journal":{"name":"Clinical Microbiology and Infection","volume":" ","pages":""},"PeriodicalIF":10.9000,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparing large language models for antibiotic prescribing in different clinical scenarios: which performs better?\",\"authors\":\"Andrea De Vito, Nicholas Geremia, Davide Fiore Bavaro, Susan K Seo, Justin Laracy, Maria Mazzitelli, Andrea Marino, Alberto Enrico Maraolo, Antonio Russo, Agnese Colpani, Michele Bartoletti, Anna Maria Cattelan, Cristina Mussini, Saverio Giuseppe Parisi, Luigi Angelo Vaira, Giuseppe Nunnari, Giordano Madeddu\",\"doi\":\"10.1016/j.cmi.2025.03.002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>Large language models (LLMs) show promise in clinical decision-making, but comparative evaluations of their antibiotic prescribing accuracy are limited. 
This study assesses the performance of various LLMs in recommending antibiotic treatments across diverse clinical scenarios.</p><p><strong>Methods: </strong>Fourteen LLMs, including standard and premium versions of ChatGPT, Claude, Copilot, Gemini, Le Chat, Grok, Perplexity, and Pi.ai, were evaluated using 60 clinical cases with antibiograms covering 10 infection types. A standardized prompt was used for antibiotic recommendations focusing on drug choice, dosage, and treatment duration. Responses were anonymized and reviewed by a blinded expert panel assessing antibiotic appropriateness, dosage correctness, and duration adequacy.</p><p><strong>Results: </strong>A total of 840 responses were collected and analysed. ChatGPT-o1 demonstrated the highest accuracy in antibiotic prescriptions, with 71.7% (43/60) of its recommendations classified as correct and only one (1.7%) incorrect. Gemini and Claude 3 Opus had the lowest accuracy. Dosage correctness was highest for ChatGPT-o1 (96.7%, 58/60), followed by Perplexity Pro (90.0%, 54/60) and Claude 3.5 Sonnet (91.7%, 55/60). In treatment duration, Gemini provided the most appropriate recommendations (75.0%, 45/60), whereas Claude 3.5 Sonnet tended to over-prescribe duration. Performance declined with increasing case complexity, particularly for difficult-to-treat microorganisms.</p><p><strong>Discussion: </strong>There is significant variability among LLMs in prescribing appropriate antibiotics, dosages, and treatment durations. ChatGPT-o1 outperformed other models, indicating the potential of advanced LLMs as decision-support tools in antibiotic prescribing. However, decreased accuracy in complex cases and inconsistencies among models highlight the need for careful validation before clinical utilization.</p>\",\"PeriodicalId\":10444,\"journal\":{\"name\":\"Clinical Microbiology and Infection\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":10.9000,\"publicationDate\":\"2025-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Clinical Microbiology and Infection\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1016/j.cmi.2025.03.002\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"INFECTIOUS DISEASES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Microbiology and Infection","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.cmi.2025.03.002","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFECTIOUS DISEASES","Score":null,"Total":0}
Citations: 0

Abstract


Objectives: Large language models (LLMs) show promise in clinical decision-making, but comparative evaluations of their antibiotic prescribing accuracy are limited. This study assesses the performance of various LLMs in recommending antibiotic treatments across diverse clinical scenarios.

Methods: Fourteen LLMs, including standard and premium versions of ChatGPT, Claude, Copilot, Gemini, Le Chat, Grok, Perplexity, and Pi.ai, were evaluated using 60 clinical cases with antibiograms covering 10 infection types. A standardized prompt was used for antibiotic recommendations focusing on drug choice, dosage, and treatment duration. Responses were anonymized and reviewed by a blinded expert panel assessing antibiotic appropriateness, dosage correctness, and duration adequacy.

Results: A total of 840 responses were collected and analysed. ChatGPT-o1 demonstrated the highest accuracy in antibiotic prescriptions, with 71.7% (43/60) of its recommendations classified as correct and only one (1.7%) incorrect. Gemini and Claude 3 Opus had the lowest accuracy. Dosage correctness was highest for ChatGPT-o1 (96.7%, 58/60), followed by Claude 3.5 Sonnet (91.7%, 55/60) and Perplexity Pro (90.0%, 54/60). In treatment duration, Gemini provided the most appropriate recommendations (75.0%, 45/60), whereas Claude 3.5 Sonnet tended to over-prescribe duration. Performance declined with increasing case complexity, particularly for difficult-to-treat microorganisms.

Discussion: There is significant variability among LLMs in prescribing appropriate antibiotics, dosages, and treatment durations. ChatGPT-o1 outperformed other models, indicating the potential of advanced LLMs as decision-support tools in antibiotic prescribing. However, decreased accuracy in complex cases and inconsistencies among models highlight the need for careful validation before clinical utilization.

Source journal
CiteScore: 25.30
Self-citation rate: 2.10%
Annual publications: 441
Review turnaround: 2-4 weeks
Journal introduction: Clinical Microbiology and Infection (CMI) is a monthly journal published by the European Society of Clinical Microbiology and Infectious Diseases. It focuses on peer-reviewed papers covering basic and applied research in microbiology, infectious diseases, virology, parasitology, immunology, and epidemiology as they relate to therapy and diagnostics.