Performance of Large Language Models in Complex Anesthesia Decision-Making: A Comparative Study of Four LLMs in High-Risk Patients.
Qian Ruan, Jinghong Shi, Yunke Dai, Pingliang Yang, Na Zhu, Shun Wang
Journal of Medical Systems, 49(1):122, published 2025-10-01. DOI: 10.1007/s10916-025-02247-3
Citations: 0
Abstract
To evaluate and compare the performance of four Large Language Models (LLMs) in anesthesia decision-making for critically ill obstetric and geriatric patients, and to analyze their decision reliability across different surgical specialties. This was a prospective comparative analysis using standardized case evaluations of four LLMs (ChatGPT-4o, Claude 3.5 Sonnet, DeepSeek-R1, and Grok 3). Thirty complex surgical cases (10 obstetric, 20 geriatric; 8 specialties) were analyzed. The models were tested within a 12-dimensional framework using unified prompts and decision points. Five trained anesthesiologists independently evaluated the models across six dimensions (patient assessment, anesthesia plan, risk management, individualization, contingency planning, and decision logic), each scored on a 1-10 scale for a total of 6-60 points. Overall, DeepSeek performed best (51.43 ± 2.74 points), significantly outperforming the other models (P < 0.001). For obstetric cases, the mean scores were: DeepSeek 52.00 ± 1.83, Grok 49.40 ± 3.06, ChatGPT 47.60 ± 2.88, and Claude 46.60 ± 2.17. For geriatric cases, the scores were: DeepSeek 51.15 ± 3.10, Grok 48.60 ± 2.33, ChatGPT 47.35 ± 2.50, and Claude 45.75 ± 2.05. Across specialties, all models performed best in hepatobiliary, burn, and thoracic surgery. DeepSeek demonstrated consistent performance across all dimensions, with notable advantages in decision logic (8.80 ± 0.40) and contingency planning (8.27 ± 0.45). All LLMs demonstrated strong anesthesia decision-making capabilities, with DeepSeek showing the best overall performance. Exploratory analysis revealed performance variations across specialties, although the small sample sizes preclude definitive conclusions. Clinical implementation should consider specialty-specific factors and decision-process characteristics.
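The scoring arithmetic described above (six dimensions, each rated 1-10, summed to a 6-60 case total, then summarized as mean ± SD per model) can be illustrated with a short sketch. This is a minimal illustration only: the data below are simulated, the variable and function names are invented, and the use of a one-way ANOVA for the overall comparison is an assumption, since the abstract reports P < 0.001 without naming the statistical test.

```python
# Minimal sketch of the scoring arithmetic described in the abstract.
# NOTE: the data are SIMULATED and the one-way ANOVA is an ASSUMPTION for
# illustration; the abstract does not name the test behind P < 0.001.
import numpy as np
from scipy import stats

# Six evaluation dimensions, each rated 1-10, giving a 6-60 total per case.
DIMENSIONS = ["patient_assessment", "anesthesia_plan", "risk_management",
              "individualization", "contingency_planning", "decision_logic"]

def case_total(ratings: dict) -> int:
    """Sum the six 1-10 dimension ratings into a single 6-60 case score."""
    return sum(ratings[d] for d in DIMENSIONS)

def summarize(totals) -> tuple:
    """Mean and sample SD across cases, matching the per-model summaries reported."""
    arr = np.asarray(totals, dtype=float)
    return arr.mean(), arr.std(ddof=1)

# Hypothetical per-case totals (30 cases per model), drawn to roughly match the
# reported means/SDs -- NOT the study's actual data.
rng = np.random.default_rng(0)
scores = {
    "DeepSeek-R1": rng.normal(51.4, 2.7, 30),
    "Grok 3": rng.normal(48.9, 2.6, 30),
    "ChatGPT-4o": rng.normal(47.4, 2.6, 30),
    "Claude 3.5 Sonnet": rng.normal(46.0, 2.1, 30),
}

for model, totals in scores.items():
    mean, sd = summarize(totals)
    print(f"{model}: {mean:.2f} ± {sd:.2f}")

# One plausible overall comparison across the four models.
f_stat, p_value = stats.f_oneway(*scores.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, P = {p_value:.3g}")
```

In the actual study, five raters scored each case; how their ratings were pooled (for example, averaged per case) is not detailed in the abstract, so the aggregation step above is indicative only.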
Journal description:
Journal of Medical Systems provides a forum for the presentation and discussion of the increasingly extensive applications of new systems techniques and methods in hospital, clinic, and physician's office administration; pathology, radiology, and pharmaceutical delivery systems; medical records storage and retrieval; and ancillary patient-support systems. The journal publishes informative articles, essays, and studies across the entire scale of medical systems, from large hospital programs to novel small-scale medical services. Education is an integral part of this amalgamation of sciences, and selected articles are published in this area. Since existing medical systems are constantly being modified to fit particular circumstances and to solve specific problems, the journal includes a special section devoted to status reports on current installations.