A Comparative Analysis of the Accuracy and Readability of Popular Artificial Intelligence-Chat Bots for Inguinal Hernia Management

Impact Factor: 1.0 · JCR Quartile: Q3 (Surgery) · CAS Tier: 4 (Medicine)
Thisun Udagedara, Ashley Tran, Sumaya Bokhari, Sharon Shiraga, Stuart Abel, Caitlin Houghton, Katie Galvin, Kamran Samakar, Luke R Putnam
{"title":"流行人工智能聊天机器人治疗腹股沟疝的准确性和可读性对比分析。","authors":"Thisun Udagedara, Ashley Tran, Sumaya Bokhari, Sharon Shiraga, Stuart Abel, Caitlin Houghton, Katie Galvin, Kamran Samakar, Luke R Putnam","doi":"10.1177/00031348251353065","DOIUrl":null,"url":null,"abstract":"<p><p>BackgroundArtificial intelligence (AI), particularly large language models (LLMs), has gained attention for its clinical applications. While LLMs have shown utility in various medical fields, their performance in inguinal hernia repair (IHR) remains understudied. This study seeks to evaluate the accuracy and readability of LLM-generated responses to IHR-related questions, as well as their performance across distinct clinical categories.MethodsThirty questions were developed based on clinical guidelines for IHR and categorized into four subgroups: diagnosis, perioperative care, surgical management, and other. Questions were entered into Microsoft Copilot®, Google Gemini®, and OpenAI ChatGPT-4®. Responses were anonymized and evaluated by six fellowship-trained, minimally invasive surgeons using a validated 5-point Likert scale. Readability was assessed with six validated formulae.ResultsGPT-4 and Gemini outperformed Copilot in overall mean scores for response accuracy (Copilot: 3.75 ± 0.99, Gemini: 4.35 ± 0.82, and GPT-4: 4.30 ± 0.89; <i>P</i> < 0.001). Subgroup analysis revealed significantly higher scores for Gemini and GPT-4 in perioperative care (<i>P</i> = 0.025) and surgical management (<i>P</i> < 0.001). Readability scores were comparable across models, with all responses at college to college-graduate reading levels.DiscussionThis study highlights the variability in LLM performance, with GPT-4 and Gemini producing higher-quality responses than Copilot for IHR-related questions. However, the consistently high reading level of responses may limit accessibility for patients. These findings underscore the potential of LLMs to serve as valuable adjunct tools in surgical practice, with ongoing advancements expected to further enhance their accuracy, readability, and applicability.</p>","PeriodicalId":7782,"journal":{"name":"American Surgeon","volume":" ","pages":"31348251353065"},"PeriodicalIF":1.0000,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Comparative Analysis of the Accuracy and Readability of Popular Artificial Intelligence-Chat Bots for Inguinal Hernia Management.\",\"authors\":\"Thisun Udagedara, Ashley Tran, Sumaya Bokhari, Sharon Shiraga, Stuart Abel, Caitlin Houghton, Katie Galvin, Kamran Samakar, Luke R Putnam\",\"doi\":\"10.1177/00031348251353065\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>BackgroundArtificial intelligence (AI), particularly large language models (LLMs), has gained attention for its clinical applications. While LLMs have shown utility in various medical fields, their performance in inguinal hernia repair (IHR) remains understudied. This study seeks to evaluate the accuracy and readability of LLM-generated responses to IHR-related questions, as well as their performance across distinct clinical categories.MethodsThirty questions were developed based on clinical guidelines for IHR and categorized into four subgroups: diagnosis, perioperative care, surgical management, and other. Questions were entered into Microsoft Copilot®, Google Gemini®, and OpenAI ChatGPT-4®. 
Responses were anonymized and evaluated by six fellowship-trained, minimally invasive surgeons using a validated 5-point Likert scale. Readability was assessed with six validated formulae.ResultsGPT-4 and Gemini outperformed Copilot in overall mean scores for response accuracy (Copilot: 3.75 ± 0.99, Gemini: 4.35 ± 0.82, and GPT-4: 4.30 ± 0.89; <i>P</i> < 0.001). Subgroup analysis revealed significantly higher scores for Gemini and GPT-4 in perioperative care (<i>P</i> = 0.025) and surgical management (<i>P</i> < 0.001). Readability scores were comparable across models, with all responses at college to college-graduate reading levels.DiscussionThis study highlights the variability in LLM performance, with GPT-4 and Gemini producing higher-quality responses than Copilot for IHR-related questions. However, the consistently high reading level of responses may limit accessibility for patients. These findings underscore the potential of LLMs to serve as valuable adjunct tools in surgical practice, with ongoing advancements expected to further enhance their accuracy, readability, and applicability.</p>\",\"PeriodicalId\":7782,\"journal\":{\"name\":\"American Surgeon\",\"volume\":\" \",\"pages\":\"31348251353065\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2025-06-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"American Surgeon\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/00031348251353065\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"SURGERY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Surgeon","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/00031348251353065","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"SURGERY","Score":null,"Total":0}
Citations: 0

Abstract


Background: Artificial intelligence (AI), particularly large language models (LLMs), has gained attention for its clinical applications. While LLMs have shown utility in various medical fields, their performance in inguinal hernia repair (IHR) remains understudied. This study seeks to evaluate the accuracy and readability of LLM-generated responses to IHR-related questions, as well as their performance across distinct clinical categories.

Methods: Thirty questions were developed based on clinical guidelines for IHR and categorized into four subgroups: diagnosis, perioperative care, surgical management, and other. Questions were entered into Microsoft Copilot®, Google Gemini®, and OpenAI ChatGPT-4®. Responses were anonymized and evaluated by six fellowship-trained, minimally invasive surgeons using a validated 5-point Likert scale. Readability was assessed with six validated formulae.

Results: GPT-4 and Gemini outperformed Copilot in overall mean scores for response accuracy (Copilot: 3.75 ± 0.99, Gemini: 4.35 ± 0.82, GPT-4: 4.30 ± 0.89; P < 0.001). Subgroup analysis revealed significantly higher scores for Gemini and GPT-4 in perioperative care (P = 0.025) and surgical management (P < 0.001). Readability scores were comparable across models, with all responses at college to college-graduate reading levels.

Discussion: This study highlights the variability in LLM performance, with GPT-4 and Gemini producing higher-quality responses than Copilot for IHR-related questions. However, the consistently high reading level of responses may limit accessibility for patients. These findings underscore the potential of LLMs to serve as valuable adjunct tools in surgical practice, with ongoing advancements expected to further enhance their accuracy, readability, and applicability.
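The abstract does not name the six readability formulae used. As a minimal sketch of how such an assessment could be reproduced, the snippet below scores a chatbot response with six commonly used measures (Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog, SMOG, Coleman-Liau, and Automated Readability Index) via the open-source textstat Python package; the six formulae and the sample text are assumptions for illustration, not details confirmed by the study.

```python
# Hypothetical sketch: scoring a chatbot response with six common
# readability formulae using the textstat package (pip install textstat).
# The specific formulae used in the study are not stated in the abstract.
import textstat

# Placeholder text standing in for an LLM response, not study data.
response = (
    "Inguinal hernia repair is typically performed via an open or "
    "laparoscopic approach, with mesh reinforcement often recommended "
    "to reduce the risk of recurrence."
)

scores = {
    "Flesch Reading Ease": textstat.flesch_reading_ease(response),
    "Flesch-Kincaid Grade": textstat.flesch_kincaid_grade(response),
    "Gunning Fog": textstat.gunning_fog(response),
    "SMOG Index": textstat.smog_index(response),
    "Coleman-Liau Index": textstat.coleman_liau_index(response),
    "Automated Readability Index": textstat.automated_readability_index(response),
}

for formula, score in scores.items():
    print(f"{formula}: {score:.1f}")
```

Grade-level formulae such as Flesch-Kincaid map text to a US school grade, so the "college to college-graduate" finding corresponds roughly to scores of 13 and above.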
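The abstract reports P values across the three models but does not state which statistical test was applied. The sketch below assumes a Kruskal-Wallis test, a common choice for comparing ordinal Likert ratings across more than two groups; the rating arrays are randomly generated placeholders, not the study's data.

```python
# Hypothetical sketch: comparing Likert accuracy ratings across three
# chatbots with a Kruskal-Wallis test. The actual test used in the study
# is not named in the abstract, and the ratings below are placeholders.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# 30 questions x 6 raters = 180 ratings per model on a 1-5 Likert scale.
copilot = rng.integers(1, 6, size=180)
gemini = rng.integers(2, 6, size=180)
gpt4 = rng.integers(2, 6, size=180)

stat, p = kruskal(copilot, gemini, gpt4)
print(f"Kruskal-Wallis H = {stat:.2f}, P = {p:.4f}")
```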

Source Journal
American Surgeon (Medicine - Surgery)
CiteScore: 1.40
Self-citation rate: 0.00%
Articles published per year: 623
Journal description: The American Surgeon is a monthly peer-reviewed publication published by the Southeastern Surgical Congress. Its area of concentration is clinical general surgery, as defined by the content areas of the American Board of Surgery: alimentary tract (including bariatric surgery), abdomen and its contents, breast, skin and soft tissue, endocrine system, solid organ transplantation, pediatric surgery, surgical critical care, surgical oncology (including head and neck surgery), trauma and emergency surgery, and vascular surgery.