Comparing ChatGPT3.5 and Bard recommendations for colonoscopy intervals: Bridging the gap in healthcare settings.

IF 2.3 Q3 GASTROENTEROLOGY & HEPATOLOGY
Endoscopy International Open Pub Date : 2025-06-17 eCollection Date: 2025-01-01 DOI:10.1055/a-2586-5912
Maziar Amini, Patrick W Chang, Rio O Davis, Denis D Nguyen, Jennifer L Dodge, Jennifer Phan, James Buxbaum, Ara Sahakian
Citations: 0

Abstract



Background and study aims: Colorectal cancer is a leading cause of cancer-related deaths, with screening and surveillance colonoscopy playing a crucial role in early detection. This study examined the efficacy of two freely available large language models (LLMs), GPT3.5 and Bard, in recommending colonoscopy intervals in diverse healthcare settings.

Patients and methods: A cross-sectional study was conducted using data from routine colonoscopies at a large safety-net hospital and a private tertiary hospital. GPT3.5 and Bard were tasked with recommending screening intervals based on colonoscopy reports and pathology data, and their accuracy and inter-rater reliability were compared with those of a guideline-directed endoscopist panel.

Results: Of 549 colonoscopies analyzed (n = 268 at the safety-net hospital and n = 281 at the private hospital), GPT3.5 showed better concordance with guideline recommendations (GPT3.5: 60.4% vs. Bard: 50.0%, P < 0.001). In the safety-net hospital, GPT3.5 had a 60.5% concordance rate with the panel compared with Bard's 45.7% (P < 0.001). For the private hospital, concordance was 60.3% for GPT3.5 and 54.3% for Bard (P = 0.13). Overall, GPT3.5 showed fair agreement with the panel (kappa = 0.324), whereas Bard displayed lower agreement (kappa = 0.219). For the safety-net hospital, GPT3.5 showed fair agreement with the panel (kappa = 0.340), whereas Bard showed slight agreement (kappa = 0.148). For the private hospital, both GPT3.5 and Bard demonstrated fair agreement with the panel (kappa = 0.295 and 0.282, respectively).
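The kappa values reported above are Cohen's kappa, a chance-corrected measure of agreement between two raters. As a minimal illustration (the interval labels below are invented for demonstration and are not data from this study), the statistic can be computed from two lists of categorical recommendations:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for agreement expected by chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where the two raters give the same label
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical surveillance-interval labels (years): an LLM vs. an endoscopist panel
llm   = ["3", "10", "5", "3", "10", "5", "10", "3"]
panel = ["3", "10", "3", "3", "10", "5", "5", "3"]
print(round(cohens_kappa(llm, panel), 3))  # moderate agreement on this toy sample
```

By the conventional Landis and Koch interpretation, kappa of 0.21 to 0.40 is "fair" and 0.41 to 0.60 "moderate" agreement, which is why values such as 0.324 and 0.148 above are described as fair and slight, respectively.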

Conclusions: This study highlights the limitations of freely available LLMs in assisting with colonoscopy screening recommendations. Although the potential of freely available LLMs to offer uniformity is significant, their low accuracy precludes their use as the sole source of recommendations.
