Enhancing diagnostics: ChatGPT-4 performance in ulcerative colitis endoscopic assessment.

Endoscopy International Open | IF 2.2 | Q3 Gastroenterology & Hepatology
Publication date: 2025-03-14 | eCollection date: 2025-01-01 | DOI: 10.1055/a-2542-0943
Asaf Levartovsky, Ahmad Albshesh, Ana Grinman, Eyal Shachar, Adi Lahat, Rami Eliakim, Uri Kopylov
{"title":"Enhancing diagnostics: ChatGPT-4 performance in ulcerative colitis endoscopic assessment.","authors":"Asaf Levartovsky, Ahmad Albshesh, Ana Grinman, Eyal Shachar, Adi Lahat, Rami Eliakim, Uri Kopylov","doi":"10.1055/a-2542-0943","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and study aims: </strong>The Mayo Endoscopic Subscore (MES) is widely utilized for assessing mucosal activity in ulcerative colitis (UC). Artificial intelligence has emerged as a promising tool for enhancing diagnostic precision and addressing interobserver variability. This study evaluated the diagnostic accuracy of ChatGPT-4, a multimodal large language model, in identifying and grading endoscopic images of UC patients using the MES.</p><p><strong>Patients and methods: </strong>Real-world endoscopic images of UC patients were reviewed by an expert consensus board. Each image was graded based on the MES. Only images that were uniformly graded were subsequently provided to three inflammatory bowel disease (IBD) specialists and ChatGPT-4. Severity gradings of the IBD specialists and ChatGPT-4 were compared with assessments made by the expert consensus board.</p><p><strong>Results: </strong>Thirty of 50 images were graded with complete agreement among the experts. Compared with the consensus board, ChatGPT-4 gradings had a mean accuracy rate of 78.9% whereas the mean accuracy rate for the IBD specialists was 81.1%. Between the two groups, there was no statistically significant difference in mean accuracy rates ( <i>P</i> = 0.71) and a high degree of reliability was found.</p><p><strong>Conclusions: </strong>ChatGPT-4 has the potential to assess mucosal inflammation severity from endoscopic images of UC patients, without prior configuration or fine-tuning. Performance rates were comparable to those of IBD specialists.</p>","PeriodicalId":11671,"journal":{"name":"Endoscopy International Open","volume":"13 ","pages":"a25420943"},"PeriodicalIF":2.2000,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11922305/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Endoscopy International Open","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1055/a-2542-0943","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"GASTROENTEROLOGY & HEPATOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Background and study aims: The Mayo Endoscopic Subscore (MES) is widely utilized for assessing mucosal activity in ulcerative colitis (UC). Artificial intelligence has emerged as a promising tool for enhancing diagnostic precision and addressing interobserver variability. This study evaluated the diagnostic accuracy of ChatGPT-4, a multimodal large language model, in identifying and grading endoscopic images of UC patients using the MES.

Patients and methods: Real-world endoscopic images of UC patients were reviewed by an expert consensus board. Each image was graded based on the MES. Only images that were uniformly graded were subsequently provided to three inflammatory bowel disease (IBD) specialists and ChatGPT-4. Severity gradings of the IBD specialists and ChatGPT-4 were compared with assessments made by the expert consensus board.
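The abstract does not describe the authors' exact prompting workflow, so the following is only a minimal sketch of how endoscopic images could be submitted to a multimodal GPT-4 model for MES grading, assuming API access via the OpenAI Python SDK; the model identifier, prompt wording, and file name are illustrative assumptions, not the study protocol.

```python
# Illustrative sketch only: the abstract does not describe the authors' exact
# prompting workflow. Model name, prompt wording, and file paths are assumptions.
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MES_PROMPT = (
    "You are assessing an endoscopic image of a patient with ulcerative colitis. "
    "Assign a Mayo Endoscopic Subscore (0 = normal/inactive, 1 = mild, "
    "2 = moderate, 3 = severe) and reply with the single digit only."
)

def grade_image(path: str) -> str:
    """Send one endoscopic image to a multimodal GPT-4-class model and return its MES grade."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable GPT-4 variant
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": MES_PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()

# Example usage (hypothetical file name):
# print(grade_image("uc_case_01.jpg"))
```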

Results: Thirty of 50 images were graded with complete agreement among the experts. Compared with the consensus board, ChatGPT-4 gradings had a mean accuracy rate of 78.9%, whereas the mean accuracy rate for the IBD specialists was 81.1%. There was no statistically significant difference in mean accuracy rates between the two groups (P = 0.71), and a high degree of reliability was found.
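The abstract reports accuracy against the consensus grading and a reliability result without naming the exact statistic or significance test used; the sketch below, with hypothetical grade lists, shows one way such per-rater accuracy and agreement against consensus could be computed.

```python
# Illustrative sketch: grade lists are hypothetical example data; the abstract
# does not specify which reliability statistic or significance test was used.
from sklearn.metrics import accuracy_score, cohen_kappa_score

consensus  = [0, 1, 2, 3, 2, 1]  # expert consensus MES per image (example data)
chatgpt    = [0, 1, 2, 2, 2, 1]  # ChatGPT-4 grades for the same images
specialist = [0, 1, 3, 3, 2, 1]  # one IBD specialist's grades

for name, grades in [("ChatGPT-4", chatgpt), ("IBD specialist", specialist)]:
    acc = accuracy_score(consensus, grades)
    kappa = cohen_kappa_score(consensus, grades, weights="quadratic")
    print(f"{name}: accuracy={acc:.1%}, weighted kappa={kappa:.2f}")
```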

Conclusions: ChatGPT-4 has the potential to assess mucosal inflammation severity from endoscopic images of UC patients without prior configuration or fine-tuning. Its performance was comparable to that of IBD specialists.
