CONCORDANCE BETWEEN EXPERT GASTROENTEROLOGISTS AND ARTIFICIAL INTELLIGENCE TOOLS IN SOLVING HEPATOLOGY CLINICAL CASES

Jesús Ignacio Mazadiego Cid, María del Rosario Herrero Maceda, Paloma Montserrat Diego Salazar, Rogelio Zapata Arenas, Scherezada María Isabel Mejía Loza, Juanita Pérez Escobar, María Fátima Higuera de la Tijera, Elías Artemio San Vicente Parada, Raquel Yazmín López Pérez, Felipe Zamarripa Dorsey, Yoali Maribel Velasco Santiago, Adriana López Luria, Moises Coutiño Flores, Alejandra Díaz García

Annals of Hepatology, vol. 30, Article 102032 (September 1, 2025). DOI: 10.1016/j.aohep.2025.102032
Abstract
Introduction and Objectives
Evidence regarding the utility of artificial intelligence (AI) for the diagnosis of clinical cases in gastroenterology is limited, and is even scarcer in hepatology.
The objective was to determine the concordance between the responses of various AI models and those of specialist physicians in the resolution of hepatology clinical cases.
Materials and Methods
This was a clinical, observational, analytical, and prospective study. The assessment instrument comprised six hepatology clinical cases, each featuring five questions. A panel of eight experts from different institutions was convened, and the kappa coefficient (κ) and Cronbach's alpha were calculated from their individual responses. Items that failed to meet the validation threshold (≥ 80 % agreement and κ ≥ 0.6) were reviewed through iterative rounds of a modified Delphi method. Finally, κ was calculated to evaluate concordance between responses generated by the AI models and the expert consensus.
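The concordance statistic used above, Cohen's kappa, compares observed agreement against the agreement expected by chance. A minimal sketch of that calculation follows; the answer lists are hypothetical illustrations, not data from the study:

```python
# Minimal sketch of Cohen's kappa for two raters answering the same
# multiple-choice items (e.g., an expert and an AI model). The example
# answers below are invented for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items where the two raters gave the same answer.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal answer frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[lab] * freq_b[lab]
                     for lab in set(freq_a) | set(freq_b)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical answers ('a'-'d') to six items:
expert = ['a', 'b', 'c', 'a', 'd', 'b']
model  = ['a', 'b', 'c', 'd', 'd', 'a']
print(cohens_kappa(expert, model))  # ≈ 0.556
```

The study's validation threshold (κ ≥ 0.6 alongside ≥ 80 % raw agreement) reflects that raw agreement alone can look high purely by chance when answer options are few.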
Results
The expert consensus demonstrated a high overall concordance (κ = 0.901; 95 % CI [0.860, 0.943]; z = 61.57; p < 0.001). Individual model concordance ranged from moderate to substantial, with κ values between 0.539 (Meditron-7B) and 0.784 (ChatGPT-4.0 and ChatGPT-4.0 Turbo), all statistically significant. By percentage of correct responses, the highest-performing models were ChatGPT-4.0, ChatGPT-4.0 Turbo, and DeepSeek-R1 (Figure 1).
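The qualifiers "moderate" and "substantial" follow the conventional Landis & Koch (1977) benchmarks for kappa, which a short lookup makes explicit; applying it to the κ values reported above reproduces the abstract's wording:

```python
def interpret_kappa(k):
    """Landis & Koch (1977) agreement benchmarks for Cohen's kappa."""
    if k < 0.00:
        return "poor"
    if k <= 0.20:
        return "slight"
    if k <= 0.40:
        return "fair"
    if k <= 0.60:
        return "moderate"
    if k <= 0.80:
        return "substantial"
    return "almost perfect"

print(interpret_kappa(0.539))  # Meditron-7B -> "moderate"
print(interpret_kappa(0.784))  # ChatGPT-4.0 / 4.0 Turbo -> "substantial"
print(interpret_kappa(0.901))  # expert consensus -> "almost perfect"
```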
Conclusions
A moderate to substantial concordance was observed between diagnoses generated by different AI models and expert judgment in hepatology clinical cases, although variations were noted among the evaluated systems.
About the journal
Annals of Hepatology publishes original research on the biology and diseases of the liver in both humans and experimental models. Contributions may be submitted as regular articles. The journal also publishes concise reviews of both basic and clinical topics.