A Novel Switching of Artificial Intelligence to Generate Simultaneously Multimodal Images to Assess Inflammation and Predict Outcomes in Ulcerative Colitis-(With Video).
Marietta Iacucci, Irene Zammarchi, Giovanni Santacroce, Bisi Bode Kolawole, Ujwala Chaudhari, Rocio Del Amor, Pablo Meseguer, Valery Naranjo, Miguel Puga-Tejada, Ivan Capobianco, Ilaria Ditonno, Andrea Buda, Brian Hayes, Rory Crotty, Raf Bisschops, Subrata Ghosh, Enrico Grisan
{"title":"A Novel Switching of Artificial Intelligence to Generate Simultaneously Multimodal Images to Assess Inflammation and Predict Outcomes in Ulcerative Colitis-(With Video).","authors":"Marietta Iacucci, Irene Zammarchi, Giovanni Santacroce, Bisi Bode Kolawole, Ujwala Chaudhari, Rocio Del Amor, Pablo Meseguer, Valery Naranjo, Miguel Puga-Tejada, Ivan Capobianco, Ilaria Ditonno, Andrea Buda, Brian Hayes, Rory Crotty, Raf Bisschops, Subrata Ghosh, Enrico Grisan","doi":"10.1111/den.15067","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Virtual Chromoendoscopy (VCE) is pivotal for assessing activity and predicting outcomes in Ulcerative Colitis (UC), though interobserver variability and the need for expertise persist. Artificial intelligence (AI) offers standardized VCE-based assessment. This study introduces a novel AI model to detect and simultaneously generate various endoscopic modalities, enhancing AI-driven inflammation assessment and outcome prediction in UC.</p><p><strong>Methods: </strong>Endoscopic videos in high-definition white-light, iScan2, iScan3, and NBI from UC patients of the international PICaSSO iScan and NBI cohort (302 and 54 patients, respectively) were used to develop a neural network to identify the acquisition modality of each frame and for inter-modality image switching. 2535 frames from 169 videos of the iScan cohort were switched to different modalities and trained a deep-learning model for inflammation assessment. Subsequently, the model was tested on a subset of the iScan and NBI cohorts (72 and 51 videos, respectively). Performance in predicting endoscopic and histological activity and outcomes was evaluated.</p><p><strong>Results: </strong>The model efficiently classified and converted images across modalities (92% accuracy). Performance in predicting endoscopic and histological remission was excellent, especially with different modalities combined in both iScan (accuracy 81.3% and 89.6%; AUROC 0.92 and 0.89 by UCEIS and PICaSSO, respectively) and the NBI cohort. Moreover, it showed a remarkable ability in predicting clinical outcomes.</p><p><strong>Conclusions: </strong>Our multimodal \"AI-switching\" model innovatively detects and transitions between different endoscopic modalities, refining inflammation assessment and outcome prediction in UC by integrating model-derived images.</p>","PeriodicalId":72813,"journal":{"name":"Digestive endoscopy : official journal of the Japan Gastroenterological Endoscopy Society","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digestive endoscopy : official journal of the Japan Gastroenterological Endoscopy Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1111/den.15067","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Objectives: Virtual Chromoendoscopy (VCE) is pivotal for assessing activity and predicting outcomes in Ulcerative Colitis (UC), though interobserver variability and the need for expertise persist. Artificial intelligence (AI) offers standardized VCE-based assessment. This study introduces a novel AI model to detect and simultaneously generate various endoscopic modalities, enhancing AI-driven inflammation assessment and outcome prediction in UC.
Methods: Endoscopic videos in high-definition white-light, iScan2, iScan3, and NBI from UC patients of the international PICaSSO iScan and NBI cohorts (302 and 54 patients, respectively) were used to develop a neural network that identifies the acquisition modality of each frame and performs inter-modality image switching. A total of 2535 frames from 169 videos of the iScan cohort were switched to different modalities and used to train a deep-learning model for inflammation assessment. The model was then tested on subsets of the iScan and NBI cohorts (72 and 51 videos, respectively), and its performance in predicting endoscopic and histological activity and clinical outcomes was evaluated.
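The abstract does not specify the network architectures, so the following is a minimal illustrative sketch (PyTorch) of the three-stage pipeline described above: (1) classify the acquisition modality of a frame, (2) switch the frame to a target modality with an image-to-image generator, and (3) score inflammation on the switched frames. All class names, layer choices, and the modality label set are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the modality-detection / modality-switching / inflammation
# pipeline; architectures and names are illustrative assumptions only.
import torch
import torch.nn as nn

MODALITIES = ["WLE", "iScan2", "iScan3", "NBI"]  # assumed label set

class ModalityClassifier(nn.Module):
    """Predicts which endoscopic modality a frame was acquired in."""
    def __init__(self, n_classes=len(MODALITIES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class ModalitySwitcher(nn.Module):
    """Image-to-image generator conditioned on the target modality."""
    def __init__(self, n_modalities=len(MODALITIES)):
        super().__init__()
        # Target modality is injected as extra constant-valued channels.
        self.net = nn.Sequential(
            nn.Conv2d(3 + n_modalities, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
        self.n_modalities = n_modalities

    def forward(self, x, target_idx):
        b, _, h, w = x.shape
        onehot = torch.zeros(b, self.n_modalities, h, w, device=x.device)
        onehot[torch.arange(b), target_idx] = 1.0
        return self.net(torch.cat([x, onehot], dim=1))

class InflammationClassifier(nn.Module):
    """Scores remission vs. activity on a (generated) frame."""
    def __init__(self):
        super().__init__()
        self.backbone = ModalityClassifier(n_classes=2)  # reuse the simple CNN

    def forward(self, x):
        return self.backbone(x)

# Example forward pass on a dummy frame batch.
frames = torch.rand(4, 3, 224, 224) * 2 - 1                 # frames scaled to [-1, 1]
modality_logits = ModalityClassifier()(frames)               # step 1: detect modality
switched = ModalitySwitcher()(frames, torch.tensor([3, 3, 3, 3]))  # step 2: switch, e.g. to NBI
remission_logits = InflammationClassifier()(switched)        # step 3: assess inflammation
print(modality_logits.shape, switched.shape, remission_logits.shape)
```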
Results: The model efficiently classified and converted images across modalities (92% accuracy). Performance in predicting endoscopic and histological remission was excellent, particularly when different modalities were combined, in both the iScan cohort (accuracy 81.3% and 89.6%; AUROC 0.92 and 0.89 by UCEIS and PICaSSO, respectively) and the NBI cohort. Moreover, the model showed a remarkable ability to predict clinical outcomes.
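For readers unfamiliar with the reported metrics, the snippet below shows how accuracy and AUROC for remission prediction are typically computed (scikit-learn), assuming per-video remission probabilities from the model and binary ground-truth labels derived from UCEIS or PICaSSO cut-offs. The arrays and the 0.5 threshold are dummy placeholders, not the study data.

```python
# Illustrative evaluation sketch; values below are dummy data, not study results.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                 # 1 = remission (e.g. by UCEIS cut-off)
y_prob = np.array([0.9, 0.2, 0.7, 0.8, 0.4, 0.1, 0.6, 0.3])  # model's remission probabilities

accuracy = accuracy_score(y_true, (y_prob >= 0.5).astype(int))  # assumed 0.5 decision threshold
auroc = roc_auc_score(y_true, y_prob)
print(f"accuracy={accuracy:.3f}, AUROC={auroc:.3f}")
```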
Conclusions: Our multimodal "AI-switching" model innovatively detects and transitions between different endoscopic modalities, refining inflammation assessment and outcome prediction in UC by integrating model-derived images.