Jiao Li, Xiaojuan Jing, Qin Zhang, Xiaoxiang Wang, Li Wang, Jing Shan, Zhengkui Zhou, Lin Fan, Xun Gong, Xiaobin Sun, Song He
BMC Medical Informatics and Decision Making, 25(1):307. Published 2025-08-14. DOI: 10.1186/s12911-025-03147-9 (Journal Article; JCR Q2, Medical Informatics; Impact Factor 3.8). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12355745/pdf/
Multimodal artificial intelligence for subepithelial lesion classification and characterization: a multicenter comparative study (with video).
Background: Subepithelial lesions (SELs) present significant diagnostic challenges in gastrointestinal endoscopy, particularly in differentiating malignant types, such as gastrointestinal stromal tumors (GISTs) and neuroendocrine tumors, from benign types like leiomyomas. Misdiagnosis can lead to unnecessary interventions or delayed treatment. To address this challenge, we developed ECMAI-WME, a parallel fusion deep learning model integrating white light endoscopy (WLE) and microprobe endoscopic ultrasonography (EUS), to improve SEL classification and lesion characterization.
Methods: A total of 523 SELs from four hospitals were used to develop serial and parallel fusion AI models. The Parallel Model, demonstrating superior performance, was designated as ECMAI-WME. The model was tested on an external validation cohort (n = 88) and a multicenter test cohort (n = 274). Diagnostic performance, lesion characterization, and clinical decision-making support were comprehensively evaluated and compared with endoscopists' performance.
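The abstract does not describe the architectures of the serial and parallel fusion models, so the following is only a conceptual sketch of the distinction, with stubbed-out feature extractors standing in for the real WLE and EUS image branches (all function names here are hypothetical, not from the study):

```python
from typing import List

def wle_features(image: List[float]) -> List[float]:
    # Stand-in for a learned feature extractor over the white-light
    # endoscopy (WLE) image; returns a small feature vector.
    return [sum(image) / len(image), max(image)]

def eus_features(image: List[float]) -> List[float]:
    # Stand-in for a learned feature extractor over the microprobe
    # EUS image.
    return [min(image), sum(image)]

def serial_model(wle_img: List[float], eus_img: List[float]) -> List[float]:
    # Serial fusion: the first modality's output conditions the
    # second stage, so errors in stage one propagate forward.
    gate = wle_features(wle_img)
    return eus_features([x * gate[0] for x in eus_img])

def parallel_model(wle_img: List[float], eus_img: List[float]) -> List[float]:
    # Parallel fusion: both branches run independently and their
    # feature vectors are concatenated before a shared classifier.
    return wle_features(wle_img) + eus_features(eus_img)
```

In a deep-learning setting the concatenated vector would feed a classification head; the sketch only illustrates why a parallel design lets each modality contribute evidence independently.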
Results: The ECMAI-WME model significantly outperformed endoscopists in diagnostic accuracy (96.35% vs. 63.87-86.13%, p < 0.001) and treatment decision-making accuracy (96.35% vs. 78.47-86.13%, p < 0.001). It achieved 98.72% accuracy in internal validation, 94.32% in external validation, and 96.35% in multicenter testing. For distinguishing gastric GISTs from leiomyomas, the model reached 91.49% sensitivity, 100% specificity, and 96.38% accuracy. Lesion characteristics were identified with a mean accuracy of 94.81% (range: 90.51-99.27%). The model maintained robust performance despite class imbalance, as confirmed by five complementary analyses. Subgroup analyses showed consistent accuracy across lesion size, location, and type (p > 0.05), demonstrating strong generalizability.
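The sensitivity, specificity, and accuracy figures above follow the standard confusion-matrix definitions. A minimal sketch, using hypothetical counts chosen for illustration (not taken from the study):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical GIST-vs-leiomyoma counts for illustration only:
sens, spec, acc = binary_metrics(tp=43, fp=0, tn=90, fn=4)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} accuracy={acc:.2%}")
```

Note that zero false positives yields 100% specificity, as reported for the model's GIST-vs-leiomyoma task, while sensitivity and accuracy remain below 100% whenever some malignant lesions are missed.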
Conclusions: The ECMAI-WME model demonstrates excellent diagnostic performance and robustness in multiclass SEL classification and characterization, supporting its potential for real-time deployment to enhance diagnostic consistency and guide clinical decision-making.
About the journal:
BMC Medical Informatics and Decision Making is an open access journal publishing original peer-reviewed research articles in relation to the design, development, implementation, use, and evaluation of health information technologies and decision-making for human health.