A hybrid multi-instance learning-based identification of gastric adenocarcinoma differentiation on whole-slide images.

IF 2.9 | CAS Tier 4 (Medicine) | JCR Q3 (Engineering, Biomedical)
Mudan Zhang, Xinhuan Sun, Wuchao Li, Yin Cao, Chen Liu, Guilan Tu, Jian Wang, Rongpin Wang
{"title":"A hybrid multi-instance learning-based identification of gastric adenocarcinoma differentiation on whole-slide images.","authors":"Mudan Zhang, Xinhuan Sun, Wuchao Li, Yin Cao, Chen Liu, Guilan Tu, Jian Wang, Rongpin Wang","doi":"10.1186/s12938-025-01407-3","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>To investigate the potential of a hybrid multi-instance learning model (TGMIL) combining Transformer and graph attention networks for classifying gastric adenocarcinoma differentiation on whole-slide images (WSIs) without manual annotation.</p><p><strong>Methods and materials: </strong>A hybrid multi-instance learning model is proposed based on the Transformer and the graph attention network, called TGMIL, to classify the differentiation of gastric adenocarcinoma. A total of 613 WSIs from patients with gastric adenocarcinoma were retrospectively collected from two different hospitals. According to the differentiation of gastric adenocarcinoma, the data were divided into four groups: normal group (n = 254), well differentiation group (n = 166), moderately differentiation group (n = 75), and poorly differentiation group (n = 118). The gold standard of differentiation classification was blindly established by two gastrointestinal pathologists. The WSIs were randomly split into a training dataset consisting of 494 images and a testing dataset consisting of 119 images. Within the training set, the WSI count of the normal, well, moderately, and poorly differential groups was 203, 131, 62, and 98 individuals, respectively. Within the test set, the corresponding WSI count was 51, 35, 13, and 20 individuals.</p><p><strong>Results: </strong>The TGMIL model developed for the differential prediction task exhibited remarkable efficiency when considering sensitivity, specificity, and the area under the curve (AUC) values. We also conducted a comparative analysis to assess the efficiency of five other models, namely MIL, CLAM_SB, CLAM_MB, DSMIL, and TransMIL, in classifying the differentiation of gastric cancer. The TGMIL model achieved a sensitivity of 73.33% and a specificity of 91.11%, with an AUC value of 0.86.</p><p><strong>Conclusions: </strong>The hybrid multi-instance learning model TGMIL could accurately classify the differentiation of gastric adenocarcinoma using WSI without the need for labor-intensive and time-consuming manual annotations, which will improve the efficiency and objectivity of diagnosis.</p>","PeriodicalId":8927,"journal":{"name":"BioMedical Engineering OnLine","volume":"24 1","pages":"79"},"PeriodicalIF":2.9000,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12199488/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BioMedical Engineering OnLine","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1186/s12938-025-01407-3","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Objective: To investigate the potential of a hybrid multi-instance learning model (TGMIL) combining Transformer and graph attention networks for classifying gastric adenocarcinoma differentiation on whole-slide images (WSIs) without manual annotation.

Methods and materials: A hybrid multi-instance learning model, called TGMIL, is proposed based on a Transformer and a graph attention network to classify the differentiation of gastric adenocarcinoma. A total of 613 WSIs from patients with gastric adenocarcinoma were retrospectively collected from two hospitals. According to the differentiation of gastric adenocarcinoma, the data were divided into four groups: normal (n = 254), well differentiated (n = 166), moderately differentiated (n = 75), and poorly differentiated (n = 118). The gold standard for differentiation classification was established by two gastrointestinal pathologists in a blinded fashion. The WSIs were randomly split into a training dataset of 494 images and a testing dataset of 119 images. Within the training set, the normal, well, moderately, and poorly differentiated groups contained 203, 131, 62, and 98 WSIs, respectively; within the test set, the corresponding counts were 51, 35, 13, and 20 WSIs.
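
The abstract does not detail the TGMIL architecture, so the following is only a minimal sketch of the general recipe it names: a Transformer branch and a graph-attention branch that aggregate pre-extracted patch features of one WSI into a slide-level prediction. The class HybridMIL, the feature dimension, the two-layer encoder, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical two-branch MIL classifier (Transformer + graph attention),
# assuming patch-level features were already extracted with a pretrained encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over the patches of one slide."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, x, adj):
        # x: (n_patches, dim); adj: (n_patches, n_patches) 0/1 adjacency,
        # which should include self-loops so every row has a neighbor.
        h = self.proj(x)
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)], dim=-1
        )
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))      # (n, n) raw scores
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)                    # attention weights
        return F.elu(alpha @ h)                                  # updated patch features


class HybridMIL(nn.Module):
    """Sketch of a TGMIL-style model: Transformer branch + graph branch, fused."""

    def __init__(self, dim=512, n_classes=4):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.gat = GraphAttentionLayer(dim)
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, feats, adj):
        # feats: (n_patches, dim) for one WSI; adj built from patch coordinates.
        tokens = torch.cat([self.cls_token, feats.unsqueeze(0)], dim=1)
        t_repr = self.transformer(tokens)[:, 0]                  # class-token summary
        g_repr = self.gat(feats, adj).mean(dim=0, keepdim=True)  # mean-pooled graph branch
        return self.head(torch.cat([t_repr, g_repr], dim=-1))    # slide-level logits
```

In practice the adjacency matrix would be built from the spatial coordinates of the patches (for example, k-nearest neighbors on the slide), and the dense pairwise attention shown here would be replaced by a sparse implementation for slides with many thousands of patches.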

Results: The TGMIL model developed for the differentiation-prediction task performed well in terms of sensitivity, specificity, and the area under the curve (AUC). We also conducted a comparative analysis of five other models (MIL, CLAM_SB, CLAM_MB, DSMIL, and TransMIL) for classifying the differentiation of gastric cancer. The TGMIL model achieved a sensitivity of 73.33% and a specificity of 91.11%, with an AUC of 0.86.
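
For reference, slide-level sensitivity, specificity, and AUC on a four-class task can be computed as in the sketch below. The macro, one-vs-rest averaging and the randomly generated predictions are assumptions for illustration only; the abstract does not state how the reported 73.33% / 91.11% / 0.86 were aggregated.

```python
# Illustrative computation of macro-averaged sensitivity, specificity, and
# one-vs-rest AUC for a 4-class slide-level classifier (not the authors' code).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score


def macro_sensitivity_specificity(y_true, y_pred, n_classes=4):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    sens, spec = [], []
    for c in range(n_classes):
        tp = cm[c, c]
        fn = cm[c].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = cm.sum() - tp - fn - fp
        sens.append(tp / (tp + fn))   # per-class recall (sensitivity)
        spec.append(tn / (tn + fp))   # per-class specificity
    return float(np.mean(sens)), float(np.mean(spec))


# Placeholder predictions on a test set sized like the paper's
# (51 normal, 35 well, 13 moderately, 20 poorly differentiated slides).
rng = np.random.default_rng(0)
y_true = np.repeat([0, 1, 2, 3], [51, 35, 13, 20])
probs = rng.dirichlet(np.ones(4), size=len(y_true))   # stand-in model outputs
y_pred = probs.argmax(axis=1)

sens, spec = macro_sensitivity_specificity(y_true, y_pred)
auc = roc_auc_score(y_true, probs, multi_class="ovr", average="macro")
print(f"sensitivity={sens:.4f}, specificity={spec:.4f}, AUC={auc:.4f}")
```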

Conclusions: The hybrid multi-instance learning model TGMIL can accurately classify the differentiation of gastric adenocarcinoma from WSIs without labor-intensive and time-consuming manual annotations, which can improve the efficiency and objectivity of diagnosis.

Source journal
BioMedical Engineering OnLine (Engineering & Technology; Engineering: Biomedical)
CiteScore: 6.70
Self-citation rate: 2.60%
Annual articles: 79
Review time: 1 month
Journal introduction: BioMedical Engineering OnLine is an open access, peer-reviewed journal that is dedicated to publishing research in all areas of biomedical engineering. BioMedical Engineering OnLine is aimed at readers and authors throughout the world, with an interest in using tools of the physical and data sciences and techniques in engineering to understand and solve problems in the biological and medical sciences. Topical areas include, but are not limited to: Bioinformatics; Bioinstrumentation; Biomechanics; Biomedical Devices & Instrumentation; Biomedical Signal Processing; Healthcare Information Systems; Human Dynamics; Neural Engineering; Rehabilitation Engineering; Biomaterials; Biomedical Imaging & Image Processing; BioMEMS and On-Chip Devices; Bio-Micro/Nano Technologies; Biomolecular Engineering; Biosensors; Cardiovascular Systems Engineering; Cellular Engineering; Clinical Engineering; Computational Biology; Drug Delivery Technologies; Modeling Methodologies; Nanomaterials and Nanotechnology in Biomedicine; Respiratory Systems Engineering; Robotics in Medicine; Systems and Synthetic Biology; Systems Biology; Telemedicine/Smartphone Applications in Medicine; Therapeutic Systems, Devices and Technologies; Tissue Engineering