Title: Uncertainty-driven hybrid-view adaptive learning for fully automated uterine leiomyosarcoma diagnosis
Authors: Qi Li, Jingxian Wu, Xiyu Liu, Dengwang Li, Jie Xue
Journal: Medical Image Analysis, Volume 105, Article 103692 (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 10.7)
DOI: 10.1016/j.media.2025.103692
Published: 2025-06-28
URL: https://www.sciencedirect.com/science/article/pii/S1361841525002397
Citations: 0
Abstract
Uterine leiomyosarcoma (ULMS) is a rare malignant tumor of the smooth muscle of the uterine wall that is aggressive and has a poor prognosis. Accurately and automatically classifying histopathological whole-slide images (WSIs) is critical for clinically diagnosing ULMS. However, few works have investigated automated ULMS diagnosis methods, owing to the tumor's high degree of concealment and phenotypic diversity. In this study, we present a novel uncertainty-driven hybrid-view adaptive learning (UHAL) framework that efficiently captures the distinct features of ULMS by mining pivotal biomarkers at the cell level and minimizing redundancy across hybrid views under an uncertainty discrimination mechanism, ultimately ensuring reliable diagnoses of ULMS WSIs. Specifically, hybrid-view adaptive learning incorporates three modules: phenotype-driven patch self-optimization, which selects salient patch features; unsupervised inter-bags adaptive learning, which filters out redundant information; and compensatory inner-level adaptive learning, which further refines tumor features. Furthermore, the uncertainty discrimination mechanism enhances reliability by assigning quantitative confidence coefficients to predictions under the Dirichlet distribution, leveraging uncertainty to update the features and obtain accurate diagnoses. Experimental results on the ULMS dataset indicate that the proposed framework outperforms ten state-of-the-art methods. Extensive experiments on the TCGA-Esca, TCGA-Lung, and Spinal infection datasets further validate the robustness and generalizability of the UHAL framework.
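The abstract's uncertainty discrimination mechanism assigns confidence coefficients to predictions under a Dirichlet distribution. The paper's exact formulation is not given here, but the standard evidential-learning recipe (as in subjective logic) converts per-class evidence into Dirichlet parameters, belief masses, and an overall uncertainty mass. The sketch below illustrates that general idea only; the function name and the example evidence values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Illustrative sketch: derive belief masses and an uncertainty
    coefficient from non-negative per-class evidence, following the
    common evidential-learning convention alpha = evidence + 1."""
    alpha = np.asarray(evidence, dtype=float) + 1.0  # Dirichlet parameters
    S = alpha.sum()                                  # Dirichlet strength
    K = alpha.size                                   # number of classes
    belief = (alpha - 1.0) / S   # per-class belief mass
    u = K / S                    # overall uncertainty mass
    prob = alpha / S             # expected class probabilities
    # Belief masses and uncertainty sum to one by construction.
    return belief, u, prob

# Strong evidence for class 0 yields low uncertainty;
# near-zero evidence everywhere would yield u close to 1.
belief, u, prob = dirichlet_uncertainty([9.0, 1.0])
```

Under this convention, a WSI-level prediction with a large uncertainty mass `u` can be flagged as unreliable or used to re-weight features, which matches the role the abstract describes for the confidence coefficients.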
Journal introduction:
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.