Medical image analysis: Latest Articles

A new time-decay radiomics integrated network (TRINet) for breast cancer risk prediction
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103829 | Pub Date: 2025-10-01 | DOI: 10.1016/j.media.2025.103829
Hong Hui Yeoh, Fredrik Strand, Raphaël Phan, Kartini Rahmat, Maxine Tan

Abstract: To facilitate early detection of breast cancer, there is a need to develop risk prediction schemes that can prescribe personalized screening mammography regimens for women. In this study, we propose a new deep learning architecture called TRINet that implements time-decay attention to focus on recent mammographic screenings, since current models do not account for the greater relevance of newer images. We integrate radiomic features with an Attention-based Multiple Instance Learning (AMIL) framework to weigh and combine multiple views for better risk estimation. In addition, we introduce a continual learning approach with a new label assignment strategy based on bilateral asymmetry, making the model more adaptable to asymmetrical cancer indicators. Finally, we add a time-embedded additive hazard layer to perform dynamic, multi-year risk forecasting based on individualized screening intervals. Our experiments used two public datasets: 8528 patients from the American EMBED dataset and 8723 patients from the Swedish CSAW dataset. Evaluation on the EMBED test set shows that our approach performs comparably with state-of-the-art models, achieving AUC scores of 0.851, 0.811, 0.796, 0.793, and 0.789 for 1- through 5-year prediction intervals, respectively. Our results underscore the importance of integrating temporal attention, radiomic features, time embeddings, bilateral asymmetry, and continual learning strategies, providing a more adaptive and precise tool for breast cancer risk prediction.

Citations: 0
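The time-decay attention described above privileges recent screens over older ones. As a minimal illustration of that weighting idea only (TRINet's attention is learned; the fixed exponential form and the decay rate below are assumptions), pooling per-exam embeddings might look like:

```python
import numpy as np

def time_decay_attention(feats, exam_ages, decay=0.5):
    """Weight per-exam feature vectors so newer screens dominate (sketch).

    feats:     (n_exams, d) array of per-mammogram embeddings.
    exam_ages: years elapsed since each exam (0 = most recent).
    decay:     hypothetical decay rate; larger = faster forgetting.
    """
    w = np.exp(-decay * np.asarray(exam_ages, dtype=float))
    w /= w.sum()          # normalize to a convex combination
    return w @ feats      # (d,) pooled representation

feats = np.random.rand(4, 128)   # 4 prior screens, 128-d features each
pooled = time_decay_attention(feats, exam_ages=[0, 1, 2, 4])
print(pooled.shape)              # (128,)
```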
S2CAC: Semi-supervised coronary artery calcium segmentation via scoring-driven consistency and negative sample boosting
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103823 | Pub Date: 2025-09-30 | DOI: 10.1016/j.media.2025.103823
Jinkui Hao, Nilay S. Shah, Bo Zhou

Abstract: Coronary artery calcium (CAC) scoring plays a pivotal role in assessing the risk of cardiovascular disease events to guide the intensity of preventive efforts. Accurate CAC scoring from gated cardiac computed tomography (CT) relies on precise segmentation of calcification. However, the small size, irregular shape, and sparse distribution of calcifications in 3D volumes present significant challenges for automated CAC assessment. Training reliable automatic segmentation models typically requires large-scale annotated datasets, yet the annotation process is resource-intensive and requires highly trained specialists. To address this limitation, we propose S²CAC, a semi-supervised learning framework for CAC segmentation that achieves robust performance with minimal labeled data. First, we design a dual-path hybrid transformer architecture that jointly optimizes pixel-level segmentation and volume-level scoring through feature symbiosis, minimizing the information loss caused by down-sampling operations and enhancing the model's ability to preserve fine-grained calcification details. Second, we introduce a scoring-driven consistency mechanism that aligns pixel-level segmentation with volume-level CAC scores through differentiable score estimation, effectively leveraging unlabeled data. Third, we address the challenge of incorporating negative samples (cases without CAC) into training. Directly using these samples risks model collapse, as the sparse nature of CAC regions may lead the model to predict all-zero maps. To mitigate this, we design a dynamic weighted loss function that integrates negative samples into training while preserving the model's sensitivity to calcification, effectively reducing over-segmentation and enhancing overall performance. We validate our framework on two public non-contrast gated CT datasets, achieving state-of-the-art performance over previous baseline methods. Additionally, the Agatston scores derived from our segmentation maps show strong concordance with manual annotations. These results highlight the potential of our approach to reduce dependence on annotated data while maintaining high accuracy in CAC scoring. Code and trained model weights are available at https://github.com/JinkuiH/S2CAC.

Citations: 0
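The scoring-driven consistency mechanism hinges on a score estimate that is differentiable with respect to the segmentation map. As a rough per-voxel surrogate of the Agatston weighting (a sketch only: the clinical score uses per-lesion HU maxima and slice-wise areas, and the paper's estimator may differ), one could write:

```python
import torch

def soft_agatston(prob, hu, voxel_area_mm2=0.25,
                  thresholds=(130., 200., 300., 400.), tau=10.0):
    """Differentiable Agatston-like surrogate (illustrative assumption).

    prob: (D, H, W) predicted calcification probabilities in [0, 1].
    hu:   (D, H, W) CT intensities in Hounsfield units.
    Sigmoid steps over the HU thresholds smooth the 1/2/3/4 density weight.
    """
    weight = sum(torch.sigmoid((hu - t) / tau) for t in thresholds)
    return (prob * weight).sum() * voxel_area_mm2

prob = torch.rand(16, 64, 64, requires_grad=True)
hu = torch.rand(16, 64, 64) * 800
score = soft_agatston(prob, hu)
score.backward()   # gradients flow back into the segmentation map
```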
Automatic prediction of depth of invasion in oral tongue squamous cell carcinoma using a multimodal regression network fusing prior text and anatomical knowledge
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103824 | Pub Date: 2025-09-30 | DOI: 10.1016/j.media.2025.103824
Jiangchang Xu, Weiqing Tang, Pheng-Ann Heng, Xiaojun Chen

Abstract: Oral tongue squamous cell carcinoma (OTSCC) is one of the most common malignant tumors in oral cancer. Its depth of invasion (DOI) serves as a crucial indicator for evaluating tumor invasiveness, predicting the risk of lymph node metastasis, and assessing patient prognosis. Compared with invasive measurement on pathology, DOI measurement on magnetic resonance imaging (MRI) is a non-invasive approach that can provide a timely reference for preoperative surgical planning. However, manual MRI measurement has several limitations, including a cumbersome process, strong subjectivity, high experience requirements, and poor stability. To address these issues, we propose an automatic prediction algorithm for OTSCC DOI using a multimodal regression network that fuses prior text and anatomical knowledge. First, OTSCC is automatically segmented with 3D nnUNet on multimodal MRI. Second, an automatic DOI measurement method that combines the detection of basement-membrane landmarks with anatomical relationships produces 3D landmark heatmaps and prior DOI text. These elements are then fused into the proposed multimodal regression network to predict OTSCC DOI automatically. Experimental results demonstrate that our method achieves a mean absolute error (MAE) of 2.11 mm, a root mean square error (RMSE) of 2.97 mm, and a mean squared error (MSE) of 8.81 mm², markedly better than several state-of-the-art (SOTA) methods. Correlation with the pathological ground truth reaches a Pearson correlation coefficient (PCC) of 0.869, indicating high consistency. Additionally, our method outperforms the manual measurements of a resident doctor and a radiologist with six years of clinical experience, suggesting good prospects for clinical application in OTSCC DOI prediction. The source code is available at https://github.com/Lambater/Depth-of-invasion-prediction.

Citations: 0
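The measurement step combines basement-membrane landmarks with anatomical relationships. As a purely geometric sketch of how a DOI value can be read off such landmarks (the paper's predictor is a learned multimodal regressor; this plane-fit reading is an assumption for illustration), one could fit a plane to the landmarks and take the deepest tumor point's distance to it:

```python
import numpy as np

def doi_from_landmarks(bm_points, deepest_tumor_pt):
    """Distance from the deepest tumor point to a plane fitted through
    basement-membrane landmarks (geometric sketch, not the paper's network).

    bm_points:        (N, 3) landmark coordinates in mm, N >= 3.
    deepest_tumor_pt: (3,) coordinate of the deepest tumor voxel.
    """
    c = bm_points.mean(axis=0)
    # plane normal = smallest singular vector of the centered landmarks
    _, _, vt = np.linalg.svd(bm_points - c)
    normal = vt[-1]
    return abs(np.dot(deepest_tumor_pt - c, normal))

bm = np.random.rand(20, 3) * np.array([30.0, 30.0, 1.0])  # roughly planar
print(doi_from_landmarks(bm, np.array([15.0, 15.0, -6.0])))  # ~6.5 mm
```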
Deep association analysis framework with multi-modal attention fusion for brain imaging genetics
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103827 | Pub Date: 2025-09-29 | DOI: 10.1016/j.media.2025.103827
Shuang-Qing Wang, Cui-Na Jiao, Ying-Lian Gao, Xin-Chun Cui, Yan-Li Wang, Jin-Xing Liu

Abstract: Brain imaging genetics is a crucial technique that integrates analysis of genetic variation and imaging quantitative traits to provide new insights into the genetic mechanisms and phenotypic characteristics of the brain. With the advancement of medical imaging technology, correlation analysis between multi-modal imaging and genetic data has gradually gained widespread attention. However, existing methods usually employ simple concatenation to combine multi-modal imaging features, overlooking the interaction and complementary information between modalities. Moreover, traditional correlation analysis of phenotypes and genotypes leaves the complex intrinsic associations between them incompletely explored. Therefore, in this paper, a deep association analysis framework with multi-modal attention fusion (DAAMAF) is proposed for the early diagnosis of Alzheimer's disease (AD). First, multi-modal feature representations are extracted from the imaging genetics data to achieve nonlinear mapping and obtain enriched information. Then, we design a cross-modal attention network to learn the interactions between multi-modal imaging features, better exploiting their complementary roles in disease diagnosis. Genetic information is mapped onto the imaging representation through a generative network to capture the complicated intrinsic associations between neuroimaging and genetics. Finally, a diagnostic module performs performance analysis and detects disease-related biomarkers. Experiments on the AD Neuroimaging Initiative dataset demonstrate that DAAMAF achieves superior performance and discovers biomarkers associated with AD, promising a significant contribution to understanding the pathogenesis of the disease. The code is publicly available at https://github.com/Yeah123456ye/DAAMAF.

Citations: 0
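A minimal sketch of the cross-modal attention idea, in which features of one imaging modality query another and the result is fused residually (dimensions, head count, and the residual LayerNorm are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """One modality attends to another; residual fusion (sketch)."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_mod, context_mod):
        fused, _ = self.attn(query_mod, context_mod, context_mod)
        return self.norm(query_mod + fused)   # residual fusion

mri = torch.randn(2, 90, 64)   # e.g., 90 ROI features per subject
pet = torch.randn(2, 90, 64)
out = CrossModalAttention()(mri, pet)
print(out.shape)               # torch.Size([2, 90, 64])
```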
Cycle-constrained adversarial denoising convolutional network for PET image denoising: Multi-dimensional validation on large datasets with reader study and real low-dose data
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103826 | Pub Date: 2025-09-29 | DOI: 10.1016/j.media.2025.103826
Yucun Hou, Fenglin Zhan, Jun Liu, Xin Cheng, Chenxi Li, Ziquan Yuan, Runze Liao, Haihao Wang, Jianlang Hua, Siqi Li, Jing Wu, Jigang Yang, Jianyong Jiang

Abstract: Positron emission tomography (PET) is a critical tool for diagnosing tumors and neurological disorders but poses radiation risks to patients, particularly to sensitive populations. Reducing the injected dose mitigates this risk but often compromises image quality. To reconstruct full-dose-quality images from low-dose scans, we propose a Cycle-constrained Adversarial Denoising Convolutional Network (Cycle-DCN). The model integrates a noise predictor, two discriminators, and a consistency network, and is optimized with a combination of supervised loss, adversarial loss, cycle consistency loss, identity loss, and neighboring Structural Similarity Index (SSIM) loss. Experiments were conducted on a large dataset of raw PET brain data from 1224 patients, acquired on a Siemens Biograph Vision PET/CT scanner; each patient underwent a 120-second brain scan. To simulate low-dose PET conditions, images were reconstructed from shortened scan durations of 30, 12, and 5 s, corresponding to 1/4, 1/10, and 1/24 of the full-dose acquisition, respectively, using custom-developed GPU-based image reconstruction software. The results show that Cycle-DCN significantly improves average Peak Signal-to-Noise Ratio (PSNR), SSIM, and Normalized Root Mean Square Error (NRMSE) across the three dose levels, with improvements of up to 56%, 35%, and 71%, respectively. It also achieves contrast-to-noise ratio (CNR) and Edge Preservation Index (EPI) values that closely match full-dose images, preserving image details, tumor shape, and contrast while resolving blurred edges. Reader studies indicated that images restored by Cycle-DCN consistently received the highest ratings from nuclear medicine physicians, highlighting strong clinical relevance. A separate set of 50 whole-body PET datasets acquired on the same Biograph Vision scanner, along with an independent set of 245 whole-body pediatric PET datasets acquired on a Siemens Biograph mCT PET/CT scanner at Beijing Friendship Hospital, further validates the generalizability of the proposed model across imaging centers, scanner types, scanning modes, patient demographics, and anatomical regions.

Citations: 0
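The objective combines five terms. A sketch of how such a composite loss might be assembled (the weights are assumptions, and the identity and neighboring-SSIM terms are omitted here for brevity; the paper's exact definitions may differ):

```python
import torch
import torch.nn.functional as F

def cycle_dcn_loss(denoised, full_dose, recon_low, low_dose,
                   d_fake_logits, lambdas=(1.0, 0.1, 10.0)):
    """Composite objective in the spirit of Cycle-DCN (illustrative sketch).

    denoised:      generator output for a low-dose input.
    full_dose:     paired full-dose target.
    recon_low:     low -> full -> low reconstruction for the cycle term.
    d_fake_logits: discriminator logits on the denoised image.
    """
    l_sup, l_adv, l_cyc = lambdas
    supervised = F.l1_loss(denoised, full_dose)            # paired term
    adversarial = F.binary_cross_entropy_with_logits(      # fool the D
        d_fake_logits, torch.ones_like(d_fake_logits))
    cycle = F.l1_loss(recon_low, low_dose)                 # cycle constraint
    return l_sup * supervised + l_adv * adversarial + l_cyc * cycle
```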
GAGM: Geometry-aware graph matching framework for weakly supervised gyral hinge correspondence
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103820 | Pub Date: 2025-09-27 | DOI: 10.1016/j.media.2025.103820
Zhibin He, Wuyang Li, Tianming Liu, Xiang Li, Junwei Han, Tuo Zhang, Yixuan Yuan

Abstract: Precise alignment of inter-subject brain landmarks, such as gyral hinges (GHs), would enhance the correspondence of brain function across subjects, advancing our understanding of the brain's anatomy-function relationship and brain mechanisms. Recent methods mainly identify GH correspondences using point-to-point ground truth. However, labeling point-to-point GH correspondences between subjects for the entire brain is laborious and time-consuming, given the presence of over 400 GHs per brain. To remedy this, we propose a Geometry-Aware Graph Matching framework, dubbed GAGM, for weakly supervised gyral hinge correspondence based solely on brain prior information. Specifically, we propose a Shape-Aware Graph Establishment (SAGE) module to ensure a comprehensive representation of GH geometry. SAGE constructs a structured graph incorporating GH coordinates, shapes, and inter-GH relationships to model the entire brain's GHs and learns the spatial relations between them. Moreover, to reduce optimization difficulty, a Region-Aware Graph Matching (RAGM) module is proposed for multi-scale matching. RAGM leverages prior knowledge of the multi-scale relationship between GHs and brain regions and incorporates inter-scale semantic consistency to ensure both intra-region consistency and inter-region variability of GH features, ultimately achieving accurate GH matching. Extensive experiments on two public datasets, HCP and CHCP, demonstrate the superiority of our method over state-of-the-art methods. Our code: https://github.com/ZhibinHe/GAGM.

Citations: 0
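Matching 400+ GHs per brain is typically relaxed to a soft assignment problem. As a generic sketch of one common relaxation (Sinkhorn normalization of a feature-similarity matrix; the paper's actual solver and supervision may differ), correspondences could be computed as:

```python
import torch
import torch.nn.functional as F

def sinkhorn_match(sim, n_iters=20, tau=0.05):
    """Soft correspondence via Sinkhorn normalization (generic sketch).

    sim: (N, M) similarities between source/target GH embeddings.
    Returns a near doubly-stochastic assignment matrix.
    """
    log_p = sim / tau
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # rows
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # cols
    return log_p.exp()

src = F.normalize(torch.randn(400, 32), dim=1)   # 400 GHs, 32-d features
tgt = F.normalize(torch.randn(400, 32), dim=1)
P = sinkhorn_match(src @ tgt.T)
print(P.shape, float(P.sum(dim=0)[0]))   # columns sum to ~1
```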
Knowledge distillation and teacher–student learning in medical imaging: Comprehensive overview, pivotal role, and future directions
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103819 | Pub Date: 2025-09-25 | DOI: 10.1016/j.media.2025.103819
Xiang Li, Like Li, Minglei Li, Pengfei Yan, Ting Feng, Hao Luo, Yong Zhao, Shen Yin

Abstract: Knowledge Distillation (KD) is a technique for transferring knowledge from a complex model to a simplified one. It has been widely used in natural language processing and computer vision, achieving advanced results. Recently, research on KD in medical image analysis has grown rapidly. In combination with the medical field, the definition of knowledge has been further expanded, and KD's role is no longer limited to simplifying models. This paper comprehensively reviews the development and application of KD in medical imaging. Specifically, we first introduce the basic principles, explaining the definition of knowledge and the classical teacher–student network framework. Then, we present research progress in medical image classification, segmentation, detection, reconstruction, registration, radiology report generation, privacy protection, and other application scenarios, organized by the role KD plays in each. We summarize eight main roles of KD techniques in medical image analysis, including model compression, semi-supervised learning, weakly supervised learning, and class balancing, and analyze their performance across all application scenarios. Finally, we discuss the challenges in this field and propose potential solutions. KD is still developing rapidly in medical imaging; we identify five potential development directions and research hotspots. A comprehensive literature list for this survey is available at https://github.com/XiangQA-Q/KD-in-MIA.

Citations: 0
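For orientation, the classical teacher–student objective this survey builds on is Hinton-style soft-label distillation: a KL term between temperature-softened teacher and student outputs plus the usual hard-label cross-entropy. A standard formulation (the temperature and mixing weight below are typical values, not prescriptions):

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Classic soft-label knowledge distillation (Hinton et al.)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 5)                    # student logits, 5 classes
t = torch.randn(8, 5)                    # frozen teacher logits
y = torch.randint(0, 5, (8,))
print(kd_loss(s, t, y))
```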
HSFSurv: A hybrid supervision framework at individual and feature levels for multimodal cancer survival analysis
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103810 | Pub Date: 2025-09-24 | DOI: 10.1016/j.media.2025.103810
Bangkang Fu, Junjie He, Xiaoli Zhang, Yunsong Peng, Zhuxu Zhang, Qi Tang, Xinfeng Liu, Ying Cao, Rongpin Wang

Abstract: Multimodal data play a significant role in survival analysis: pathological images provide morphological information about tumors, while genomic data offer molecular insights. Leveraging multimodal data for survival analysis has become a prominent research topic. However, data heterogeneity poses significant challenges to multimodal integration. While existing methods consider interactions among features from different modalities, the heterogeneity of feature spaces often hinders performance in survival analysis. In this paper, we propose a hybrid supervision framework for survival analysis (HSFSurv) based on multimodal feature decomposition. The framework uses a multimodal feature decomposition module to partition features into highly correlated and modality-specific components, enabling targeted feature fusion in subsequent steps. To alleviate feature-space heterogeneity, we design an individual-level uncertainty minimization (UMI) module that ensures consistency in prediction outcomes. Additionally, we develop a feature-level multimodal cohort contrastive learning (MCF) module to enforce consistency across features, and introduce a probabilistic decay detection module with a supervisory signal to guide the contrastive learning process. These modules are jointly trained to project multimodal features into a shared latent vector space. Finally, we fine-tune the framework on survival analysis tasks to produce prognostic predictions. Experimental results on five cancer datasets demonstrate the state-of-the-art performance of the proposed multimodal fusion framework in survival analysis.

Citations: 0
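A sketch of the feature-decomposition idea: encourage the shared components of two modalities to agree while keeping each modality-specific component distinct from its shared counterpart. The cosine-based penalties below are illustrative assumptions, not HSFSurv's exact objectives:

```python
import torch
import torch.nn.functional as F

def decomposition_penalty(shared_a, shared_b, specific_a, specific_b):
    """Pull shared components together, separate specific ones (sketch).

    All inputs: (batch, d) feature tensors from two modalities
    (e.g., pathology and genomics) after decomposition.
    """
    # shared parts of the two modalities should align
    align = 1 - F.cosine_similarity(shared_a, shared_b, dim=-1).mean()
    # each specific part should be decorrelated from its shared part
    sep = (F.cosine_similarity(shared_a, specific_a, dim=-1).abs().mean()
           + F.cosine_similarity(shared_b, specific_b, dim=-1).abs().mean())
    return align + sep

parts = [torch.randn(16, 64) for _ in range(4)]
print(decomposition_penalty(*parts))
```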
CaliDiff: Multi-rater annotation calibrating diffusion probabilistic model towards medical image segmentation
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103812 | Pub Date: 2025-09-23 | DOI: 10.1016/j.media.2025.103812
Junxia Wang, Jing Wang, Jun Ma, Baijing Chen, Zeyuan Chen, Yuanjie Zheng

Abstract: Medical image segmentation is critical for accurate diagnostics and effective treatment planning. Traditional multi-rater labeling strategies, while integrating consensus from multiple experts, often fail to fully capture the unique insights of individual raters. Moreover, deep discriminative models that aggregate such expert labels typically embed the raters' inherent biases into the segmentation results. To address these issues, we introduce CaliDiff, a novel multi-rater annotation calibrating diffusion probabilistic model. The model approximates the joint probability distribution of multiple expert annotations and their corresponding images, fully leveraging diverse expert knowledge while actively refining the annotations to closely approximate the true underlying distribution. CaliDiff operates through a structured multi-stage process: it begins with a shared-parameter inverse diffusion to normalize initial expert biases, followed by Expertness Consistent Alignment to minimize variance among annotations and enhance consistency in high-confidence areas. Additionally, we incorporate a Committee-based Endogenous Knowledge Learning mechanism that uses adversarial soft supervision to simulate a reliable pseudo-ground truth, integrating Cross-Expert Fusion and Implicit Consensus Inference. Extensive experiments on various medical image segmentation datasets show that CaliDiff not only significantly improves annotation calibration but also achieves state-of-the-art performance, enhancing the reliability and objectivity of medical diagnostics.

Citations: 0
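The committee mechanism simulates a reliable pseudo-ground truth from multiple raters. As a crude, non-learned stand-in for that idea (CaliDiff's mechanism is adversarial and learned; this variance-weighted average is only an assumption for illustration):

```python
import numpy as np

def consensus_pseudo_gt(rater_masks):
    """Confidence-weighted consensus over multi-rater masks (sketch).

    rater_masks: (R, H, W) binary annotations from R experts.
    Returns a soft map where high-agreement (low-variance) pixels dominate.
    """
    mean = rater_masks.mean(axis=0)
    var = rater_masks.var(axis=0)
    confidence = 1.0 - var / (var.max() + 1e-8)   # 1 where raters agree
    return confidence * mean

masks = (np.random.rand(4, 64, 64) > 0.5).astype(float)
pseudo_gt = consensus_pseudo_gt(masks)
print(pseudo_gt.shape, pseudo_gt.min(), pseudo_gt.max())
```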
Improving the performance of medical image segmentation with instructive feature learning
IF 11.8 | CAS Q1 (Medicine)
Medical image analysis, vol. 107, Article 103818 | Pub Date: 2025-09-23 | DOI: 10.1016/j.media.2025.103818
Duwei Dai, Caixia Dong, Haolin Huang, Fan Liu, Zongfang Li, Songhua Xu

Abstract: Although deep learning models have greatly automated medical image segmentation, they still struggle with complex samples, especially those with irregular shapes, notable scale variations, or blurred boundaries. One key reason is that existing methods often overlook the importance of identifying and enhancing the instructive features tailored to various targets, impeding optimal feature extraction and transmission. To address these issues, we propose two innovative modules: an Instructive Feature Enhancement Module (IFEM) and an Instructive Feature Integration Module (IFIM). IFEM synergistically captures rich detailed information and local contextual cues within a unified convolutional module through flexible resolution scaling and extensive information interplay, enhancing the network's feature extraction capability. Meanwhile, IFIM explicitly guides the fusion of encoding–decoding features to create more discriminative representations through sensitive intermediate predictions and omnipresent attention operations, refining contextual feature transmission. The two modules can be seamlessly integrated into existing segmentation frameworks, significantly boosting their performance. Furthermore, to achieve superior performance with substantially reduced computational demands, we develop an effective and efficient segmentation framework (EESF). Unlike traditional U-Nets, EESF adopts a shallower, wider asymmetric architecture that better balances fine-grained information retention and high-order semantic abstraction with minimal learning parameters. Ultimately, by incorporating IFEM and IFIM into EESF, we construct EE-Net, a high-performance, low-resource segmentation network. Extensive experiments across six diverse segmentation tasks consistently demonstrate that EE-Net outperforms a wide range of competing methods in segmentation performance, computational efficiency, and learning ability. The code is available at https://github.com/duweidai/EE-Net.

Citations: 0
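A toy rendering of the resolution-scaling intuition behind IFEM: one branch preserves full-resolution detail while a pooled branch gathers local context, and the two are fused residually. Channel counts, kernel sizes, and the fusion layout are assumptions, not the published module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IFEMSketch(nn.Module):
    """Two-resolution feature enhancement with residual fusion (sketch)."""
    def __init__(self, ch=32):
        super().__init__()
        self.detail = nn.Conv2d(ch, ch, 3, padding=1)   # full-res branch
        self.context = nn.Conv2d(ch, ch, 3, padding=1)  # half-res branch
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):
        d = F.relu(self.detail(x))
        # pool, convolve, then upsample back to the input resolution
        c = F.relu(self.context(F.avg_pool2d(x, 2)))
        c = F.interpolate(c, size=x.shape[-2:], mode="bilinear",
                          align_corners=False)
        return self.fuse(torch.cat([d, c], dim=1)) + x   # residual output

x = torch.randn(1, 32, 64, 64)
print(IFEMSketch()(x).shape)   # torch.Size([1, 32, 64, 64])
```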