Medical image analysis — Latest Articles

Deep implicit optimization enables robust learnable features for deformable image registration
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-05-02 · DOI: 10.1016/j.media.2025.103577
Rohit Jena, Pratik Chaudhari, James C. Gee
Abstract: Deep Learning in Image Registration (DLIR) methods have been tremendously successful in image registration due to their speed and ability to incorporate weak label supervision at training time. However, existing DLIR methods forego many of the benefits and invariances of optimization methods. The lack of a task-specific inductive bias in DLIR methods leads to suboptimal performance, especially in the presence of domain shift. Our method aims to bridge this gap between statistical learning and optimization by explicitly incorporating optimization as a layer in a deep network. A deep network is trained to predict multi-scale dense feature images that are registered using a black-box iterative optimization solver. This optimal warp is then used to minimize image and label alignment errors. By implicitly differentiating end-to-end through an iterative optimization solver, we explicitly exploit invariances of the correspondence-matching problem induced by the optimization, while learning registration- and label-aware features and guaranteeing that the warp is a local minimum of the registration objective in the feature space. Our framework shows excellent performance on in-domain datasets and is agnostic to domain shifts such as anisotropy and varying intensity profiles. For the first time, our method allows switching between arbitrary transformation representations (free-form to diffeomorphic) at test time with zero retraining. End-to-end feature learning also facilitates interpretability of features and arbitrary test-time regularization, which is not possible with existing DLIR methods.
Citations: 0
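By implicitly differentiating through the inner solver, gradients of the optimal warp with respect to the learned parameters can be obtained without unrolling the iterations. A minimal numpy sketch of that idea on a hypothetical one-dimensional inner problem (a toy quadratic, not the paper's registration objective):

```python
import numpy as np

def inner_grad(w, theta, lam):
    # ∂f/∂w for the toy inner objective f(w, θ) = 0.5 (w − θ)² + 0.5 λ w²
    return (w - theta) + lam * w

def solve_inner(theta, lam, lr=0.5, steps=200):
    # black-box iterative solver for w*(θ) = argmin_w f(w, θ)
    w = 0.0
    for _ in range(steps):
        w -= lr * inner_grad(w, theta, lam)
    return w

def implicit_dw_dtheta(lam):
    # implicit function theorem at the optimum: dw*/dθ = −H⁻¹ · ∂²f/∂w∂θ,
    # independent of how many solver iterations were run
    H = 1.0 + lam          # ∂²f/∂w²
    cross = -1.0           # ∂²f/∂w∂θ
    return -cross / H

theta, lam = 2.0, 0.5
w_star = solve_inner(theta, lam)
analytic = theta / (1.0 + lam)   # closed form for this toy problem
```

The gradient comes out of the optimality condition alone, which is why the outer network can treat the solver as a black box.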
Structural uncertainty estimation for medical image segmentation
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-04-28 · DOI: 10.1016/j.media.2025.103602
Bing Yang, Xiaoqing Zhang, Huihong Zhang, Sanqian Li, Risa Higashita, Jiang Liu
Abstract: Precise segmentation and uncertainty estimation are crucial for error identification and correction in medical diagnostic assistance. Existing methods mainly rely on pixel-wise uncertainty estimation; they (1) neglect the global context, leading to erroneous uncertainty indications, and (2) introduce attention interference, wasting extensive detail and potentially confusing interpretation. In this paper, we propose a novel structural uncertainty estimation method based on Convolutional Neural Networks (CNN) and Active Shape Models (ASM), named SU-ASM, which incorporates global shape information to provide precise segmentation and uncertainty estimation. SU-ASM consists of three components. First, multi-task generation provides multiple outcomes to assist ASM initialization and shape optimization via a multi-task learning module. Second, information fusion creates a Combined Boundary Probability (CBP) along with a rapid shape-initialization algorithm, Key Landmark Template Matching (KLTM), to enhance boundary reliability and select appropriate shape templates. Finally, shape model fitting matches multiple shape templates to the CBP while maintaining their intrinsic shape characteristics. The fitted shapes yield segmentation results and structural uncertainty estimates. SU-ASM has been validated on a cardiac ultrasound dataset, a ciliary muscle dataset of the anterior eye segment, and a chest X-ray dataset. It outperforms state-of-the-art methods in both segmentation and uncertainty estimation.
Citations: 0
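SU-ASM fits shape templates to boundary evidence while preserving their intrinsic shape. One standard building block of such ASM-style fitting is a least-squares similarity alignment (Procrustes analysis). A self-contained numpy sketch of that step, not the paper's KLTM algorithm:

```python
import numpy as np

def fit_similarity(template, target):
    """Least-squares similarity transform (isotropic scale + rotation +
    translation) mapping template landmarks onto target landmarks.
    Rows are 2-D points; reflection handling is omitted for brevity."""
    mu_t, mu_g = template.mean(axis=0), target.mean(axis=0)
    A, B = template - mu_t, target - mu_g          # centred point sets
    U, S, Vt = np.linalg.svd(B.T @ A)              # SVD of cross-covariance
    R = U @ Vt                                     # optimal rotation
    s = S.sum() / (A ** 2).sum()                   # optimal scale
    return lambda pts: s * (pts - mu_t) @ R.T + mu_g

# sanity check: recover a known transform exactly on noiseless landmarks
rng = np.random.default_rng(1)
template = rng.standard_normal((6, 2))
angle = 0.6
R0 = np.array([[np.cos(angle), -np.sin(angle)],
               [np.sin(angle),  np.cos(angle)]])
target = 1.7 * template @ R0.T + np.array([3.0, -2.0])
warp = fit_similarity(template, target)
```

Fitting in a constrained transform family like this is what lets shape models keep their intrinsic characteristics while conforming to image evidence.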
MED-NCA: Bio-inspired medical image segmentation
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-04-28 · DOI: 10.1016/j.media.2025.103601
John Kalkhof, Niklas Ihm, Tim Köhler, Bjarne Gregori, Anirban Mukhopadhyay
Abstract: The reliance on computationally intensive U-Net and Transformer architectures significantly limits their accessibility in low-resource environments, creating a technological divide that hinders global healthcare equity, especially in medical diagnostics and treatment planning. This divide is most pronounced in low- and middle-income countries, primary care facilities, and conflict zones. We introduce MED-NCA, a family of Neural Cellular Automata (NCA)-based segmentation models characterized by low parameter counts, robust performance, and inherent quality-control mechanisms. These features drastically lower the barriers to high-quality medical image analysis in resource-constrained settings, allowing the models to run efficiently on hardware as minimal as a Raspberry Pi or a smartphone. Building upon the foundation laid by MED-NCA, this paper extends its validation across eight distinct anatomies: hippocampus and prostate (MRI, 3D), liver and spleen (CT, 3D), heart and lung (X-ray, 2D), breast tumor (ultrasound, 2D), and skin lesion (photographic image, 2D). Our comprehensive evaluation demonstrates the broad applicability and effectiveness of MED-NCA in various medical imaging contexts, matching the performance of UNet models two orders of magnitude larger. Additionally, we introduce NCA-VIS, a visualization tool that gives insight into the inference process of MED-NCA and lets users test its robustness by applying various artifacts. This combination of efficiency, broad applicability, and enhanced interpretability makes MED-NCA a transformative solution for medical image analysis, fostering greater global healthcare equity by making advanced diagnostics accessible in even the most resource-limited environments.
Citations: 0
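For readers unfamiliar with Neural Cellular Automata: every cell repeatedly perceives its neighbourhood, runs a tiny shared network, and applies a stochastic residual update, which is why parameter counts stay so small. A generic numpy sketch of one such step (illustrative only; weights and perception filters here are arbitrary, not the MED-NCA architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def nca_step(state, w1, w2, fire_rate=0.5):
    """One NCA update on a (H, W, C) cell-state grid."""
    # perception: each cell sees itself plus its 4-neighbour average
    neigh = (np.roll(state, 1, 0) + np.roll(state, -1, 0) +
             np.roll(state, 1, 1) + np.roll(state, -1, 1)) / 4.0
    perception = np.concatenate([state, neigh], axis=-1)   # (H, W, 2C)
    hidden = np.maximum(perception @ w1, 0.0)              # shared ReLU MLP
    delta = hidden @ w2                                    # residual update
    # stochastic firing: only a random subset of cells updates each step
    mask = rng.random(state.shape[:2] + (1,)) < fire_rate
    return state + delta * mask

state = np.zeros((16, 16, 4))
state[8, 8, 0] = 1.0                          # seed a single active cell
w1 = rng.standard_normal((8, 16)) * 0.1       # hypothetical tiny weights
w2 = rng.standard_normal((16, 4)) * 0.1
out = nca_step(state, w1, w2)
```

Iterating this local rule lets a segmentation grow outward from seeds; the shared MLP is the entire learned model.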
Medical image translation with deep learning: Advances, datasets and perspectives
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-04-27 · DOI: 10.1016/j.media.2025.103605
Junxin Chen, Zhiheng Ye, Renlong Zhang, Hao Li, Bo Fang, Li-bo Zhang, Wei Wang
Abstract: Traditional medical image generation often lacks patient-specific clinical information, limiting its clinical utility despite enhancing downstream task performance. In contrast, medical image translation precisely converts images from one modality to another, preserving both anatomical structures and cross-modal features, thus enabling efficient and accurate modality transfer and offering unique advantages for model development and clinical practice. This paper reviews the latest advances in deep learning (DL)-based medical image translation. It first elaborates on the diverse tasks and practical applications of medical image translation, then provides an overview of fundamental models, including convolutional neural networks (CNNs), transformers, and state space models (SSMs). It further covers generative models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), autoregressive models (ARs), diffusion models, and flow models. Evaluation metrics for assessing translation quality are discussed, emphasizing their importance, and commonly used datasets are analyzed, highlighting their characteristics and applications. Looking ahead, the paper identifies future trends and challenges and proposes research directions and solutions for medical image translation. It aims to serve as a valuable reference and inspiration for researchers, driving continued progress and innovation in this area.
Citations: 0
General retinal image enhancement via reconstruction: Bridging distribution shifts using latent diffusion adaptors
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-04-26 · DOI: 10.1016/j.media.2025.103603
Bingyu Yang, Haonan Han, Weihang Zhang, Huiqi Li
Abstract: Deep learning-based fundus image enhancement has attracted extensive research attention recently and has shown remarkable effectiveness in improving the visibility of low-quality images. However, these methods are often constrained to specific datasets and degradations, leading to poor generalization and challenging fine-tuning. We therefore propose a general method for fundus image enhancement with improved generalizability and flexibility, which decomposes the enhancement task into reconstruction and adaptation phases. In the reconstruction phase, self-supervised training with unpaired data allows extensive public datasets to be used to improve the generalizability of the model. During the adaptation phase, the model is fine-tuned to the target datasets and their degradations, starting from the pre-trained reconstruction weights. The proposed method improves the feasibility of latent diffusion models for retinal image enhancement. An adaptation loss and an enhancement adaptor are introduced in the autoencoders and diffusion networks, requiring fewer paired training samples and fewer trainable parameters and converging faster than training from scratch. The method is easily fine-tuned, and experiments demonstrate its adaptability to different datasets and degradations. Additionally, the reconstruction-adaptation framework can be applied to different backbones and other modalities, demonstrating its generality.
Citations: 0
A lung structure and function information-guided residual diffusion model for predicting idiopathic pulmonary fibrosis progression
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-04-26 · DOI: 10.1016/j.media.2025.103604
Caiwen Jiang, Xiaodan Xing, Yang Nan, Yingying Fang, Sheng Zhang, Simon Walsh, Guang Yang, Dinggang Shen
Abstract: Idiopathic Pulmonary Fibrosis (IPF) is a progressive lung disease that continuously scars and thickens lung tissue, leading to respiratory difficulties. Timely assessment of IPF progression is essential for developing treatment plans and improving patient survival rates. However, current clinical standards require multiple (usually two) CT scans at set intervals to assess disease progression. This presents a dilemma: progression is identified only after the disease has already progressed. To address this issue, a feasible solution is to generate the follow-up CT image from the patient's initial CT image, achieving early prediction of IPF. To this end, we propose a lung structure and function information-guided residual diffusion model. Its key components are: (1) a 2.5D generation strategy to reduce the computational cost of generating 3D images with a diffusion model; (2) structural attention to mitigate the negative impact of spatial misalignment between the two CT images on generation performance; (3) residual diffusion to accelerate training and inference while focusing on the differences between the two CT images (i.e., the lesion areas); and (4) a CLIP-based text-extraction module that extracts lung-function-test information and uses it to guide the generation. Extensive experiments demonstrate that our method effectively predicts IPF progression and achieves superior generation performance compared to state-of-the-art methods.
Citations: 0
“Recon-all-clinical”: Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-04-26 · DOI: 10.1016/j.media.2025.103608
Karthik Gopinath, Douglas N. Greve, Colin Magdamo, Steve Arnold, Sudeshna Das, Oula Puonti, Juan Eugenio Iglesias, Alzheimer’s Disease Neuroimaging Initiative
Abstract: Surface-based analysis of the cerebral cortex is ubiquitous in human neuroimaging with MRI. It is crucial for tasks like cortical registration, parcellation, and thickness estimation. Traditionally, such analyses require high-resolution, isotropic scans with good gray-white matter contrast, typically a T1-weighted scan at 1 mm resolution. This requirement precludes applying these techniques to most MRI scans acquired for clinical purposes, which are often anisotropic and lack the required T1-weighted contrast. To overcome this limitation and enable large-scale neuroimaging studies using the vast amounts of existing clinical data, we introduce recon-all-clinical, a novel methodology for cortical reconstruction, registration, parcellation, and thickness estimation for clinical brain MRI scans of any resolution and contrast. Our hybrid approach combines a convolutional neural network (CNN), trained with domain randomization to predict signed distance functions (SDFs), with classical geometry processing for accurate surface placement under topological and geometric constraints. The method does not require retraining for different acquisitions, simplifying the analysis of heterogeneous clinical datasets. We evaluated recon-all-clinical on multiple public datasets (ADNI, HCP, AIBL, OASIS) and on a large clinical dataset of over 9,500 scans. The results indicate that our method produces geometrically precise cortical reconstructions across MRI contrasts and resolutions, consistently achieving high parcellation accuracy. Cortical thickness estimates are precise enough to capture aging effects independently of MRI contrast, although accuracy varies with slice thickness. Our method is publicly available at https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all-clinical, enabling detailed cortical analysis of the huge amounts of already existing clinical MRI scans. This advance may be particularly valuable for studying rare diseases and underrepresented populations, where research-grade MRI data are scarce.
Citations: 0
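The entry above predicts signed distance functions and then places surfaces with geometry processing. As a toy illustration of how an SDF encodes a surface, the zero level set can be located by sign changes between neighbouring grid cells (a simplification; the actual pipeline enforces topological constraints during surface placement):

```python
import numpy as np

def zero_crossings(sdf):
    """Boolean mask of grid cells where a 2-D signed distance function
    changes sign against a right or down neighbour, i.e. cells the
    implicit surface (here: curve) passes through."""
    cross_x = np.signbit(sdf[:, :-1]) != np.signbit(sdf[:, 1:])
    cross_y = np.signbit(sdf[:-1, :]) != np.signbit(sdf[1:, :])
    mask = np.zeros(sdf.shape, dtype=bool)
    mask[:, :-1] |= cross_x
    mask[:, 1:] |= cross_x
    mask[:-1, :] |= cross_y
    mask[1:, :] |= cross_y
    return mask

# SDF of a circle of radius 10 centred in a 32×32 grid
ys, xs = np.mgrid[0:32, 0:32]
sdf = np.hypot(xs - 16, ys - 16) - 10.0
band = zero_crossings(sdf)
```

Real surface placement interpolates the crossing position sub-voxel from the SDF values rather than flagging whole cells, but the zero level set is the same object.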
ProtoASNet: Comprehensive evaluation and enhanced performance with uncertainty estimation for aortic stenosis classification in echocardiography
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-04-24 · DOI: 10.1016/j.media.2025.103600
Ang Nan Gu, Hooman Vaseli, Michael Y. Tsang, Victoria Wu, S. Neda Ahmadi Amiri, Nima Kondori, Andrea Fung, Teresa S.M. Tsang, Purang Abolmaesumi
Abstract: Aortic stenosis (AS) is a prevalent heart valve disease that requires accurate and timely diagnosis for effective treatment. Current methods for automated AS severity classification rely on black-box deep learning techniques, which suffer from low trustworthiness and hinder clinical adoption. To tackle this challenge, we propose ProtoASNet, a prototype-based neural network that classifies AS severity from B-mode echocardiography videos. ProtoASNet bases its predictions exclusively on similarity scores between the input and a set of learned spatio-temporal prototypes, ensuring inherent interpretability. Users can directly visualize the similarity between the input and each prototype, as well as the weighted sum of similarities. This approach provides clinically relevant evidence for each prediction, as the prototypes typically highlight markers such as calcification and restricted movement of the aortic valve leaflets. Moreover, ProtoASNet uses an abstention loss to estimate aleatoric uncertainty by defining a set of prototypes that capture ambiguity and insufficient information in the observed data, augmenting prototype-based models with the ability to explain when they may fail. We evaluate ProtoASNet on a private dataset and on the publicly available TMED-2 dataset. It surpasses existing state-of-the-art methods, achieving a balanced accuracy of 80.0% on our private dataset and 79.7% on TMED-2. By discarding cases flagged as uncertain, ProtoASNet reaches an improved balanced accuracy of 82.4% on our private dataset. Furthermore, by offering interpretability and an uncertainty measure for each prediction, ProtoASNet improves transparency and facilitates the interactive use of deep networks in clinical decision-making. Our source code is available at https://github.com/hooman007/ProtoASNet.
Citations: 0
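ProtoASNet's predictions are weighted sums of similarities to learned prototypes, with dedicated prototypes whose activation flags ambiguous inputs. A hypothetical minimal sketch of that inference pattern in numpy (names, shapes, and the cosine similarity are illustrative choices, not the paper's network):

```python
import numpy as np

def proto_predict(feature, prototypes, class_weights, abstain_proto):
    """Score classes by similarity of an input feature to learned prototypes;
    similarity to a dedicated abstention prototype is returned as an
    uncertainty proxy, so the model can explain when it may fail."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    sims = np.array([cos(feature, p) for p in prototypes])  # one score per prototype
    logits = class_weights @ sims                           # weighted sum of similarities
    uncertainty = cos(feature, abstain_proto)
    return int(logits.argmax()), logits, uncertainty

# toy setup: two orthogonal class prototypes, one abstention prototype
protos = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
W = np.eye(2)                          # each prototype votes for its own class
abstain = np.array([0.0, 0.0, 1.0])
pred, logits, unc = proto_predict(np.array([0.9, 0.1, 0.0]), protos, W, abstain)
```

Because the logits are linear in the similarity scores, every prediction decomposes into per-prototype evidence that can be visualized directly.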
NuHTC: A hybrid task cascade for nuclei instance segmentation and classification
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-04-23 · DOI: 10.1016/j.media.2025.103595
Bao Li, Zhenyu Liu, Song Zhang, Xiangyu Liu, Caixia Sun, Jiangang Liu, Bensheng Qiu, Jie Tian
Abstract: Nuclei instance segmentation and classification of hematoxylin and eosin (H&E) stained digital pathology images are essential for downstream cancer diagnosis and prognosis tasks. Previous works mainly focused on bottom-up methods using a single-level feature map for segmenting nuclei instances, while multilevel feature maps seem better suited to nuclei of various sizes and types. In this paper, we develop an effective top-down nuclei instance segmentation and classification framework (NuHTC) based on a hybrid task cascade (HTC). NuHTC has two new components: a watershed proposal network (WSPN) and a hybrid feature extractor (HFE). The WSPN provides additional proposals for the region proposal network, leading the model to predict bounding boxes more precisely. The HFE, at the region-of-interest (RoI) alignment stage, better exploits both high-level global and low-level semantic features, guiding NuHTC to learn nuclei instance features with less intraclass variance. We conduct extensive experiments on four public multiclass nuclei instance segmentation datasets. The quantitative results demonstrate NuHTC's superiority in both instance segmentation and classification compared to other state-of-the-art methods.
Citations: 0
MVNMF: Multiview nonnegative matrix factorization for radio-multigenomic analysis in breast cancer prognosis
IF 10.7 · CAS Q1 · Medicine
Medical image analysis · Pub Date: 2025-04-22 · DOI: 10.1016/j.media.2025.103566
Jian Guan, Ming Fan, Lihua Li
Abstract: Radiogenomic research provides a deeper understanding of breast cancer biology by investigating the correlations between imaging phenotypes and genetic data. However, current radiogenomic research primarily focuses on correlations between imaging phenotypes and single-genomic data (e.g., gene expression), overlooking the potential of multi-genomics data to unveil more nuances in cancer characterization. To this end, we propose a multiview nonnegative matrix factorization (MVNMF) method for radio-multigenomic analysis that identifies dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) features associated with multi-genomics data, including DNA copy number alterations, mutations, and mRNAs, each of which is independently predictive of cancer outcomes. MVNMF incorporates subspace learning and multiview regularization into a unified model to simultaneously select features and explore correlations. Subspace learning identifies representative radiomic features crucial for tumor analysis, while multiview regularization learns the correlation between the identified radiomic features and multi-genomics data. Experimental results showed that, for overall survival prediction in breast cancer, MVNMF classified patients into two distinct groups with significantly different survival (p = 0.0012). It also achieved better performance, with a C-index of 0.698, than the method without any genomics data (C-index = 0.528). MVNMF is an effective framework for identifying radiomic features linked to multi-genomics data, improving predictive power and deepening understanding of the biological mechanisms underlying observed phenotypes. It offers a novel framework for prognostic prediction in breast cancer, with the potential to catalyze further radiogenomic and radio-multigenomic studies.
Citations: 0
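At its core, multiview NMF couples several nonnegative data views through shared factors. A bare-bones multiplicative-update sketch with one coefficient matrix H shared across views (a deliberate simplification: the paper's model additionally performs subspace learning and multiview regularization, which are omitted here):

```python
import numpy as np

def mvnmf(views, k, iters=200, eps=1e-9, seed=0):
    """Joint NMF of several nonnegative views X_v ≈ W_v H with a shared H,
    fitted by standard multiplicative updates (no regularization terms).
    All views must have the same number of columns (samples)."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[1]
    Ws = [rng.random((X.shape[0], k)) for X in views]
    H = rng.random((k, n))
    for _ in range(iters):
        for W, X in zip(Ws, views):
            W *= (X @ H.T) / (W @ H @ H.T + eps)      # per-view basis update
        num = sum(W.T @ X for W, X in zip(Ws, views))  # shared-H update pools
        den = sum(W.T @ W @ H for W in Ws) + eps       # evidence from all views
        H *= num / den
    return Ws, H

# two hypothetical views sharing 30 samples (e.g. imaging and one genomic view)
X1 = np.abs(np.random.default_rng(2).standard_normal((20, 30)))
X2 = np.abs(np.random.default_rng(3).standard_normal((15, 30)))
Ws, H = mvnmf([X1, X2], k=5)
err = np.linalg.norm(X1 - Ws[0] @ H) / np.linalg.norm(X1)
```

The shared H is what forces the per-view factors to describe the same latent sample structure, which is the mechanism the correlation analysis builds on.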