Medical image analysis: Latest Articles

Channel-wise joint disentanglement representation learning for B-mode and super-resolution ultrasound based CAD of breast cancer
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-01-22 DOI: 10.1016/j.media.2026.103957
Yuhang Zheng, Jiale Xu, Qing Hua, Xiaohong Jia, Xueqin Hou, Yanfeng Yao, Zheng Wei, Yulu Zhang, Fanggang Wu, Wei Guo, Yuan Tian, Jun Wang, Shujun Xia, Yijie Dong, Jun Shi, Jianqiao Zhou
B-mode ultrasound (BUS) is widely used in breast cancer diagnosis, while the emerging super-resolution ultrasound (SRUS) provides microvascular information with high spatial resolution and has shown great potential for improving breast cancer diagnosis. However, as a new ultrasound modality, SRUS interpretation remains highly dependent on the clinical experience of sonologists, highlighting the need for reliable computer-aided diagnosis (CAD) approaches. In this work, a novel dual-branch network with a Channel-Wise Joint Disentanglement Representation Learning (CW-JDRL) method is proposed for multimodal ultrasound-based CAD of breast cancer, where one branch processes BUS and the other analyzes multimodal SRUS data. CW-JDRL is applied to the SRUS branch by grouping the final-layer network channels to capture both common and specific properties. It consists of two modules: a Gradient-guided Disentanglement (GD) module and a Gramian-based Contrastive Learning Disentanglement (GCLD) module. The former disentangles with gradient guidance to encourage consistency among common channels and distinctiveness among specific ones; the latter disentangles common and specific representations by integrating them into a unified contrastive objective. Extensive experiments on a multicenter SRUS dataset demonstrate that the proposed dual-branch network with CW-JDRL achieves superior performance over the compared algorithms and maintains robust generalizability to external data. This suggests not only the effectiveness of SRUS for breast cancer diagnosis, but also the potential of the proposed CAD model in clinical practice. Code: https://github.com/Zyh-AIUltra/CW-JDRL
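The split of final-layer channels into "common" and "specific" groups can be illustrated with a toy pull/push objective. This is a hedged NumPy sketch: the half-and-half split, cosine similarity, and margin are illustrative assumptions, not the paper's GD/GCLD formulation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def joint_disentanglement_loss(feat_a, feat_b, margin=0.2):
    """Toy channel-wise disentanglement objective.

    The first half of each feature vector is treated as the 'common'
    channel group (should agree across modalities/views) and the second
    half as the 'specific' group (should stay distinct).
    """
    c = feat_a.shape[0] // 2
    common_a, spec_a = feat_a[:c], feat_a[c:]
    common_b, spec_b = feat_b[:c], feat_b[c:]
    # Pull the common channel groups together ...
    pull = 1.0 - cosine(common_a, common_b)
    # ... and push the specific groups apart, up to a margin.
    push = max(0.0, cosine(spec_a, spec_b) - margin)
    return pull + push

# Identical common parts, orthogonal specific parts -> near-zero loss.
f1 = np.array([1.0, 0.0, 1.0, 0.0])
f2 = np.array([1.0, 0.0, 0.0, 1.0])
print(round(joint_disentanglement_loss(f1, f2), 4))  # → 0.0
```

Features whose specific halves coincide are penalized, which is the behaviour a disentanglement objective is meant to enforce.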
Citations: 0
Physics-informed graph neural networks for flow field estimation in carotid arteries
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-02-07 DOI: 10.1016/j.media.2026.103974
Julian Suk, Dieuwertje Alblas, Barbara A. Hutten, Albert Wiegman, Christoph Brune, Pim van Ooij, Jelmer M. Wolterink
Hemodynamic quantities are valuable biomedical risk factors for cardiovascular pathology such as atherosclerosis. Non-invasive, in-vivo measurement of these quantities can only be performed using a select number of modalities that are not widely available, such as 4D flow magnetic resonance imaging (MRI). In this work, we create a surrogate model for hemodynamic flow field estimation, powered by machine learning. We train graph neural networks that include priors about the underlying symmetries and physics, limiting the amount of data required for training. This allows us to train the model using moderately sized, in-vivo 4D flow MRI datasets, instead of large in-silico datasets obtained by computational fluid dynamics (CFD), as is the current standard. We create an efficient, equivariant neural network by combining the popular PointNet++ architecture with group-steerable layers. To incorporate the physics-informed priors, we derive an efficient discretisation scheme for the involved differential operators. We perform extensive experiments in carotid arteries and show that our model can accurately estimate low-noise hemodynamic flow fields in the carotid artery. Moreover, we show how the learned relation between geometry and hemodynamic quantities transfers to 3D vascular models obtained using a different imaging modality than the training data. This shows that physics-informed graph neural networks can be trained using 4D flow MRI data to estimate blood flow in unseen carotid artery geometries.
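A typical physics-informed prior for incompressible blood flow penalizes the divergence of the predicted velocity field. The sketch below uses central differences on a regular grid as a stand-in for the paper's graph-based discretisation of the differential operators (an illustrative assumption, not their scheme).

```python
import numpy as np

def divergence_penalty(vx, vy, h):
    """Mean squared divergence of a 2-D velocity field on a regular grid.

    For incompressible flow, a physics-informed loss term drives
    div(v) = dvx/dx + dvy/dy toward zero everywhere.
    """
    dvx_dx = np.gradient(vx, h, axis=1)  # x varies along axis 1
    dvy_dy = np.gradient(vy, h, axis=0)  # y varies along axis 0
    div = dvx_dx + dvy_dy
    return float(np.mean(div ** 2))

n, h = 32, 0.1
y, x = np.mgrid[0:n, 0:n] * h
# A rigid rotation v = (-y, x) is divergence-free ...
penalty_rot = divergence_penalty(-y, x, h)
# ... while a radial expansion v = (x, y) has div(v) = 2 everywhere.
penalty_src = divergence_penalty(x, y, h)
```

Adding such a residual to the data loss is what makes the network "physics-informed": solutions that violate mass conservation are penalized even where no 4D flow MRI measurement constrains them.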
Citations: 0
4D monocular surgical reconstruction under arbitrary camera motions
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-02-11 DOI: 10.1016/j.media.2026.103989
Jiwei Shan, Zeyu Cai, Cheng-Tai Hsieh, Yirui Li, Hao Liu, Lijun Han, Hesheng Wang, Shing Shin Cheng
Reconstructing deformable surgical scenes from endoscopic videos is a challenging task with important clinical applications. Recent state-of-the-art approaches, such as those based on implicit neural representations or 3D Gaussian splatting, have made notable progress in this area. However, most existing methods are designed for deformable scenes with fixed endoscope viewpoints and rely on stereo depth priors or accurate structure-from-motion for both initialization and optimization. This limits their ability to handle monocular sequences with large camera movements, restricting their use in real clinical settings. To address these limitations, we propose Local-EndoGS, a high-quality 4D reconstruction framework for monocular endoscopic sequences with arbitrary camera motion. Local-EndoGS introduces a progressive, window-based global scene representation that allocates local deformable scene representations for each observed window, enabling scalability to long sequences with substantial camera movement. To overcome unreliable initialization due to the lack of stereo depth or accurate structure-from-motion, we propose a coarse-to-fine initialization strategy that integrates multi-view geometry, cross-window information, and monocular depth priors, providing a robust foundation for subsequent optimization. In addition, we incorporate long-range 2D pixel trajectory constraints and physical motion priors to improve the physical plausibility of the recovered deformations. We comprehensively evaluate Local-EndoGS on three public endoscopic datasets with deformable scenes and varying camera motions. Local-EndoGS achieves superior performance in both appearance quality and geometry, consistently outperforming state-of-the-art methods. Extensive ablation studies further validate the effectiveness of our key designs. Our code will be released upon acceptance at https://github.com/IRMVLab/Local-EndoGS.
Citations: 0
A navigation-guided 3D breast ultrasound scanning and reconstruction system for automated multi-lesion spatial localization and diagnosis
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-01-28 DOI: 10.1016/j.media.2026.103965
Yi Zhang, Yulin Yan, Kun Wang, Muyu Cai, Yifei Xiang, Yan Guo, Puxun Tu, Tao Ying, Xiaojun Chen
Handheld ultrasound (HHUS) is indispensable for breast cancer screening but remains compromised by operator-dependent acquisition, subjective 2D interpretation, and clock-face annotation. Existing spatial tracking systems for HHUS typically lack integration, adaptability, flexibility, and robust 3D representation. Additionally, current deep learning diagnostic methods are predominantly based on single ultrasound images, whereas video-based malignancy classification approaches suffer from limited temporal interpretability. In this study, we develop an intelligent navigation-guided breast ultrasound scanning system delivering seamless 3D reconstruction, nipple-centric lesion localization, and video-based malignancy prediction with full adaptation to the routine workflow. Specifically, a Hybrid Lesion-informed Spatiotemporal Transformer (HLST) is proposed to selectively fuse intra- and peri-lesional dynamics augmented from a prompt-driven BUS-SAM-2 foundation model for sequence-level classification. Moreover, a geometry-adaptive clock projection and analysis method is designed to enable automated standardized clock-face orientation and lesion-to-nipple distance measurement for breasts of arbitrary shape, eliminating patient-attached fiducials or pre-marked landmarks. Validation on three breast phantoms demonstrated high correlations with the CT reference (r > 0.99 for distance, r > 0.97 for 3D size, and r = 1.00 for clockwise angle; p < 0.0001). Clinical evaluation in 43 female patients (30 abnormal breasts) yielded median clock-face orientation and size discrepancies of 0 h and 0.7 mm × 0.6 mm, respectively, versus conventional reports. Meanwhile, HLST achieved superior performance (86.1% accuracy) on the BUV dataset. By coupling precise 3D spatial annotation with foundation-model-enhanced spatiotemporal characterization, the proposed system offers a reliable, streamlined workflow that standardizes follow-up, guides biopsies, and promotes diagnostic confidence in HHUS practice.
Citations: 0
DPFR: Semi-supervised gland segmentation via density perturbation and feature recalibration
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-01-27 DOI: 10.1016/j.media.2026.103962
Jiejiang Yu, Yu Liu
In recent years, semi-supervised methods have attracted considerable attention in gland segmentation of histopathological images, as they can substantially reduce the annotation burden for pathologists. The most widely adopted approach is the Mean-Teacher framework, which exploits unlabeled data through consistency-regularization constraints. However, due to the morphological complexity of glands in histopathological images, existing methods still suffer from confusion between glands and background, as well as gland adhesion. To address these challenges, we propose a semi-supervised gland segmentation method based on Density Perturbation and Feature Recalibration (DPFR). Specifically, we first design a normalizing-flow-based density estimator to effectively model the feature density distributions of glands, contours, and background. The gradient information of the estimator is then exploited to determine the descent direction in low-density regions, along which perturbations are applied to enhance feature discriminability. Furthermore, a contrastive-learning-based feature recalibration module is designed to alleviate inter-class distribution confusion, thereby improving gland-background separability and mitigating gland adhesion. Extensive experiments on three public gland segmentation datasets demonstrate that the proposed method consistently outperforms existing semi-supervised approaches, achieving state-of-the-art performance by a substantial margin. The code is available at https://github.com/Methow0/DPFR.
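The idea of perturbing features along a density-descent direction can be illustrated with a single Gaussian standing in for the normalizing-flow density estimator. The Gaussian model, the low-density threshold, and the step size are all illustrative assumptions, not DPFR's actual components.

```python
import numpy as np

def gaussian_score(x, mu, cov_inv):
    """Gradient of the Gaussian log-density at x (the 'score')."""
    return -cov_inv @ (x - mu)

def density_perturb(feats, mu, cov, threshold, step=0.5):
    """Toy density-guided perturbation.

    Features whose (unnormalized) log-density falls below `threshold`
    are pushed one step along the density-descent direction, i.e. the
    negative score, which moves them further into low-density regions
    to stress feature discriminability.
    """
    cov_inv = np.linalg.inv(cov)
    out = feats.copy()
    for i, x in enumerate(feats):
        diff = x - mu
        log_density = -0.5 * diff @ cov_inv @ diff  # up to a constant
        if log_density < threshold:  # low-density sample
            score = gaussian_score(x, mu, cov_inv)
            out[i] = x - step * score / (np.linalg.norm(score) + 1e-8)
    return out

mu, cov = np.zeros(2), np.eye(2)
feats = np.array([[0.0, 0.0], [3.0, 0.0]])
out = density_perturb(feats, mu, cov, threshold=-2.0)
# The high-density sample is untouched; the outlier is pushed outward.
```

In the paper the same role is played by the flow's exact log-density gradient, which is available analytically for normalizing flows.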
Citations: 0
Incorporating global-local tissue changes to predict future breast cancer from longitudinal screening mammograms
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-02-16 DOI: 10.1016/j.media.2026.103990
Xin Wang, Tao Tan, Yuan Gao, Eric Marcus, Hong-Yu Zhou, Chunyao Lu, Luyi Han, Antonio Portaluri, Ruisheng Su, Tianyu Zhang, Xinglong Liang, Regina Beets-Tan, Katja Pinker, Yue Sun, Ritse Mann, Jonas Teuwen
Early detection of breast cancer (BC) through mammography screening is critical for reducing mortality and improving patient outcomes. However, full-population, age-driven screening may not lead to optimal resource use and can amplify screening-associated harms in low-risk women. Accurate and interpretable BC risk prediction is essential to improve strategies and make screening more personalized. Although recent deep learning models have shown promise in leveraging mammograms for risk stratification, challenges remain in interpretable modeling of temporal changes, efficient capture of multi-scale risk-related tissue features from large-scale images, and precise time prediction to enhance clinical interpretability. In this study, we propose the Tracking-Aware Breast Cancer Risk model (TA-BreaCR), a novel framework that integrates local-to-global multiscale longitudinal tissue changes and explicitly models the ordinal relationship of time to BC events, enabling joint prediction of both future BC risk and estimated time to onset. The model is evaluated on two datasets (in-house and EMBED), outperforming existing state-of-the-art methods in both risk classification and time-to-event prediction tasks. Visualization analysis reveals consistent attention to high-risk regions over time, enhancing interpretability. These results highlight the potential of TA-BreaCR to support individualized BC screening and prevention.
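One standard way to encode the ordinal structure of time-to-event labels is a cumulative "event by year k" target vector trained with per-year binary cross-entropy. This is a generic sketch of that idea under stated assumptions (5-year horizon, independent sigmoid heads), not necessarily TA-BreaCR's exact objective.

```python
import numpy as np

def ordinal_targets(event_year, horizon):
    """Cumulative encoding: target[k] = 1 iff the event occurs by year k+1.

    Because the indicators are monotone in k, the encoding captures the
    ordinal relationship between time-to-event labels.
    """
    return (np.arange(1, horizon + 1) >= event_year).astype(float)

def ordinal_bce(logits, targets):
    """Binary cross-entropy summed over the per-year risk heads."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-8
    return float(-np.sum(targets * np.log(p + eps)
                         + (1 - targets) * np.log(1 - p + eps)))

# A patient diagnosed in year 3 of a 5-year horizon:
t = ordinal_targets(3, 5)  # -> [0, 0, 1, 1, 1]
```

A prediction that flips the order (high risk early, low risk late) incurs a large loss, which is exactly the penalty an ordinal formulation is meant to impose.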
Citations: 0
CATERPillar: a flexible framework for generating white matter numerical substrates with incorporated glial cells
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-01-17 DOI: 10.1016/j.media.2026.103946
Jasmine Nguyen-Duc, Malte Brammerloh, Melina Cherchali, Inès De Riedmatten, Jean-Baptiste Pérot, Jonathan Rafael-Patiño, Ileana O. Jelescu
Monte Carlo diffusion simulations in numerical substrates are valuable for exploring the sensitivity and specificity of the diffusion MRI (dMRI) signal to realistic cell microstructure features. A crucial component of such simulations is the use of numerical phantoms that accurately represent the target tissue, in this case cerebral white matter (WM). This study introduces CATERPillar (Computational Axonal Threading Engine for Realistic Proliferation), a novel method that simulates the mechanics of axonal growth using overlapping spheres as elementary units. CATERPillar facilitates parallel axon development while preventing collisions, offering user control over key structural parameters such as cellular density, undulation, beading, and myelination. Its uniqueness lies in its ability to generate not only realistic axonal structures but also realistic glial cells, enhancing the biological fidelity of simulations. We showed that our grown substrates feature distributions of key morphological parameters that agree with those from histological studies. The structural realism of the astrocytic components was quantitatively validated using Sholl analysis. Furthermore, the time-dependent diffusion in the extra- and intra-axonal compartments accurately reflected the expected characteristics of short-range disorder, as predicted by theoretical models. CATERPillar is open source and can be used to (a) develop new acquisition schemes that sensitise the MRI signal to unique tissue microstructure features, (b) test the accuracy of a broad range of analytical models, and (c) build sets of substrates for training machine learning models.
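The overlapping-spheres growth mechanic can be sketched as a chain of spheres advanced step by step, with random jitter standing in for undulation and a simple collision-rejection test against already-occupied space. This is a simplified sketch, not the CATERPillar implementation; the radii, jitter scale, and retry limit are illustrative assumptions.

```python
import numpy as np

def grow_axon(start, direction, n_spheres, radius, occupied,
              max_tries=20, rng=None):
    """Grow an axon as a chain of overlapping spheres, avoiding collisions.

    Each new sphere is placed one radius ahead of the previous one (so
    consecutive spheres overlap), with a small random jitter mimicking
    undulation. A candidate closer than 2*radius to any sphere centre in
    `occupied` (other axons or glia) is rejected and re-drawn; if no
    collision-free placement is found, growth stops.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    centers = [np.asarray(start, dtype=float)]
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(n_spheres - 1):
        for _attempt in range(max_tries):
            jitter = rng.normal(scale=0.1 * radius, size=3)
            cand = centers[-1] + radius * d + jitter
            if all(np.linalg.norm(cand - c) >= 2 * radius for c in occupied):
                centers.append(cand)
                break
        else:
            break  # growth blocked: no collision-free placement found
    return np.array(centers)

# Grow along +z past one foreign sphere that is far away (never collides).
axon = grow_axon([0, 0, 0], [0, 0, 1], n_spheres=10, radius=0.5,
                 occupied=[np.array([100.0, 0.0, 0.0])])
```

Running many such growths in parallel, with each finished sphere added to `occupied`, gives densely packed, non-colliding fibres of the kind the framework describes.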
Citations: 0
Anatomy-guided prompting with cross-modal self-alignment for whole-body PET-CT breast cancer segmentation
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-01-22 DOI: 10.1016/j.media.2026.103956
Jiaju Huang, Xiao Yang, Xinglong Liang, Shaobin Chen, Yue Sun, Greta Sp Mok, Shuo Li, Ying Wang, Tao Tan
Accurate segmentation of breast cancer in PET-CT images is crucial for precise staging, monitoring treatment response, and guiding personalized therapy. However, the small size and dispersed nature of metastatic lesions, coupled with the scarcity of annotated data and the inter-modality heterogeneity that hinders effective information fusion, make this task challenging. This paper proposes a novel anatomy-guided cross-modal learning framework to address these issues. Our approach first generates organ pseudo-labels through a teacher-student learning paradigm, which serve as anatomical prompts to guide cancer segmentation. We then introduce a self-aligning cross-modal pre-training method that aligns PET and CT features in a shared latent space through masked 3D patch reconstruction, enabling effective cross-modal feature fusion. Finally, we initialize the segmentation network's encoder with the pre-trained encoder weights, and incorporate organ labels through a Mamba-based prompt encoder and a Hypernet-Controlled Cross-Attention mechanism for dynamic anatomical feature extraction and fusion. Notably, our method outperforms eight state-of-the-art methods, including CNN-based, transformer-based, and Mamba-based approaches, on two datasets encompassing primary breast cancer, metastatic breast cancer, and other cancer segmentation tasks.
Citations: 0
Knowledge-guided multi-geometric window transformer for cardiac cine MRI reconstruction
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-01-09 DOI: 10.1016/j.media.2026.103936
Jun Lyu, Guangming Wang, Yunqi Wang, Jing Qin, Chengyan Wang
Magnetic resonance imaging (MRI) plays a crucial role in clinical diagnosis, yet traditional MR image acquisition often requires a prolonged duration, potentially causing patient discomfort and image artifacts. Faster and more accurate image reconstruction can alleviate patient discomfort during MRI examinations and enhance diagnostic accuracy and efficiency. In recent years, significant advances in deep learning have shown promise for improving MR image quality and accelerating acquisition. Addressing the demand for cardiac cine MRI reconstruction, we propose KGMgT, a novel knowledge-guided MRI reconstruction network. The KGMgT model leverages adaptive spatiotemporal attention mechanisms to infer motion trajectories across adjacent cardiac frames, thereby better extracting complementary information. Additionally, we employ Transformer-driven dynamic feature aggregation to establish long-range dependencies, facilitating global information integration. Experiments demonstrate that KGMgT achieves state-of-the-art performance on multiple benchmark datasets, offering an efficient solution for cardiac cine MRI reconstruction. This collaborative approach, combining artificial intelligence with clinical decision support, holds promise for improving diagnostic efficiency, optimizing treatment plans, and enhancing the patient experience. The code and trained models are available at https://github.com/MICV-Lab/KGMgT.
Citations: 0
Towards boundary confusion for volumetric medical image segmentation
IF 11.8 · Q1 (Medicine)
Medical image analysis Pub Date: 2026-05-01 Epub Date: 2026-01-25 DOI: 10.1016/j.media.2026.103961
Xin You, Ming Ding, Minghui Zhang, Hanxiao Zhang, Junyang Wu, Yi Yu, Jie Yang, Yun Gu
Accurate boundary segmentation of volumetric images is a critical task for image-guided diagnosis and computer-assisted intervention. It is challenging to address boundary confusion with explicit constraints, and existing boundary-refinement methods overemphasize slender structures while overlooking the dynamic interactions between boundaries and neighboring regions. In this paper, we reconceptualize the mechanism of boundary generation by introducing pushing and pulling interactions, and propose a unified network, PP-Net, to model the shape characteristics of confused boundary regions. Specifically, we first propose a semantic difference module (SDM) in the pushing branch to drive the boundary towards the ground truth under diffusion guidance. Additionally, a class clustering module (CCM) in the pulling branch is introduced to stretch the intersected boundary in the opposite direction. The pushing and pulling branches thus furnish two adversarial forces that enhance representation capabilities for faint boundaries. Experiments are conducted on four public datasets and one in-house dataset affected by boundary confusion. The results demonstrate the superiority of PP-Net over other segmentation networks, especially on the Hausdorff Distance and Average Symmetric Surface Distance metrics. Moreover, SDM and CCM can serve as plug-and-play modules to enhance classic U-shaped baseline models, including recent SAM-based foundation models. Source code is available at https://github.com/EndoluminalSurgicalVision-IMR/PnPNet.
Citations: 0