Latest Articles: IEEE Transactions on Medical Imaging

SegMamba-V2: Long-range Sequential Modeling Mamba For General 3D Medical Image Segmentation
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-18 · DOI: 10.1109/tmi.2025.3589797
Zhaohu Xing, Tian Ye, Yijun Yang, Du Cai, Baowen Gai, Xiao-Jian Wu, Feng Gao, Lei Zhu
Abstract: The Transformer architecture has demonstrated remarkable results in 3D medical image segmentation due to its capability of modeling global relationships. However, it poses a significant computational burden when processing high-dimensional medical images. Mamba, a State Space Model (SSM), has recently emerged as a notable approach for modeling long-range dependencies in sequential data. Although a substantial amount of Mamba-based research has focused on natural language and 2D image processing, few studies explore the capability of Mamba on 3D medical images. In this paper, we propose SegMamba-V2, a novel 3D medical image segmentation model, to effectively capture long-range dependencies within whole-volume features at each scale. To achieve this goal, we first devise a hierarchical scale downsampling strategy to enlarge the receptive field and mitigate information loss during downsampling. Furthermore, we design a novel tri-orientated spatial Mamba block that extends the global dependency modeling process from one plane to three orthogonal planes to improve feature representation capability. Moreover, we collect and annotate a large-scale dataset (named CRC-2000) with fine-grained categories to facilitate benchmarking evaluation in 3D colorectal cancer (CRC) segmentation. We evaluate the effectiveness of SegMamba-V2 on CRC-2000 and three other large-scale 3D medical image segmentation datasets, covering various modalities, organs, and segmentation targets. Experimental results demonstrate that SegMamba-V2 outperforms state-of-the-art methods by a significant margin, indicating the universality and effectiveness of the proposed model on 3D medical image segmentation tasks. The code for SegMamba-V2 is publicly available at: https://github.com/ge-xing/SegMamba-V2
Citations: 0
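The tri-orientated spatial Mamba block described in the abstract scans the volume along three orthogonal orderings rather than a single plane. A minimal NumPy sketch of the idea follows; the real model uses learned selective-scan (SSM) layers, so the `causal_scan` recurrence and the averaging fusion here are illustrative stand-ins, not SegMamba-V2's actual operators:

```python
import numpy as np

def flatten_tri_oriented(vol):
    """Flatten a 3D feature volume (D, H, W, C) into three 1D sequences,
    one per orthogonal scan orientation."""
    d, h, w, c = vol.shape
    seq_dhw = vol.reshape(d * h * w, c)                 # depth-major scan
    seq_hwd = vol.transpose(1, 2, 0, 3).reshape(-1, c)  # height-major scan
    seq_wdh = vol.transpose(2, 0, 1, 3).reshape(-1, c)  # width-major scan
    return seq_dhw, seq_hwd, seq_wdh

def causal_scan(seq, decay=0.9):
    """Toy causal recurrence standing in for a selective-scan (SSM) pass:
    h_t = decay * h_{t-1} + x_t, emitted at every position."""
    out = np.zeros_like(seq)
    h = np.zeros(seq.shape[1])
    for t, x in enumerate(seq):
        h = decay * h + x
        out[t] = h
    return out

def tri_oriented_features(vol, decay=0.9):
    """Run the toy scan along all three orientations, map each result back
    to (D, H, W, C), and fuse by averaging."""
    d, h, w, c = vol.shape
    s1, s2, s3 = flatten_tri_oriented(vol)
    f1 = causal_scan(s1, decay).reshape(d, h, w, c)
    f2 = causal_scan(s2, decay).reshape(h, w, d, c).transpose(2, 0, 1, 3)
    f3 = causal_scan(s3, decay).reshape(w, d, h, c).transpose(1, 2, 0, 3)
    return (f1 + f2 + f3) / 3.0
```

Each orientation gives every voxel a different causal history, so the fused feature mixes context from three orthogonal directions in one pass.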
Self-Supervised Neuron Morphology Representation with Graph Transformer
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-18 · DOI: 10.1109/tmi.2025.3590484
Pengpeng Sheng, Gangming Zhao, Tingting Han, Lei Qu
Abstract: Effective representation of neuronal morphology is essential for cell typing and understanding brain function. However, the complexity of neuronal morphology arises not only from inter-class structural differences but also from intra-class variations across developmental stages and environmental conditions. Such diversity poses significant challenges for existing methods in balancing robustness and discriminative power when representing neuronal morphology. To address this, we propose SGTMorph, a hybrid Graph Transformer framework that leverages the local topological modeling capabilities of graph neural networks and the global relational reasoning strengths of Transformers to explicitly encode neuronal structural information. SGTMorph incorporates a random walk-based positional encoding scheme to facilitate effective information propagation across neuronal graphs and introduces a spatially invariant encoding mechanism to improve adaptability to diverse morphologies. This integrated approach enables a robust and comprehensive representation of neuronal morphology while preserving biological fidelity. To enable label-free feature learning, we devise a self-supervised training strategy grounded in geometric and topological similarity metrics. Extensive experiments on five datasets demonstrate SGTMorph's superior performance in neuron morphology classification and retrieval tasks. Furthermore, its practical utility in neuroscience research is validated by accurate predictions of two functional properties: the laminar distribution of somas and axonal projection patterns. The code is publicly available at: https://github.com/big-rain/SGTMorph
Citations: 0
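The random walk-based positional encoding mentioned in the abstract is commonly computed as the k-step return probabilities of a random walk on the graph, i.e. the diagonals of powers of the transition matrix P = D⁻¹A. A small NumPy sketch under that assumption (SGTMorph's exact encoding may differ in detail):

```python
import numpy as np

def rw_positional_encoding(adj, k=4):
    """Random-walk positional encoding: for each node, the return
    probabilities diag(P), diag(P^2), ..., diag(P^k) of the random-walk
    transition matrix P = D^{-1} A."""
    deg = adj.sum(axis=1)
    p = adj / np.maximum(deg, 1)[:, None]   # row-normalized transitions
    pe = np.empty((adj.shape[0], k))
    pk = p.copy()
    for i in range(k):
        pe[:, i] = np.diag(pk)              # k-step return probability
        pk = pk @ p
    return pe
```

Nodes in different local topologies (branch points vs. terminal tips of a neuron tree) get distinct return-probability profiles, which is what makes this a useful structural positional signal.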
Scaling Chest X-ray Foundation Models from Mixed Supervisions for Dense Prediction
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-16 · DOI: 10.1109/tmi.2025.3589928
Fuying Wang, Lequan Yu
Abstract: Foundation models have significantly revolutionized the field of chest X-ray diagnosis with their ability to transfer across various diseases and tasks. However, previous works have predominantly utilized self-supervised learning from medical image-text pairs, which falls short in dense medical prediction tasks due to its sole reliance on such coarse pair supervision, thereby limiting applicability to detailed diagnostics. In this paper, we introduce a Dense Chest X-ray Foundation Model (DCXFM), which utilizes mixed supervision types (i.e., text, label, and segmentation masks) to significantly enhance the scalability of foundation models across various medical tasks. Our model involves two training stages: we first employ a novel self-distilled multimodal pretraining paradigm to exploit text and label supervision, along with local-to-global self-distillation and soft cross-modal contrastive alignment strategies to enhance localization capabilities. Subsequently, we introduce an efficient cost aggregation module, comprising spatial and class aggregation mechanisms, to further advance dense prediction tasks with densely annotated datasets. Comprehensive evaluations on three tasks (phrase grounding, zero-shot semantic segmentation, and zero-shot classification) demonstrate DCXFM's superior performance over other state-of-the-art medical image-text pretraining models. Remarkably, DCXFM exhibits powerful zero-shot capabilities across various datasets in phrase grounding and zero-shot semantic segmentation, underscoring its superior generalization in dense prediction tasks.
Citations: 0
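The soft cross-modal contrastive alignment strategy mentioned above can be illustrated with a CLIP-style symmetric image-text loss whose hard one-hot targets are softened. This is a hedged sketch only: the `label_smooth` target scheme is an assumption for illustration, not DCXFM's published formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_contrastive_loss(img_emb, txt_emb, temperature=0.07, label_smooth=0.1):
    """Symmetric image-text contrastive loss with softened (label-smoothed)
    targets in place of hard one-hot matched pairs."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature             # pairwise cosine similarity
    n = logits.shape[0]
    targets = np.full((n, n), label_smooth / (n - 1))
    np.fill_diagonal(targets, 1.0 - label_smooth)  # soften the one-hot target
    loss_i = -(targets * np.log(softmax(logits, axis=1))).sum(axis=1).mean()
    loss_t = -(targets * np.log(softmax(logits.T, axis=1))).sum(axis=1).mean()
    return (loss_i + loss_t) / 2
```

Softening the targets keeps some probability mass on non-matched pairs, which is gentler when captions partially describe several images, as often happens in radiology reports.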
Frenet-Serret Frame-based Decomposition for Part Segmentation of 3D Curvilinear Structures
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-16 · DOI: 10.1109/tmi.2025.3589543
Shixuan Leslie Gu, Jason Ken Adhinarta, Mikhail Bessmeltsev, Jiancheng Yang, Yongjie Jessica Zhang, Wenjie Yin, Daniel Berger, Jeff Lichtman, Hanspeter Pfister, Donglai Wei
Abstract: Accurate segmentation of anatomical substructures within 3D curvilinear structures in medical imaging remains challenging due to their complex geometry and the scarcity of diverse, large-scale datasets for algorithm development and evaluation. In this paper, we use dendritic spine segmentation as a case study and address these challenges by introducing a novel Frenet-Serret Frame-based Decomposition, which decomposes 3D curvilinear structures into a globally smooth continuous curve that captures the overall shape, and a cylindrical primitive that encodes local geometric properties. This approach leverages Frenet-Serret frames and arc-length parameterization to preserve essential geometric features while reducing representational complexity, facilitating data-efficient learning, improved segmentation accuracy, and generalization on 3D curvilinear structures. To rigorously evaluate our method, we introduce two datasets: CurviSeg, a synthetic dataset for 3D curvilinear structure segmentation that validates our method's key properties, and DenSpineEM, a benchmark for dendritic spine segmentation comprising 4,476 manually annotated spines from 70 dendrites across three public electron microscopy datasets, covering multiple brain regions and species. Our experiments on DenSpineEM demonstrate exceptional cross-region and cross-species generalization: models trained on the mouse somatosensory cortex subset achieve 94.43% Dice, maintaining strong performance in zero-shot segmentation on both mouse visual cortex (95.61% Dice) and human frontal lobe (86.63% Dice) subsets. Moreover, we test the generalizability of our method on the IntrA dataset, where it achieves 77.08% Dice (5.29% higher than prior art) on intracranial aneurysm segmentation from entire artery models. These findings demonstrate the potential of our approach for accurately analyzing complex curvilinear structures across diverse medical imaging fields. Our dataset, code, and models are available at https://github.com/VCG/FFD4DenSpineEM to support future research.
Citations: 0
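The decomposition above rests on computing a Frenet-Serret frame (tangent T, normal N, binormal B) at each point of a discrete 3D curve. A minimal NumPy sketch using finite differences; the paper's arc-length parameterization and global smoothing are omitted, so this only illustrates the frame construction itself:

```python
import numpy as np

def frenet_frames(points):
    """Discrete Frenet-Serret frames along a polyline of 3D points.
    Returns unit tangent T, normal N, and binormal B per point."""
    def normalize(v):
        return v / np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1e-12)
    t = normalize(np.gradient(points, axis=0))   # tangent from central diffs
    n = normalize(np.gradient(t, axis=0))        # raw normal from dT
    # re-orthogonalize N against T; B = T x N completes the moving frame
    n = normalize(n - (n * t).sum(axis=1, keepdims=True) * t)
    b = np.cross(t, n)
    return t, n, b
```

Note the frame is degenerate on perfectly straight segments (dT = 0), which is one reason production pipelines smooth the centerline curve first.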
Ray-Bundle Based X-ray Representation and Reconstruction: an Alternative to Classic Tomography on Voxelized Volumes
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-16 · DOI: 10.1109/tmi.2025.3589946
Yuanwei He, Dan Ruan
Abstract: Tomography recovers an internal volume from projection measurements. Formulated as an inverse problem, classic computed tomography generally reconstructs the attenuation property on a preset Cartesian grid. While this is intuitive and convenient for digital display, such discretization leads to forward-backward projection inconsistency and a discrepancy between digital and effective resolution. We take a different perspective by considering the image volume as continuous and modeling forward projection as a hybrid continuous-to-discrete mapping from volume to detector elements, which we call "ray bundles". The ray bundle can be regarded as an unconventional heterogeneous coordinate system. Projections are modeled as line integrals along ray bundles in the continuous volume space and approximated by numerical integration using customized sample points. This modeling approach is conveniently supported by an implicit neural representation. By representing the volume as a function mapping spatial coordinates to attenuation properties and leveraging ray bundle projection, this approach reflects transmission physics and eliminates the need for explicit interpolation, intersection calculations, or matrix inversions. A novel sampling strategy is further developed to adaptively distribute points along the ray bundles, emphasizing high-gradient regions to allocate computational resources to heterogeneous structures and details. We call this system T-ReX to indicate Transmission Ray bundles for X-ray geometry. We validate T-ReX through comprehensive experiments across three scenarios: simulated full-fan projections with primary signal only, half-fan setups with simulated scatter and noise, and an in-house dataset with realistic acquisition conditions. These results highlight the effectiveness of T-ReX in sparse-view X-ray tomography.
Citations: 0
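The ray-bundle forward model above amounts to numerically integrating a continuous attenuation field along each ray and applying the Beer-Lambert law. A minimal sketch with uniform midpoint sampling; T-ReX's adaptive, gradient-aware sample placement is omitted, and `mu` stands in for the implicit neural representation of the volume:

```python
import numpy as np

def ray_integral(mu, origin, direction, t_max, n_samples=256):
    """Approximate the line integral of a continuous attenuation field `mu`
    along a ray, then apply Beer-Lambert to get transmitted intensity.
    `mu` maps an (N, 3) array of points to N attenuation values."""
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    dt = t_max / n_samples
    ts = (np.arange(n_samples) + 0.5) * dt            # midpoint rule
    pts = np.asarray(origin, float) + ts[:, None] * direction
    path = np.sum(mu(pts)) * dt                       # numeric ∫ mu ds
    return np.exp(-path)                              # Beer-Lambert law
```

Because `mu` is queried at arbitrary continuous points, no voxel interpolation or system-matrix construction is needed, which is the property the abstract highlights.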
Debiasing Medical Knowledge for Prompting Universal Model in CT Image Segmentation
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-15 · DOI: 10.1109/tmi.2025.3589399
Boxiang Yun, Shitian Zhao, Qingli Li, Alex Kot, Yan Wang
Citations: 0
Robust Polyp Detection and Diagnosis through Compositional Prompt-Guided Diffusion Models
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-15 · DOI: 10.1109/tmi.2025.3589456
Jia Yu, Yan Zhu, Peiyao Fu, Tianyi Chen, Junbo Huang, Quanlin Li, Pinghong Zhou, Zhihua Wang, Fei Wu, Shuo Wang, Xian Yang
Citations: 0
Leveraging Segment Anything Model for Source-Free Domain Adaptation via Dual Feature Guided Auto-Prompting
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-15 · DOI: 10.1109/tmi.2025.3587733
Zheang Huai, Hui Tang, Yi Li, Zhuangzhuang Chen, Xiaomeng Li
Citations: 0
Bayesian Posterior Distribution Estimation of Kinetic Parameters in Dynamic Brain PET Using Generative Deep Learning Models
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-15 · DOI: 10.1109/tmi.2025.3588859
Yanis Djebra, Xiaofeng Liu, Thibault Marin, Amal Tiss, Maeva Dhaynaut, Nicolas Guehl, Keith Johnson, Georges El Fakhri, Chao Ma, Jinsong Ouyang
Citations: 0
Attention-based Shape-Deformation Networks for Artifact-Free Geometry Reconstruction of Lumbar Spine from MR Images
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-07-15 · DOI: 10.1109/tmi.2025.3588831
Linchen Qian, Jiasong Chen, Linhai Ma, Timur Urakov, Weiyong Gu, Liang Liang
Citations: 0