IEEE Transactions on Medical Imaging: Latest Publications

Source-Free Active Domain Adaptation via Influential-Points-Guided Progressive Teacher for Medical Image Segmentation.
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-09 DOI: 10.1109/tmi.2025.3619837
Yong Chen,Xiangde Luo,Renyi Chen,Yiyue Li,Han Zhang,He Lyu,Huan Song,Kang Li
{"title":"Source-Free Active Domain Adaptation via Influential-Points-Guided Progressive Teacher for Medical Image Segmentation.","authors":"Yong Chen,Xiangde Luo,Renyi Chen,Yiyue Li,Han Zhang,He Lyu,Huan Song,Kang Li","doi":"10.1109/tmi.2025.3619837","DOIUrl":"https://doi.org/10.1109/tmi.2025.3619837","url":null,"abstract":"Domain adaptation in medical image segmentation enables pre-trained models to generalize to new target domains. Given limited annotated data and privacy constraints, Source-Free Active Domain Adaptation (SFADA) methods provide promising solutions by selecting a few target samples for labeling without accessing source samples. However, in a fully source-free setting, existing works have not fully explored how to select these target samples in a class-balanced manner and how to conduct robust model adaptation using both labeled and unlabeled samples. In this study, we discover that boundary samples with source-like semantics but sharp predictive discrepancies are beneficial for SFADA. We define these samples as the most influential points and propose a slice-wise framework using influential points learning to explore them. Specifically, we detect source-like samples to retain source-specific knowledge. For each target sample, an adaptive K-nearest neighbor algorithm based on local density is introduced to construct neighborhoods of source-like samples for knowledge transfer. We then propose a class-balanced Kullback-Leibler divergence for these neighborhoods, calculating it to obtain an influential score ranking. A diverse subset of the highest-ranked target samples (considered influential points) is manually annotated. Furthermore, we design a progressive teacher model to facilitate SFADA for medical image segmentation. Guided by influential points, this model independently generates and utilizes pseudo-labels to mitigate error accumulation. To further suppress noise, curriculum learning is incorporated into the model to progressively leverage reliable supervision signals from pseudo-labels. Experiments on multiple benchmarks demonstrate that our method outperforms state-of-the-art methods even with only 2.5% of the labeling budget.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"126 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
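
The abstract sketches a concrete selection rule: rank target samples by a class-balanced Kullback-Leibler divergence computed against a neighborhood of source-like samples, then annotate a diverse subset of the top-ranked ones. A minimal NumPy sketch of that scoring step is given below; the function name, the uniform neighborhood averaging, and the class-weight vector are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def influential_ranking(target_probs, neighbor_probs, class_weights):
    """Rank target samples by a class-weighted KL divergence between each
    sample's softmax prediction and the mean prediction of its source-like
    neighbors (illustrative sketch only)."""
    eps = 1e-8
    scores = []
    for p, neigh in zip(target_probs, neighbor_probs):
        q = neigh.mean(axis=0) + eps   # mean softmax over the neighborhood, shape (C,)
        p = p + eps                    # this sample's softmax prediction, shape (C,)
        scores.append(np.sum(class_weights * p * np.log(p / q)))
    # Highest scores = sharpest predictive discrepancy from source-like
    # neighbors; these are the candidate influential points to annotate.
    return np.argsort(scores)[::-1]
```

In such a scheme, the labeling budget would be spent on a diverse subset of the top-ranked indices, while the remaining unlabeled samples are handled by the progressive teacher described in the abstract.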
MACE Risk Prediction in ARVC Patients via CMR: A Three-Tier Spatiotemporal Transformer with Pericardial Adipose Tissue Embedding.
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-07 DOI: 10.1109/tmi.2025.3618711
Xiaoyu Wang,Jinyu Zheng,Chaolu Feng,Lian-Ming Wu
{"title":"MACE Risk Prediction in ARVC Patients via CMR: A Three-Tier Spatiotemporal Transformer with Pericardial Adipose Tissue Embedding.","authors":"Xiaoyu Wang,Jinyu Zheng,Chaolu Feng,Lian-Ming Wu","doi":"10.1109/tmi.2025.3618711","DOIUrl":"https://doi.org/10.1109/tmi.2025.3618711","url":null,"abstract":"Major adverse cardiac events (MACE) pose a high life-threatening risk to patients with arrhythmogenic right ventricular cardiomyopathy (ARVC). Cardiac magnetic resonance (CMR) has been proven to reflect the risk of MACE, but two challenges remain: limited dataset size due to the rarity of ARVC and overlapping image distributions between non-MACE and MACE patients. To address these challenges by fully leveraging the dynamic and spatial information in the limited CMR dataset, a deep learning-based risk prediction model named Three-Tier Spatiotemporal Transformer (TTST) is proposed in this paper, which utilizes three transformer-based tiers to sequentially extract and fuse features from three domains: the 2D spatial domain of each slice, the temporal dimension of slice sequence and the inter-slice depth dimension. In TTST, a pericardial adipose tissue (PAT) embedding unit is proposed to incorporate the dynamic and positional information of PAT, a key biomarker for distinguishing MACE from non-MACE based on its thickening and reduced motion, as prior knowledge to reduce reliance on large-scale datasets. Additionally, a patch voting unit is introduced to pick out local features that highlight more indicative regions in the heart, guided by the PAT embedding information. Experimental results demonstrate that TTST outperforms existing classification methods in MACE prediction (internal: AUC = 0.89, ACC = 84.02%; external: AUC = 0.87, ACC = 86.21%). Clinically, TTST achieves effective risk prediction performance either independently (C-index = 0.744) or in combination with the existing 5-year risk score model (increasing C-index from 0.686 to 0.777). Code and dataset are accessible at https://github.com/DFLAG-NEU.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"108 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145241078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
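
The three tiers named in the abstract (intra-slice spatial, per-slice temporal, inter-slice depth) can be pictured as three stacked transformer encoders applied to progressively pooled tokens. The PyTorch sketch below illustrates only that skeleton; the tensor layout, layer sizes, mean pooling, and classifier head are assumptions, and the PAT embedding and patch voting units from the abstract are omitted.

```python
import torch
import torch.nn as nn

def _tier(d_model: int, n_heads: int, n_layers: int = 2) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=n_layers)

class ThreeTierSketch(nn.Module):
    """Skeleton of a three-tier spatiotemporal encoder for cine CMR
    (illustrative sketch, not the published TTST)."""
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.spatial = _tier(d_model, n_heads)    # tier 1: patches within each 2D slice
        self.temporal = _tier(d_model, n_heads)   # tier 2: cardiac phases of each slice
        self.depth = _tier(d_model, n_heads)      # tier 3: slices along the depth axis
        self.head = nn.Linear(d_model, 2)         # MACE vs. non-MACE logits

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, slices, phases, patches, d_model) pre-embedded patch tokens
        b, s, t, p, d = tokens.shape
        x = self.spatial(tokens.reshape(b * s * t, p, d)).mean(dim=1)  # fuse patches
        x = self.temporal(x.reshape(b * s, t, d)).mean(dim=1)          # fuse phases
        x = self.depth(x.reshape(b, s, d)).mean(dim=1)                 # fuse slices
        return self.head(x)                                            # (batch, 2)
```

For example, a token tensor of shape (2, 9, 25, 64, 128), i.e. two studies with nine slices, 25 phases, and 64 patch tokens each, yields a (2, 2) logit tensor.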
Physics-guided Variational Method for Fractional Flow Reserve Based on Coronary Angiography.
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-07 DOI: 10.1109/tmi.2025.3618679
Qi Zhang,Heye Zhang,Zhifan Gao,Baihong Xie,Zhihui Zhang,Dan Deng,Changnong Peng,Xiaoqing Wang,Xiujian Liu
{"title":"Physics-guided Variational Method for Fractional Flow Reserve Based on Coronary Angiography.","authors":"Qi Zhang,Heye Zhang,Zhifan Gao,Baihong Xie,Zhihui Zhang,Dan Deng,Changnong Peng,Xiaoqing Wang,Xiujian Liu","doi":"10.1109/tmi.2025.3618679","DOIUrl":"https://doi.org/10.1109/tmi.2025.3618679","url":null,"abstract":"As a leading global cause of mortality, coronary ischemia requires accurate diagnostics for effective management. The combining coronary angiography with fractional flow reserve (FFR) offers structural and functional assessment of coronary stenosis to guide revascularization. However, traditional FFR measurements are invasive, requiring pressure wire placement. Image-based FFR estimation methods integrate vascular morphology with biomechanics but face challenges in modelling the complex fluid-structure interaction (FSI) of coronary flow and vessel walls. Therefore, we propose a physics-guided variational domain progressing method (PVDPM) for non-invasive FFR estimation through FSI system. PVDPM employs the principle of virtual work to model FSI system. This approach can improve the modelling of interdependent physical processes, enabling accurate FFR estimation based on coronary angiography-derived vascular morphology. The PVDPM demonstrates 91% accuracy in clinical datasets and offers solution for diagnosing coronary ischemia based on coronary angiography.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"27 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145240933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
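
As background for the variational framing, the textbook principle of virtual work states that, for any admissible virtual displacement, the internal virtual work of the stresses balances the virtual work of the body forces and boundary tractions; FFR itself is defined as a pressure ratio. These are standard relations, not the paper's specific PVDPM formulation.

```latex
% Principle of virtual work (weak form of equilibrium) and the definition of FFR.
\int_{\Omega} \boldsymbol{\sigma} : \delta\boldsymbol{\varepsilon}\,\mathrm{d}\Omega
  = \int_{\Omega} \mathbf{b}\cdot\delta\mathbf{u}\,\mathrm{d}\Omega
  + \int_{\Gamma_t} \bar{\mathbf{t}}\cdot\delta\mathbf{u}\,\mathrm{d}\Gamma
  \quad \forall\,\delta\mathbf{u}\ \text{admissible},
\qquad
\mathrm{FFR} = \frac{P_d}{P_a}
```

Here $\boldsymbol{\sigma}$ is the stress, $\delta\boldsymbol{\varepsilon}$ the virtual strain, $\mathbf{b}$ the body forces, $\bar{\mathbf{t}}$ the prescribed tractions on $\Gamma_t$, and $P_d$, $P_a$ the mean pressures distal to the stenosis and in the aorta under hyperemia.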
Q-space Guided Multi-Modal Translation Network for Diffusion-Weighted Image Synthesis.
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-07 DOI: 10.1109/tmi.2025.3618683
Pengli Zhu,Yingji Fu,Nanguang Chen,Anqi Qiu
{"title":"Q-space Guided Multi-Modal Translation Network for Diffusion-Weighted Image Synthesis.","authors":"Pengli Zhu,Yingji Fu,Nanguang Chen,Anqi Qiu","doi":"10.1109/tmi.2025.3618683","DOIUrl":"https://doi.org/10.1109/tmi.2025.3618683","url":null,"abstract":"Diffusion-weighted imaging (DWI) enables non-invasive characterization of tissue microstructure, yet acquiring densely sampled q-space data remains time-consuming and impractical in many clinical settings. Existing deep learning methods are typically constrained by fixed q-space sampling, limiting their adaptability to variable sampling scenarios. In this paper, we propose a Q-space Guided Multi-Modal Translation Network (Q-MMTN) for synthesizing multi-shell, high-angular resolution DWI (MS-HARDI) from flexible q-space sampling, leveraging commonly acquired structural data (e.g., T1- and T2-weighted MRI). Q-MMTN integrates the hybrid encoder and multi-modal attention fusion mechanism to effectively extract both local and global complementary information from multiple modalities. This design enhances feature representation and, together with a flexible q-space-aware embedding, enables dynamic modulation of internal features without relying on fixed sampling schemes. Additionally, we introduce a set of task-specific constraints, including adversarial, reconstruction, and anatomical consistency losses, which jointly enforce anatomical fidelity and signal realism. These constraints guide Q-MMTN to accurately capture the intrinsic and nonlinear relationships between directional DWI signals and q-space information. Extensive experiments across four lifespan datasets of children, adolescents, young and older adults demonstrate that Q-MMTN outperforms existing methods, including 1D-qDL, 2D-qDL, MESC-SD, and Q-GAN in estimating parameter maps and fiber tracts with fine-grained anatomical details. Notably, its ability to accommodate flexible q-space sampling highlights its potential as a promising toolkit for clinical and research applications. Our code is available at https://github.com/Idea89560041/Q-MMTN.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"33 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145240934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
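
The "flexible q-space-aware embedding" that dynamically modulates internal features can be pictured as a FiLM-style conditioning layer driven by the b-value and gradient direction of the target DWI volume. The sketch below illustrates that idea only; the layer sizes, the concatenated (b-value, direction) input, and the scale-and-shift form are assumptions rather than Q-MMTN's actual design (see the authors' repository linked above for the real implementation).

```python
import torch
import torch.nn as nn

class QSpaceFiLM(nn.Module):
    """FiLM-style modulation of image features by q-space coordinates
    (illustrative sketch, not the published Q-MMTN module)."""
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),   # input: (b-value, gx, gy, gz)
            nn.Linear(hidden, 2 * channels),   # output: per-channel scale and shift
        )

    def forward(self, feat: torch.Tensor, bval: torch.Tensor, bvec: torch.Tensor) -> torch.Tensor:
        # feat: (N, C, H, W) image features; bval: (N, 1); bvec: (N, 3) unit gradient direction
        q = torch.cat([bval, bvec], dim=1)
        gamma, beta = self.embed(q).chunk(2, dim=1)
        return feat * (1 + gamma[..., None, None]) + beta[..., None, None]
```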
Open-set Active Learning for Nucleus Detection From the Histopathological Images
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-06 DOI: 10.1109/tmi.2025.3617073
Jiao Tang, Yagao Yue, Wei Chu, Mingliang Wang, Yulin Wang, Peng Wan, Andrey Krylov, Wei Shao, Daoqiang Zhang
{"title":"Open-set Active Learning for Nucleus Detection From the Histopathological Images","authors":"Jiao Tang, Yagao Yue, Wei Chu, Mingliang Wang, Yulin Wang, Peng Wan, Andrey Krylov, Wei Shao, Daoqiang Zhang","doi":"10.1109/tmi.2025.3617073","DOIUrl":"https://doi.org/10.1109/tmi.2025.3617073","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"107 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145235647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beyond Static Knowledge: Dynamic Context-Aware Cross-Modal Contrastive Learning for Medical Visual Question Answering
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-06 DOI: 10.1109/tmi.2025.3617289
Rui Yang, Lijun Liu, Xupeng Feng, Wei Peng, Xiaobing Yang
{"title":"Beyond Static Knowledge: Dynamic Context-Aware Cross-Modal Contrastive Learning for Medical Visual Question Answering","authors":"Rui Yang, Lijun Liu, Xupeng Feng, Wei Peng, Xiaobing Yang","doi":"10.1109/tmi.2025.3617289","DOIUrl":"https://doi.org/10.1109/tmi.2025.3617289","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"348 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145235628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LeqMod: Adaptable Lesion-Quantification-Consistent Modulation for Deep Learning Low-Count PET Image Denoising
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-06 DOI: 10.1109/tmi.2025.3618247
Menghua Xia, Huidong Xie, Qiong Liu, Bo Zhou, Hanzhong Wang, Biao Li, Axel Rominger, Quanzheng Li, Ramsey D. Badawi, Kuangyu Shi, Georges El Fakhri, Chi Liu
{"title":"LeqMod: Adaptable Lesion-Quantification-Consistent Modulation for Deep Learning Low-Count PET Image Denoising","authors":"Menghua Xia, Huidong Xie, Qiong Liu, Bo Zhou, Hanzhong Wang, Biao Li, Axel Rominger, Quanzheng Li, Ramsey D. Badawi, Kuangyu Shi, Georges El Fakhri, Chi Liu","doi":"10.1109/tmi.2025.3618247","DOIUrl":"https://doi.org/10.1109/tmi.2025.3618247","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"106 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145235649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ProtoMTG: Prototypical Multi-Task Learning for the Generation of Multiple Stained Immunohistochemical Images
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-06 DOI: 10.1109/tmi.2025.3618446
Junjie Zhou, Andrey Krylov, Jianpeng Sheng, Qi Zhu, Wei Shao, Daoqiang Zhang
{"title":"ProtoMTG: Prototypical Multi-Task Learning for the Generation of Multiple Stained Immunohistochemical Images","authors":"Junjie Zhou, Andrey Krylov, Jianpeng Sheng, Qi Zhu, Wei Shao, Daoqiang Zhang","doi":"10.1109/tmi.2025.3618446","DOIUrl":"https://doi.org/10.1109/tmi.2025.3618446","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"21 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145235648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SMART: Self-supervised Learning for Metal Artifact Reduction in Computed Tomography Using Range Null Space Decomposition
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-06 DOI: 10.1109/tmi.2025.3616003
Tao Wang, Yanxin Cao, Zexin Lu, Yongqiang Huang, Jingfeng Lu, Fenglei Fan, Hongming Shan, Yi Zhang
{"title":"SMART: Self-supervised Learning for Metal Artifact Reduction in Computed Tomography Using Range Null Space Decomposition","authors":"Tao Wang, Yanxin Cao, Zexin Lu, Yongqiang Huang, Jingfeng Lu, Fenglei Fan, Hongming Shan, Yi Zhang","doi":"10.1109/tmi.2025.3616003","DOIUrl":"https://doi.org/10.1109/tmi.2025.3616003","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"53 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145235651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Gaze to Insight: Bridging Human Visual Attention and Vision Language Model Explanation for Weakly-Supervised Medical Image Segmentation
IF 10.6 · Tier 1, Medicine
IEEE Transactions on Medical Imaging Pub Date : 2025-10-06 DOI: 10.1109/tmi.2025.3616598
Jingkun Chen, Haoran Duan, Xiao Zhang, Boyan Gao, Vicente Grau, Jungong Han
{"title":"From Gaze to Insight: Bridging Human Visual Attention and Vision Language Model Explanation for Weakly-Supervised Medical Image Segmentation","authors":"Jingkun Chen, Haoran Duan, Xiao Zhang, Boyan Gao, Vicente Grau, Jungong Han","doi":"10.1109/tmi.2025.3616598","DOIUrl":"https://doi.org/10.1109/tmi.2025.3616598","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"27 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145235627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0