IEEE Transactions on Medical Imaging: Latest Publications

Correntropy-Based Improper Likelihood Model for Robust Electrophysiological Source Imaging
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 3076-3088 | Pub Date: 2025-04-03 | DOI: 10.1109/TMI.2025.3557528
Yuanhao Li; Badong Chen; Zhongxu Hu; Keita Suzuki; Wenjun Bai; Yasuharu Koike; Okito Yamashita
Abstract: Bayesian learning provides a unified framework for solving the electrophysiological source imaging task. From this perspective, existing source imaging algorithms adopt a Gaussian assumption for the observation noise to build the likelihood function for Bayesian inference. However, electromagnetic measurements of brain activity are usually affected by miscellaneous artifacts, leading to a potentially non-Gaussian distribution for the observation noise; the conventional Gaussian likelihood model is therefore a suboptimal choice for real-world source imaging. In this study, we address this problem by proposing a new likelihood model that is robust to non-Gaussian noise. Motivated by the robust maximum correntropy criterion, we propose a new improper distribution model for the noise assumption. This noise distribution is leveraged to structure a robust likelihood function and is integrated with hierarchical prior distributions to estimate source activities by variational inference. In particular, score matching is adopted to determine the hyperparameters of the improper likelihood model. A comprehensive evaluation compares the proposed noise assumption with the conventional Gaussian model. Simulation results with known ground truth show that the proposed method achieves more precise source reconstruction, and a real-world visual perception dataset also demonstrates the superiority of the new method. This study provides a new backbone for Bayesian source imaging, which should facilitate its application to real-world noisy brain signals.
Citations: 0
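As a rough illustration of the maximum correntropy criterion that motivates this likelihood model, the following minimal NumPy sketch contrasts a correntropy-induced loss with a squared-error loss on residuals containing non-Gaussian outliers. The kernel bandwidth and the toy residuals are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def correntropy_loss(residual, sigma=1.0):
    """Correntropy-induced loss: small for inliers, saturates for large outliers."""
    # Gaussian kernel on the residual; maximizing correntropy is equivalent
    # to minimizing 1 - kappa_sigma(e).
    kernel = np.exp(-residual**2 / (2.0 * sigma**2))
    return np.mean(1.0 - kernel)

# Toy residuals: mostly small Gaussian noise plus a few large artifacts.
rng = np.random.default_rng(0)
residual = rng.normal(scale=0.1, size=1000)
residual[:10] += 20.0  # simulated non-Gaussian outliers

print("squared loss     :", np.mean(residual**2))        # dominated by the outliers
print("correntropy loss :", correntropy_loss(residual))  # outliers saturate the kernel
```

The saturating kernel is what makes the criterion robust: a Gaussian likelihood penalizes outliers quadratically, whereas the correntropy loss bounds their influence.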
Towards Better Cephalometric Landmark Detection With Diffusion Data Generation
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 2784-2794 | Pub Date: 2025-04-03 | DOI: 10.1109/TMI.2025.3557430
Dongqian Guo; Wencheng Han; Pang Lyu; Yuxi Zhou; Jianbing Shen
Abstract: Cephalometric landmark detection is essential for orthodontic diagnostics and treatment planning. Nevertheless, the scarcity of samples in data collection and the extensive effort required for manual annotation have significantly impeded the availability of diverse datasets. This limitation has restricted the effectiveness of deep learning-based detection methods, particularly those based on large-scale vision models. To address these challenges, we developed an innovative data generation method capable of producing diverse cephalometric X-ray images along with corresponding annotations, without human intervention. Our approach first constructs new cephalometric landmark annotations using anatomical priors and then employs a diffusion-based generator to create realistic X-ray images that correspond closely to these annotations. To achieve precise control over samples with different attributes, we introduce a novel cephalometric X-ray dataset that pairs real images with detailed medical text prompts describing them; leveraging these prompts, our method controls different styles and attributes during generation. Facilitated by the large and diverse generated data, we apply large-scale vision detection models to the cephalometric landmark detection task to improve accuracy. Experimental results demonstrate that training with the generated data substantially enhances performance: compared to methods that do not use the generated data, our approach improves the Success Detection Rate (SDR) by 6.5%, attaining a notable 82.2%. All code and data are available at: https://um-lab.github.io/cepha-generation/
Citations: 0
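The reported Success Detection Rate (SDR) is the standard cephalometric metric: the fraction of landmarks whose predicted position lies within a given radius of the ground truth. A minimal sketch is below; the 2-4 mm thresholds, the landmark count, and the pixel spacing are conventional or illustrative assumptions rather than details taken from this paper.

```python
import numpy as np

def success_detection_rate(pred, gt, pixel_spacing_mm, thresholds_mm=(2.0, 2.5, 3.0, 4.0)):
    """SDR: fraction of landmarks whose prediction falls within each radius of the ground truth."""
    # pred, gt: arrays of shape (num_landmarks, 2) in pixel coordinates.
    dist_mm = np.linalg.norm((pred - gt) * pixel_spacing_mm, axis=-1)
    return {t: float(np.mean(dist_mm <= t)) for t in thresholds_mm}

# Toy example with 19 landmarks and 0.1 mm/pixel spacing (illustrative values).
rng = np.random.default_rng(0)
gt = rng.uniform(0, 2000, size=(19, 2))
pred = gt + rng.normal(scale=15.0, size=(19, 2))
print(success_detection_rate(pred, gt, pixel_spacing_mm=0.1))
```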
Balancing Multi-Target Semi-Supervised Medical Image Segmentation With Collaborative Generalist and Specialists
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 3025-3037 | Pub Date: 2025-04-03 | DOI: 10.1109/TMI.2025.3557537
You Wang; Zekun Li; Lei Qi; Qian Yu; Yinghuan Shi; Yang Gao
Abstract: Despite the promising performance achieved by current semi-supervised models in segmenting individual medical targets, many of these models suffer a notable decrease in performance when tasked with the simultaneous segmentation of multiple targets. A key factor is the imbalanced scales among targets: when multiple targets are segmented simultaneously, large targets dominate the loss, and small targets are misclassified as larger ones. To this end, we propose a novel method consisting of a Collaborative Generalist and several Specialists, termed CGS. It is centered on the idea of employing a specialist for each target class, thereby avoiding the dominance of larger targets. The generalist performs conventional multi-target segmentation, while each specialist is dedicated to distinguishing a specific target class from the remaining target classes and the background. Based on a theoretical insight, we demonstrate that CGS achieves more balanced training. Moreover, we develop cross-consistency losses to foster collaborative learning between the generalist and the specialists. Lastly, exploiting the intrinsic relation that the target class of any specialist head should belong to the remaining classes of the other heads, we introduce an inter-head error detection module to further enhance the quality of pseudo-labels. Experimental results on three popular benchmarks showcase superior performance compared to state-of-the-art methods. Our code is available at https://github.com/wangyou0804/CGS.
Citations: 0
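The specialist heads described above each solve a one-vs-rest problem: their target class against the remaining classes and the background. A minimal sketch of how such binary targets could be derived from a multi-class label map is shown below; it illustrates the idea only and is not the authors' implementation.

```python
import numpy as np

def specialist_targets(label_map, num_classes):
    """Build one-vs-rest binary targets: one map per foreground class,
    where 1 marks that class and 0 marks all other classes and background."""
    return {c: (label_map == c).astype(np.uint8) for c in range(1, num_classes)}

# Toy multi-target label map: 0 = background, 1 = large organ, 2 = small organ.
label_map = np.zeros((8, 8), dtype=np.int64)
label_map[1:7, 1:7] = 1   # large target
label_map[3, 3] = 2       # small target
targets = specialist_targets(label_map, num_classes=3)
print({c: int(t.sum()) for c, t in targets.items()})  # foreground pixel count per specialist
```

Because each specialist sees only its own class as foreground, a small structure is no longer outweighed by a large one inside the same loss term, which is the balance the generalist alone cannot provide.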
Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-View CT Reconstruction
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 3012-3024 | Pub Date: 2025-04-02 | DOI: 10.1109/TMI.2025.3557243
Hanyu Chen; Zhixiu Hao; Lin Guo; Liying Xiao
Abstract: Sparse-view Computed Tomography (CT) image reconstruction is a promising approach to reducing radiation exposure, but it inevitably leads to image degradation. Diffusion model-based approaches offer a potential solution, although they are computationally expensive and suffer from the training-sampling discrepancy. This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation (CDDM) framework, which combines low-quality image generation in latent space with high-quality image generation in pixel space, the latter incorporating data consistency and discrepancy mitigation in a one-step reconstruction process. The cascaded framework minimizes computational cost by moving some inference steps from pixel space to latent space. The discrepancy mitigation technique addresses the training-sampling gap induced by data consistency, ensuring that the data distribution stays close to the original diffusion manifold. A specialized Alternating Direction Method of Multipliers (ADMM) is employed to process image gradients in separate directions, offering a more targeted approach to regularization. Experimental results across several datasets demonstrate CDDM's superior performance in high-quality image generation with clearer boundaries compared to existing methods, highlighting the framework's computational efficiency.
Citations: 0
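Data consistency in diffusion-based CT reconstruction generally means nudging the current estimate toward agreement with the measured projections between sampling steps. The sketch below shows only that generic gradient-step form on a toy linear system; the paper's actual scheme uses a specialized ADMM over image gradients, which is not reproduced here, and the random matrix is merely a stand-in for a sparse-view projector.

```python
import numpy as np

def data_consistency_step(x, y, A, step=1e-2):
    """One gradient step on ||Ax - y||^2, the data-consistency term inserted
    between diffusion sampling steps."""
    return x - step * A.T @ (A @ x - y)

# Toy underdetermined system standing in for a sparse-view projection operator.
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 256))        # "measurements x image" operator
x_true = rng.normal(size=256)
y = A @ x_true                        # noiseless sparse measurements
x = np.zeros(256)                     # e.g., the current diffusion estimate
for _ in range(200):
    x = data_consistency_step(x, y, A)
print("projection residual norm:", np.linalg.norm(A @ x - y))
```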
Boundary-Guided Contrastive Learning for Semi-Supervised Medical Image Segmentation
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 2973-2988 | Pub Date: 2025-04-01 | DOI: 10.1109/TMI.2025.3556482
Yang Yang; Jiaxin Zhuang; Guoying Sun; Ruixuan Wang; Jingyong Su
Abstract: Semi-supervised learning methods, compared with fully supervised learning, offer significant potential to relieve clinicians of the burden of manual annotation. By leveraging unlabeled data, these methods can aid the development of medical image segmentation systems and improve efficiency. Boundary segmentation is crucial in medical image analysis, yet accurate segmentation of boundary regions is under-explored in existing methods: boundary pixels constitute only a small fraction of the overall image, resulting in suboptimal segmentation performance in these regions. In this paper, we introduce boundary-guided contrastive learning for semi-supervised medical image segmentation (BoCLIS). Specifically, we first propose conservative-to-radical teacher networks with an uncertainty-weighted aggregation strategy to generate higher-quality pseudo-labels, enabling more efficient use of unlabeled data. To further improve segmentation in boundary regions, we propose a boundary-guided patch sampling strategy that guides the framework to learn discriminative representations for these regions. Lastly, patch-based contrastive learning is proposed to simultaneously compute the (dis)similarities of the discriminative representations across intra- and inter-images. Extensive experiments on three public datasets show that our method consistently outperforms existing methods, especially in the boundary region, with DSC improvements of 20.47%, 16.75%, and 17.18%, respectively. A comprehensive analysis further demonstrates the effectiveness of our approach. Our code is released publicly at https://github.com/youngyzzZ/BoCLIS.
Citations: 0
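A boundary-guided patch sampling strategy of the kind described above can be prototyped by taking the dilation-minus-erosion band of a mask and drawing patch centers from it. The NumPy/SciPy sketch below is an illustrative stand-in under that assumption, not the BoCLIS implementation; the band width and patch count are arbitrary.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_band(mask, width=2):
    """Boundary region of a binary mask: dilation minus erosion."""
    outer = binary_dilation(mask, iterations=width)
    inner = binary_erosion(mask, iterations=width)
    return outer & ~inner

def sample_boundary_patches(mask, num_patches=8, width=2, rng=None):
    """Sample patch-center coordinates restricted to the boundary band."""
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(boundary_band(mask, width))
    idx = rng.choice(len(ys), size=min(num_patches, len(ys)), replace=False)
    return list(zip(ys[idx], xs[idx]))

# Toy square mask; in practice the mask would come from pseudo-labels or labels.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(sample_boundary_patches(mask, num_patches=4))
```

Restricting patch centers to this band concentrates the contrastive pairs on the pixels that the plain segmentation loss under-represents.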
CBCT Reconstruction Using Single X-Ray Projection With Cycle-Domain Geometry-Integrated Denoising Diffusion Probabilistic Models
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 2933-2947 | Pub Date: 2025-04-01 | DOI: 10.1109/TMI.2025.3556402
Shaoyan Pan; Junbo Peng; Yuan Gao; Shao-Yuan Lo; Tianyu Luan; Junyuan Li; Tonghe Wang; Chih-Wei Chang; Zhen Tian; Xiaofeng Yang
Abstract: In Cone Beam Computed Tomography (CBCT), acquiring X-ray projections from a sufficient range of angles is indispensable for traditional reconstruction methods to accurately recover 3D anatomical detail. However, this acquisition takes approximately one minute on the linear accelerator-mounted CBCT systems used in radiotherapy, impeding its use for ultra-fast intra-fractional motion monitoring during treatment delivery. To address this challenge, we introduce the Patient-specific Cycle-domain Geometric-integrated Denoising Diffusion Probabilistic Model (CG-DDPM). The model leverages patient-specific priors from the patient's CT/4DCT images, which are acquired for treatment planning, to reconstruct 3D CBCT from a single 2D CBCT projection at an arbitrary angle during treatment, termed single-view reconstructed CBCT (svCBCT). The CG-DDPM framework comprises a dual DDPM structure: a Projection-DDPM for synthesizing comprehensive full-view projections and a CBCT-DDPM for creating CBCT images. A key innovation is our Cycle-Domain Geometry-Integrated (CDGI) method, which incorporates a Cone Beam X-ray Geometric Transformation Module (GTM) to ensure precise, synergistic operation between the dual DDPMs, thereby enhancing reconstruction accuracy and reducing artifacts. Evaluated in a study involving 37 lung cancer patients, the method reconstructed CBCT not only from simulated X-ray projections but also from real-world data. CG-DDPM significantly outperforms existing V-shape convolutional neural networks (V-nets), Generative Adversarial Networks (GANs), and DDPM methods in reconstruction fidelity and artifact minimization, as confirmed through extensive voxel-level, structural, visual, and clinical assessments. The capability of CG-DDPM to generate high-quality CBCT from a single-view projection at any angle with a single model opens the door to ultra-fast, in-treatment volumetric imaging, which is especially beneficial for radiotherapy at motion-associated cancer sites and for image-guided interventional procedures.
Citations: 0
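Both DDPMs in this framework rest on the standard denoising-diffusion forward process, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. The sketch below shows only that textbook step on a toy array; the noise schedule and image size are illustrative, and the CG-DDPM-specific modules (GTM, cycle-domain coupling) are not represented.

```python
import numpy as np

def ddpm_forward_sample(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for the standard DDPM forward process."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)     # a common linear noise schedule
x0 = rng.normal(size=(32, 32))            # stand-in for a CBCT slice or projection
x_t, eps = ddpm_forward_sample(x0, t=500, betas=betas, rng=rng)
print(x_t.shape, float(np.std(x_t)))
```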
Semi-Supervised Knee Cartilage Segmentation With Successive Eigen Noise-Assisted Mean Teacher Knowledge Distillation
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 3051-3063 | Pub Date: 2025-04-01 | DOI: 10.1109/TMI.2025.3556870
Sheheryar Khan; Muhammad Ammar Khawer; Rizwan Qureshi; Mehmood Nawaz; Muhammad Asim; Weitian Chen; Hong Yan
Abstract: Knee cartilage segmentation for Knee Osteoarthritis (OA) diagnosis is challenging due to domain shifts from varying MRI scanning technologies. Existing cross-modality approaches often use paired order matching or style translation techniques to align features; however, these methods can sacrifice discrimination in less prominent cartilages and overlook critical higher-order correlations and semantic information. To address this issue, we propose a novel framework called Successive Eigen Noise-assisted Mean Teacher Knowledge Distillation (SEN-MTKD) for adapting 2D knee MRI images across different modalities using partially labeled data. Our approach includes the Eigen Low-rank Subspace (ELRS) module, which employs low-rank approximations to progressively generate meaningful pseudo-labels from domain-invariant feature representations. Complementing this, the Successive Eigen Noise (SEN) module introduces advanced data perturbation to enhance discrimination and diversity in small cartilage classes. Additionally, we propose a subspace-based feature distillation loss (LRBD) to manage variance and leverage rich intermediate representations within the teacher model, ensuring robust feature representation and labeling. Our framework identifies a mutual cross-domain subspace using higher-order structures and lower-energy latent features, providing reliable supervision for the student model. Extensive experiments on public and private datasets demonstrate the effectiveness of our method over state-of-the-art benchmarks. The code is available at github.com/AmmarKhawer/SEN-MTKD.
Citations: 0
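The mean-teacher component named in the title conventionally keeps the teacher as an exponential moving average (EMA) of the student's weights. A minimal PyTorch sketch of that generic update is below; the momentum value and the toy linear models are assumptions, and the paper's ELRS/SEN modules are not shown.

```python
import torch

@torch.no_grad()
def update_teacher(teacher, student, momentum=0.99):
    """Mean-teacher EMA update: teacher parameters track a moving average of the student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

# Toy models standing in for the segmentation networks.
student = torch.nn.Linear(4, 2)
teacher = torch.nn.Linear(4, 2)
teacher.load_state_dict(student.state_dict())  # start both from the same weights
# ...after each student optimizer step:
update_teacher(teacher, student, momentum=0.99)
```

The teacher is never trained by backpropagation; it only averages the student, which is what makes its pseudo-labels comparatively stable.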
Segment Together: A Versatile Paradigm for Semi-Supervised Medical Image Segmentation
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 2948-2959 | Pub Date: 2025-03-31 | DOI: 10.1109/TMI.2025.3556310
Qingjie Zeng; Yutong Xie; Zilin Lu; Mengkang Lu; Yicheng Wu; Yong Xia
Abstract: The scarcity of annotations has become a significant obstacle to training powerful deep-learning models for medical image segmentation, limiting their clinical application. To overcome this, semi-supervised learning that leverages abundant unlabeled data is highly desirable for enhancing model training. However, most existing works still focus on specific medical tasks and underestimate the potential of learning across diverse tasks and datasets. In this paper, we propose a Versatile Semi-supervised framework (VerSemi) that integrates various SSL tasks into a unified model with an extensive label space, exploiting more unlabeled data for semi-supervised medical image segmentation. Specifically, we introduce a dynamic task-prompted design to segment various targets from different datasets. This unified model is then used to identify the foreground regions in all labeled data, capturing cross-dataset semantics. In particular, we create a synthetic task with a CutMix strategy to augment foreground targets within the expanded label space. To effectively utilize unlabeled data, we introduce a consistency constraint that aligns aggregated predictions from the various tasks with those from the synthetic task, further guiding the model to accurately segment foreground regions during training. We evaluated VerSemi against seven established SSL methods on four public benchmark datasets. VerSemi consistently outperforms all competing methods, beating the second-best method by a 2.69% average Dice gain across the four datasets and setting a new state of the art for semi-supervised medical image segmentation. Code is available at https://github.com/maxwell0027/VerSemi.
Citations: 0
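The synthetic task is built with a CutMix strategy, i.e., pasting a region from one training sample into another for both the image and its label map. Below is a minimal NumPy sketch of that generic operation; the box size and toy data are illustrative, not the VerSemi configuration.

```python
import numpy as np

def cutmix_pair(img_a, lbl_a, img_b, lbl_b, rng, ratio=0.4):
    """Paste a random box from sample B into sample A, for images and label maps alike."""
    h, w = img_a.shape[-2:]
    bh, bw = int(h * ratio), int(w * ratio)
    y0 = rng.integers(0, h - bh)
    x0 = rng.integers(0, w - bw)
    img, lbl = img_a.copy(), lbl_a.copy()
    img[..., y0:y0 + bh, x0:x0 + bw] = img_b[..., y0:y0 + bh, x0:x0 + bw]
    lbl[y0:y0 + bh, x0:x0 + bw] = lbl_b[y0:y0 + bh, x0:x0 + bw]
    return img, lbl

rng = np.random.default_rng(0)
img_a, img_b = rng.normal(size=(2, 1, 64, 64))        # two toy single-channel images
lbl_a = np.zeros((64, 64), dtype=np.int64)             # background-only label map
lbl_b = np.ones((64, 64), dtype=np.int64)              # foreground-only label map
mixed_img, mixed_lbl = cutmix_pair(img_a, lbl_a, img_b, lbl_b, rng)
print(mixed_img.shape, int(mixed_lbl.sum()))           # pasted foreground pixel count
```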
PHNet: A Pulmonary Hypertension Detection Network Based on Cine Cardiac Magnetic Resonance Images Using a Hybrid Strategy of Adaptive Triplet and Binary Cross-Entropy Losses
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 2960-2972 | Pub Date: 2025-03-31 | DOI: 10.1109/TMI.2025.3555621
Xinchen Yuan; Xiaojuan Guo; Yande Luo; Xiuhong Guan; Qi Li; Zhiquan Situ; Zijie Zhou; Xin Huang; Zhaowei Rong; Yunhai Lin; Mingxi Liu; Juanni Gong; Hongyan Liu; Qi Yang; Xinchun Li; Rongli Zhang; Chengwang Lei; Shumao Pang; Guoxi Xie
Abstract: Pulmonary hypertension (PH) is a fatal pulmonary vascular disease. The standard diagnosis of PH relies heavily on an invasive technique, right heart catheterization, which delays diagnosis and can lead to serious consequences. Noninvasive approaches are crucial for detecting PH as early as possible; however, this remains a challenge, especially for mild PH. To address this issue, we present a new fully automated framework, hereinafter referred to as PHNet, for noninvasively detecting PH, and in particular for improving the detection accuracy of mild PH, based on cine cardiac magnetic resonance (CMR) images. The PHNet framework employs a hybrid strategy of adaptive triplet and binary cross-entropy losses (HSATBCL) to enhance discriminative feature learning for classifying PH and non-PH. Triplets in HSATBCL are created using a semi-hard negative mining strategy, which maintains the stability of the training process. Experiments show that the detection error rate of PHNet for mild PH is reduced by 24.5% on average compared to state-of-the-art PH detection models. The hybrid strategy effectively improves the model's ability to detect PH, allowing PHNet to achieve an average area under the curve (AUC) of 0.964, an accuracy of 0.912, and an F1-score of 0.884 on the internal validation dataset. On the external testing dataset, PHNet achieves an average AUC of 0.828. Thus, PHNet has great potential for noninvasive PH detection from cine CMR images in clinical practice. Future research could explore additional clinical information and refine feature extraction to further enhance performance.
Citations: 0
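The HSATBCL objective combines a triplet loss on embeddings with a binary cross-entropy loss on the PH/non-PH prediction. A minimal PyTorch sketch of such a hybrid objective is below; the fixed weighting and margin are illustrative assumptions, and the paper's adaptive weighting and semi-hard negative mining are not reproduced.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(anchor, positive, negative, logits, labels, margin=1.0, weight=0.5):
    """Weighted sum of a triplet margin loss on embeddings and a binary
    cross-entropy loss on the classification logits."""
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    bce = F.binary_cross_entropy_with_logits(logits, labels)
    return weight * triplet + (1.0 - weight) * bce

# Toy batch: 8 embedding triplets of dimension 128 plus PH/non-PH logits.
anchor, positive, negative = torch.randn(3, 8, 128)
logits = torch.randn(8)
labels = torch.randint(0, 2, (8,)).float()
print(hybrid_loss(anchor, positive, negative, logits, labels))
```

The triplet term pulls same-class cine CMR embeddings together and pushes different-class embeddings apart, while the BCE term supervises the final PH decision.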
MR Spatiospectral Reconstruction Integrating Subspace Modeling and Self-Supervised Spatiotemporal Denoising
IEEE Transactions on Medical Imaging | Vol. 44, No. 7, pp. 3002-3011 | Pub Date: 2025-03-28 | DOI: 10.1109/TMI.2025.3555928
Ruiyang Zhao; Zepeng Wang; Aaron Anderson; Graham Huesmann; Fan Lam
Abstract: We present a new method that integrates subspace modeling and a pre-learned spatiotemporal denoiser for reconstruction from highly noisy magnetic resonance spectroscopic imaging (MRSI) data. The subspace model imposes an explicit low-dimensional representation of the high-dimensional spatiospectral functions of interest for noise reduction, while the denoiser serves as a complementary spatiotemporal prior to constrain the subspace reconstruction. A self-supervised learning strategy is proposed to train a denoiser that can distinguish spatiotemporally correlated signals from uncorrelated noise. An iterative reconstruction formalism is developed within the Plug-and-Play (PnP)-ADMM framework to synergize the subspace constraint, the plug-in denoiser, and the spatiospectral encoding model. We evaluated the proposed method using numerical simulations and in vivo data, demonstrating improved performance over state-of-the-art subspace-based methods. We also provide a theoretical analysis of the utility of combining subspace projection and iterative denoising, in terms of both algorithm convergence and performance. Our work demonstrates the potential of integrating self-supervised denoising priors and low-dimensional representations for high-dimensional imaging problems.
Citations: 0
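A Plug-and-Play ADMM reconstruction alternates a data-fidelity proximal step, a plug-in denoiser, and a dual update. The sketch below runs that generic loop on a toy masked-sampling problem, with a Gaussian filter standing in for the learned self-supervised denoiser; the MRSI spatiospectral encoding model and the subspace constraint from the paper are not modeled.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm(y, mask, denoise, rho=1.0, iters=30):
    """Minimal Plug-and-Play ADMM: alternate a data-fidelity proximal step,
    a plug-in denoiser step, and a dual (running residual) update."""
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        z = v - u
        x = (2.0 * mask * y + rho * z) / (2.0 * mask + rho)  # closed-form prox for a masked L2 fit
        v = denoise(x + u)                                    # plug-in prior (here: Gaussian smoothing)
        u = u + x - v                                         # dual update
    return v

rng = np.random.default_rng(0)
clean = gaussian_filter(rng.normal(size=(64, 64)), sigma=3.0)  # smooth toy "image"
mask = (rng.uniform(size=clean.shape) < 0.5).astype(float)     # keep half the samples
y = mask * (clean + 0.05 * rng.normal(size=clean.shape))       # noisy, masked measurements
recon = pnp_admm(y, mask, denoise=lambda img: gaussian_filter(img, sigma=1.0))
print("masked-fit error:", float(np.mean((mask * (recon - clean)) ** 2)))
```

Swapping the `denoise` callable for a learned network is the core idea of PnP: the ADMM structure stays the same while the prior changes.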