Latest Articles in IEEE Transactions on Medical Imaging

Mutualistic Multi-Network Noisy Label Learning (MMNNLL) Method and Its Application to Transdiagnostic Classification of Bipolar Disorder and Schizophrenia.
IEEE Transactions on Medical Imaging, Pub Date: 2025-07-04, DOI: 10.1109/TMI.2025.3585880
Yuhui Du, Zheng Wang, Ju Niu, Yulong Wang, Godfrey D Pearlson, Vince D Calhoun
{"title":"Mutualistic Multi-Network Noisy Label Learning (MMNNLL) Method and Its Application to Transdiagnostic Classification of Bipolar Disorder and Schizophrenia.","authors":"Yuhui Du, Zheng Wang, Ju Niu, Yulong Wang, Godfrey D Pearlson, Vince D Calhoun","doi":"10.1109/TMI.2025.3585880","DOIUrl":"https://doi.org/10.1109/TMI.2025.3585880","url":null,"abstract":"<p><p>The subjective nature of diagnosing mental disorders complicates achieving accurate diagnoses. The complex relationship among disorders further exacerbates this issue, particularly in clinical practice where conditions like bipolar disorder (BP) and schizophrenia (SZ) can present similar clinical symptoms and cognitive impairments. To address these challenges, this paper proposes a mutualistic multi-network noisy label learning (MMNNLL) method, which aims to enhance diagnostic accuracy by leveraging neuroimaging data under the presence of potential clinical diagnosis bias or errors. MMNNLL effectively utilizes multiple deep neural networks (DNNs) for learning from data with noisy labels by maximizing the consistency among DNNs in identifying and utilizing samples with clean and noisy labels. Experimental results on public CIFAR-10 and PathMNIST datasets demonstrate the effectiveness of our method in classifying independent test data across various types and levels of label noise. Additionally, our MMNNLL method significantly outperforms state-of-the-art noisy label learning methods. When applied to brain functional connectivity data from BP and SZ patients, our method identifies two biotypes that show more pronounced group differences, and improved classification accuracy compared to the original clinical categories, using both traditional machine learning and advanced deep learning techniques. In summary, our method effectively addresses the possible inaccuracy in nosology of mental disorders and achieves transdiagnostic classification through robust noisy label learning via multi-network collaboration and competition.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144565572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Chain of Diagnosis Framework for Accurate and Explainable Radiology Report Generation.
IEEE Transactions on Medical Imaging, Pub Date: 2025-07-03, DOI: 10.1109/TMI.2025.3585765
Haibo Jin, Haoxuan Che, Sunan He, Hao Chen
{"title":"A Chain of Diagnosis Framework for Accurate and Explainable Radiology Report Generation.","authors":"Haibo Jin, Haoxuan Che, Sunan He, Hao Chen","doi":"10.1109/TMI.2025.3585765","DOIUrl":"https://doi.org/10.1109/TMI.2025.3585765","url":null,"abstract":"<p><p>Despite the progress of radiology report generation (RRG), existing works face two challenges: 1) The performances in clinical efficacy are unsatisfactory, especially for lesion attributes description; 2) the generated text lacks explainability, making it difficult for radiologists to trust the results. To address the challenges, we focus on a trustworthy RRG model, which not only generates accurate descriptions of abnormalities, but also provides basis of its predictions. To this end, we propose a framework named chain of diagnosis (CoD), which maintains a chain of diagnostic process for clinically accurate and explainable RRG. It first generates question-answer (QA) pairs via diagnostic conversation to extract key findings, then prompts a large language model with QA diagnoses for accurate generation. To enhance explainability, a diagnosis grounding module is designed to match QA diagnoses and generated sentences, where the diagnoses act as a reference. Moreover, a lesion grounding module is designed to locate abnormalities in the image, further improving the working efficiency of radiologists. To facilitate label-efficient training, we propose an omni-supervised learning strategy with clinical consistency to leverage various types of annotations from different datasets. Our efforts lead to 1) an omni-labeled RRG dataset with QA pairs and lesion boxes; 2) a evaluation tool for assessing the accuracy of reports in describing lesion location and severity; 3) extensive experiments to demonstrate the effectiveness of CoD, where it outperforms both specialist and generalist models consistently on two RRG benchmarks and shows promising explainability by accurately grounding generated sentences to QA diagnoses and images.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144562423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Joint Shape Reconstruction and Registration via a Shared Hybrid Diffeomorphic Flow.
IEEE Transactions on Medical Imaging, Pub Date: 2025-07-03, DOI: 10.1109/TMI.2025.3585560
Hengxiang Shi, Ping Wang, Shouhui Zhang, Xiuyang Zhao, Bo Yang, Caiming Zhang
{"title":"Joint Shape Reconstruction and Registration via a Shared Hybrid Diffeomorphic Flow.","authors":"Hengxiang Shi, Ping Wang, Shouhui Zhang, Xiuyang Zhao, Bo Yang, Caiming Zhang","doi":"10.1109/TMI.2025.3585560","DOIUrl":"https://doi.org/10.1109/TMI.2025.3585560","url":null,"abstract":"<p><p>Deep implicit functions (DIFs) effectively represent shapes by using a neural network to map 3D spatial coordinates to scalar values that encode the shape's geometry, but it is difficult to establish correspondences between shapes directly, limiting their use in medical image registration. The recently presented deformation field-based methods achieve implicit templates learning via template field learning with DIFs and deformation field learning, establishing shape correspondence through deformation fields. Although these approaches enable joint learning of shape representation and shape correspondence, the decoupled optimization for template field and deformation field, caused by the absence of deformation annotations lead to a relatively accurate template field but an underoptimized deformation field. In this paper, we propose a novel implicit template learning framework via a shared hybrid diffeomorphic flow (SHDF), which enables shared optimization for deformation and template, contributing to better deformations and shape representation. Specifically, we formulate the signed distance function (SDF, a type of DIFs) as a one-dimensional (1D) integral, unifying dimensions to match the form used in solving ordinary differential equation (ODE) for deformation field learning. Then, SDF in 1D integral form is integrated seamlessly into the deformation field learning. Using a recurrent learning strategy, we frame shape representations and deformations as solving different initial value problems of the same ODE. We also introduce a global smoothness regularization to handle local optima due to limited outside-of-shape data. Experiments on medical datasets show that SHDF outperforms state-of-the-art methods in shape representation and registration.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144562424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Flexible Individualized Developmental Prediction of Infant Cortical Surface Maps via Intensive Triplet Autoencoder
IEEE Transactions on Medical Imaging, Pub Date: 2025-04-21, DOI: 10.1109/TMI.2025.3562003
Xinrui Yuan;Jiale Cheng;Fenqiang Zhao;Zhengwang Wu;Li Wang;Weili Lin;Yu Zhang;Ruiyuan Liu;Gang Li
{"title":"Flexible Individualized Developmental Prediction of Infant Cortical Surface Maps via Intensive Triplet Autoencoder","authors":"Xinrui Yuan;Jiale Cheng;Fenqiang Zhao;Zhengwang Wu;Li Wang;Weili Lin;Yu Zhang;Ruiyuan Liu;Gang Li","doi":"10.1109/TMI.2025.3562003","DOIUrl":"10.1109/TMI.2025.3562003","url":null,"abstract":"Computational methods for prediction of the dynamic and complex development of the infant cerebral cortex are critical and highly desired for a better understanding of early brain development in health and disease. Although a few methods have been proposed, they are limited to predicting cortical surface maps at predefined ages and require a large amount of strictly paired longitudinal data at these ages for model training. However, longitudinal infant images are typically acquired at highly irregular and nonuniform scanning ages, thus leading to limited training data for these methods and low flexibility and accuracy. To address these issues, we propose a flexible framework for individualized prediction of cortical surface maps at arbitrary ages during infancy. The central idea is that a cortical surface map can be considered as an entangled representation of two distinct components: 1) the identity-related invariant features, which preserve the individual identity and 2) the age-related features, which reflect the developmental patterns. Our framework, called intensive triplet autoencoder, extracts the mixed latent feature and further disentangles it into two components with an attention-based module. Identity recognition and age estimation tasks are introduced as supervision for a reliable disentanglement. Thus, we can obtain the target individualized cortical property maps with disentangled identity-related information with specific age-related information. Moreover, an adversarial learning strategy is integrated to achieve a vivid and realistic prediction. Extensive experiments validate our method’s superior capability in predicting early developing cortical surface maps flexibly and precisely, in comparison with existing methods.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"3110-3122"},"PeriodicalIF":0.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143857708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DistAL: A Domain-Shift Active Learning Framework With Transferable Feature Learning for Lesion Detection
IEEE Transactions on Medical Imaging, Pub Date: 2025-04-14, DOI: 10.1109/TMI.2025.3558861
Fan Bai;Ran Wei;Xiaoyu Bai;Dakai Jin;Xianghua Ye;Le Lu;Ke Yan;Max Q.-H. Meng
{"title":"DistAL: A Domain-Shift Active Learning Framework With Transferable Feature Learning for Lesion Detection","authors":"Fan Bai;Ran Wei;Xiaoyu Bai;Dakai Jin;Xianghua Ye;Le Lu;Ke Yan;Max Q.-H. Meng","doi":"10.1109/TMI.2025.3558861","DOIUrl":"10.1109/TMI.2025.3558861","url":null,"abstract":"Deep learning has demonstrated exceptional performance in medical image analysis, but its effectiveness degrades significantly when applied to different medical centers due to domain shifts. Lesion detection, a critical task in medical imaging, is particularly impacted by this challenge due to the diversity and complexity of lesions, which can arise from different organs, diseases, imaging devices, and other factors. While collecting data and labels from target domains is a feasible solution, annotating medical images is often tedious, expensive, and requires professionals. To address this problem, we combine active learning with domain-invariant feature learning. We propose a Domain-shift Active Learning (DistAL) framework, which includes a transferable feature learning algorithm and a hybrid sample selection strategy. Feature learning incorporates contrastive-consistency training to learn discriminative and domain-invariant features. The sample selection strategy is called RUDY, which jointly considers Representativeness, Uncertainty, and DiversitY. Its goal is to select samples from the unlabeled target domain for cost-effective annotation. It first selects representative samples to deal with domain shift, as well as uncertain ones to improve class separability, and then leverages K-means++ initialization to remove redundant candidates to achieve diversity. We evaluate our method for the task of lesion detection. By selecting only 1.7% samples from the target domain to annotate, DistAL achieves comparable performance to the method trained with all target labels. It outperforms other AL methods in five experiments on eight datasets collected from different hospitals, using different imaging protocols, annotation conventions, and etiologies.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"3038-3050"},"PeriodicalIF":0.0,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143831792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PFCM: Poisson Flow Consistency Models for Low-Dose CT Image Denoising
IEEE Transactions on Medical Imaging, Pub Date: 2025-04-11, DOI: 10.1109/TMI.2025.3558019
Dennis Hein;Grant Stevens;Adam Wang;Ge Wang
{"title":"PFCM: Poisson Flow Consistency Models for Low-Dose CT Image Denoising","authors":"Dennis Hein;Grant Stevens;Adam Wang;Ge Wang","doi":"10.1109/TMI.2025.3558019","DOIUrl":"10.1109/TMI.2025.3558019","url":null,"abstract":"X-ray computed tomography (CT) is widely used for medical diagnosis and treatment planning; however, concerns about ionizing radiation exposure drive efforts to optimize image quality at lower doses. This study introduces Poisson Flow Consistency Models (PFCM), a novel family of deep generative models that combines the robustness of PFGM++ with the efficient single-step sampling of consistency models. PFCM are derived by generalizing consistency distillation to PFGM++ through a change-of-variables and an updated noise distribution. As a distilled version of PFGM++, PFCM inherit the ability to trade off robustness for rigidity via the hyperparameter <inline-formula> <tex-math>$text {D} in text {(}{0},infty text {)}$ </tex-math></inline-formula>. A fact that we exploit to adapt this novel generative model for the task of low-dose CT image denoising, via a “task-specific” sampler that “hijacks” the generative process by replacing an intermediate state with the low-dose CT image. While this “hijacking” introduces a severe mismatch—the noise characteristics of low-dose CT images are different from that of intermediate states in the Poisson flow process—we show that the inherent robustness of PFCM at small D effectively mitigates this issue. The resulting sampler achieves excellent performance in terms of LPIPS, SSIM, and PSNR on the Mayo low-dose CT dataset. By contrast, an analogous sampler based on standard consistency models is found to be significantly less robust under the same conditions, highlighting the importance of a tunable D afforded by our novel framework. To highlight generalizability, we show effective denoising of clinical images from a prototype photon-counting system reconstructed using a sharper kernel and at a range of energy levels.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"2989-3001"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143822793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multiparametric Ultrasound Breast Tumors Diagnosis Within BI-RADS Category 4 via Feature Disentanglement and Cross-Fusion
IEEE Transactions on Medical Imaging, Pub Date: 2025-04-08, DOI: 10.1109/TMI.2025.3558786
Zhikai Ruan;Canxu Song;Pengfei Xu;Chaoyu Wang;Jing Zhao;Meng Chen;Suoni Li;Qiang Su;Xiaozhen Zhuo;Yue Wu;Mingxi Wan;Diya Wang
{"title":"Multiparametric Ultrasound Breast Tumors Diagnosis Within BI-RADS Category 4 via Feature Disentanglement and Cross-Fusion","authors":"Zhikai Ruan;Canxu Song;Pengfei Xu;Chaoyu Wang;Jing Zhao;Meng Chen;Suoni Li;Qiang Su;Xiaozhen Zhuo;Yue Wu;Mingxi Wan;Diya Wang","doi":"10.1109/TMI.2025.3558786","DOIUrl":"10.1109/TMI.2025.3558786","url":null,"abstract":"BI-RADS category 4 is the diagnostic threshold between benign and malignant breast tumors and is critical in determining clinical breast cancer treatment options. However, breast tumors within BI-RADS category 4 tend to show subtle or contradictory differences between benign and malignant on B-mode images, leading to uncertainty in clinical diagnosis. Recently, many deep learning studies have realized the value of multimodal and multiparametric ultrasound in the diagnosis of breast tumors. However, due to the heterogeneity of data, how to effectively represent and fuse common and specific features from multiple sources of information is an open question, which is often overlooked by existing computer-aided diagnosis methods. To address these problems, we propose a novel framework that integrates multiparametric ultrasound information (B-mode images, Nakagami parametric images, and semantic attributes) to assist the diagnosis of BI-RADS 4 breast tumors. The framework extracts and disentangles common and specific features from B-mode and Nakagami parametric images based on a dual-branch Transformer-CNN encoder. Meanwhile, we propose a novel feature disentanglement loss to further ensure the complementarity and consistency of multiparametric features. In addition, we construct a multiparameter cross-fusion module to integrate the high-level features extracted from multiparametric images and semantic attributes. Extensive experiments on the multicenter multiparametric dataset demonstrated the superiority of the proposed framework over the state-of-the-art methods in the diagnosis for BI-RADS 4 breast tumors. The code is available at <uri>https://github.com/rzk-code/MUBTD</uri>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"3064-3075"},"PeriodicalIF":0.0,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Revealing Cortical Spreading Pathway of Neuropathological Events by Neural Optimal Mass Transport
IEEE Transactions on Medical Imaging, Pub Date: 2025-04-07, DOI: 10.1109/TMI.2025.3558691
Tingting Dan;Yanquan Huang;Yang Yang;Guorong Wu
{"title":"Revealing Cortical Spreading Pathway of Neuropathological Events by Neural Optimal Mass Transport","authors":"Tingting Dan;Yanquan Huang;Yang Yang;Guorong Wu","doi":"10.1109/TMI.2025.3558691","DOIUrl":"10.1109/TMI.2025.3558691","url":null,"abstract":"Positron Emission Tomography (PET) is essential for understanding the pathophysiological mechanisms underlying neurodegenerative diseases like Alzheimer’s disease (AD). However, existing approaches primarily focus on stereotypical patterns of pathology burden, lacking the ability to elucidate the underlying propagation mechanisms by which pathologies spread throughout the brain over time. Given that many neurodegenerative diseases exhibit prion-like pathology spread, it is essential to uncover the spot-to-spot flow field between consecutive PET snapshots. To address this, we reformulate the problem of identifying latent cortical propagation pathways of neuropathological burden within the well-established framework of optimal mass transport (OMT). In this formulation, the dynamic spreading of pathology across longitudinal PET scans is inherently constrained by the geometry of the brain cortex. To solve this problem, we introduce a variational framework that characterizes the dynamical system of pathology propagation in the brain, ultimately reducing to a Wasserstein geodesic between two density distributions of pathology accumulation. Furthermore, we hypothesize that a well-characterized mechanism of pathology propagation will enable the prediction of future pathology accumulation at the individual level, paving the way for personalized disease progression modeling. Building on the principles of physics-informed deep models, we derive the governing equation of the underlying OMT model and introduce an explainable, generative adversarial network-inspired framework. Our approach (1) parameterizes population-level OMT dynamics through a flow adjuster and (2) predicts the spreading flow in unseen subjects using a trained flow driver. We validate the accuracy of our model on publicly available datasets, demonstrating its effectiveness in forecasting future pathology accumulation. Since our deep model adheres to the second law of thermodynamics, we further explore the propagation dynamics of tau aggregates throughout the progression of AD. In contrast to traditional methods, our physics-informed approach enhances both accuracy and interpretability, demonstrating its potential to reveal novel neurobiological mechanisms driving disease progression.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"3100-3109"},"PeriodicalIF":0.0,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143797729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
High-Frequency Modulated Transformer for Multi-Contrast MRI Super-Resolution
IEEE Transactions on Medical Imaging, Pub Date: 2025-04-04, DOI: 10.1109/TMI.2025.3558164
Juncheng Li;Hanhui Yang;Qiaosi Yi;Minhua Lu;Jun Shi;Tieyong Zeng
{"title":"High-Frequency Modulated Transformer for Multi-Contrast MRI Super-Resolution","authors":"Juncheng Li;Hanhui Yang;Qiaosi Yi;Minhua Lu;Jun Shi;Tieyong Zeng","doi":"10.1109/TMI.2025.3558164","DOIUrl":"10.1109/TMI.2025.3558164","url":null,"abstract":"Accelerating the MRI acquisition process is always a key issue in modern medical practice, and great efforts have been devoted to fast MR imaging. Among them, multi-contrast MR imaging is a promising and effective solution that utilizes and combines information from different contrasts. However, existing methods may ignore the importance of the high-frequency priors among different contrasts. Moreover, they may lack an efficient method to fully utilize the information from the reference contrast. In this paper, we propose a lightweight and accurate High-frequency Modulated Transformer (HFMT) for multi-contrast MRI super-resolution. The key ideas of HFMT are high-frequency prior enhancement and its fusion with global features. Specifically, we employ an enhancement module to enhance and amplify the high-frequency priors in the reference and target modalities. In addition, we utilize the Rectangle Window Transformer Block (RWTB) to capture global information in the target contrast. Meanwhile, we propose a novel cross-attention mechanism to fuse the high-frequency enhanced features with the global features sequentially, which assists the network in recovering clear texture details from the low-resolution inputs. Extensive experiments show that our proposed method can reconstruct high-quality images with fewer parameters and faster inference time.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"3089-3099"},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10949290","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Corrections to "Multi-Label Generalized Zero Shot Chest X-Ray Classification By Combining Image-Text Information With Feature Disentanglement"
IEEE Transactions on Medical Imaging, Pub Date: 2025-04-03, DOI: 10.1109/TMI.2025.3549666
Dwarikanath Mahapatra;Antonio Jimeno Yepes;Behzad Bozorgtabar;Sudipta Roy;Zongyuan Ge;Mauricio Reyes
{"title":"Corrections to “Multi-Label Generalized Zero Shot Chest X-Ray Classification By Combining Image-Text Information With Feature Disentanglement”","authors":"Dwarikanath Mahapatra;Antonio Jimeno Yepes;Behzad Bozorgtabar;Sudipta Roy;Zongyuan Ge;Mauricio Reyes","doi":"10.1109/TMI.2025.3549666","DOIUrl":"10.1109/TMI.2025.3549666","url":null,"abstract":"Presents corrections to the paper, (Corrections to “Multi-Label Generalized Zero Shot Chest X-Ray Classification By Combining Image-Text Information With Feature Disentanglement”).","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1984-1985"},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10948537","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0