{"title":"具有对齐辅助的鲁棒深度卷积字典模型用于多对比MRI超分辨率","authors":"Pengcheng Lei;Miaomiao Zhang;Faming Fang;Guixu Zhang","doi":"10.1109/TMI.2025.3563523","DOIUrl":null,"url":null,"abstract":"Multi-contrast magnetic resonance imaging (MCMRI) super-resolution (SR) methods aims to leverage the complementary information present in multi-contrast images. However, existing methods encounter several limitations. Firstly, most current networks fail to appropriately model the correlations of multi-contrast images and lack certain interpretability. Secondly, they often overlook the negative impact of spatial misalignment between modalities in clinical practice. Thirdly, existing methods do not effectively constrain the complementary information learned between multi-contrast images, resulting in information redundancy and limiting their model performance. In this paper, we propose a robust alignment-assisted multi-contrast convolutional dictionary (A2-CDic) model to address these challenges. Specifically, we develop an observation model based on convolutional sparse coding to explicitly represent multi-contrast images as common (e.g., consistent textures) and unique (e.g., inconsistent structures and contrasts) components. Considering there are spatial misalignments in real-world multi-contrast images, we incorporate a spatial alignment module to compensate for the misaligned structures. This approach enables the proposed model to fully exploit the valuable information in the reference image while mitigating interference from inconsistent information. We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a multi-scale convolutional dictionary network. Furthermore, we utilize mutual information losses to constrain the extracted common and unique components. This constraint reduces the redundancy between the decomposed components, allowing each sub-module to learn more representative features. We evaluate our model on four publicly available datasets comprising internal, external, spatially aligned, and misaligned MCMRI images. The experimental results demonstrate that our model surpasses existing state-of-the-art MCMRI SR methods in terms of both generalization ability and overall performance. Code is available at <uri>https://github.com/lpcccc-cv/A2-CDic</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 8","pages":"3383-3396"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robust Deep Convolutional Dictionary Model With Alignment Assistance for Multi-Contrast MRI Super-Resolution\",\"authors\":\"Pengcheng Lei;Miaomiao Zhang;Faming Fang;Guixu Zhang\",\"doi\":\"10.1109/TMI.2025.3563523\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-contrast magnetic resonance imaging (MCMRI) super-resolution (SR) methods aims to leverage the complementary information present in multi-contrast images. However, existing methods encounter several limitations. Firstly, most current networks fail to appropriately model the correlations of multi-contrast images and lack certain interpretability. Secondly, they often overlook the negative impact of spatial misalignment between modalities in clinical practice. 
Thirdly, existing methods do not effectively constrain the complementary information learned between multi-contrast images, resulting in information redundancy and limiting their model performance. In this paper, we propose a robust alignment-assisted multi-contrast convolutional dictionary (A2-CDic) model to address these challenges. Specifically, we develop an observation model based on convolutional sparse coding to explicitly represent multi-contrast images as common (e.g., consistent textures) and unique (e.g., inconsistent structures and contrasts) components. Considering there are spatial misalignments in real-world multi-contrast images, we incorporate a spatial alignment module to compensate for the misaligned structures. This approach enables the proposed model to fully exploit the valuable information in the reference image while mitigating interference from inconsistent information. We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a multi-scale convolutional dictionary network. Furthermore, we utilize mutual information losses to constrain the extracted common and unique components. This constraint reduces the redundancy between the decomposed components, allowing each sub-module to learn more representative features. We evaluate our model on four publicly available datasets comprising internal, external, spatially aligned, and misaligned MCMRI images. The experimental results demonstrate that our model surpasses existing state-of-the-art MCMRI SR methods in terms of both generalization ability and overall performance. Code is available at <uri>https://github.com/lpcccc-cv/A2-CDic</uri>.\",\"PeriodicalId\":94033,\"journal\":{\"name\":\"IEEE transactions on medical imaging\",\"volume\":\"44 8\",\"pages\":\"3383-3396\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on medical imaging\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10975066/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on medical imaging","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10975066/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi-contrast magnetic resonance imaging (MCMRI) super-resolution (SR) methods aim to leverage the complementary information present in multi-contrast images. However, existing methods face several limitations. First, most current networks fail to appropriately model the correlations among multi-contrast images and offer limited interpretability. Second, they often overlook the negative impact of spatial misalignment between modalities in clinical practice. Third, existing methods do not effectively constrain the complementary information learned from multi-contrast images, resulting in information redundancy and limited model performance. In this paper, we propose a robust alignment-assisted multi-contrast convolutional dictionary (A2-CDic) model to address these challenges. Specifically, we develop an observation model based on convolutional sparse coding that explicitly represents multi-contrast images as common (e.g., consistent textures) and unique (e.g., inconsistent structures and contrasts) components. Because real-world multi-contrast images exhibit spatial misalignments, we incorporate a spatial alignment module to compensate for the misaligned structures. This design enables the proposed model to fully exploit the valuable information in the reference image while mitigating interference from inconsistent information. We employ the proximal gradient algorithm to optimize the model and unroll its iterative steps into a multi-scale convolutional dictionary network. Furthermore, we use mutual information losses to constrain the extracted common and unique components; this constraint reduces the redundancy between the decomposed components, allowing each sub-module to learn more representative features. We evaluate our model on four publicly available datasets comprising internal, external, spatially aligned, and misaligned MCMRI images. The experimental results demonstrate that our model surpasses existing state-of-the-art MCMRI SR methods in both generalization ability and overall performance. Code is available at https://github.com/lpcccc-cv/A2-CDic.
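To make the described decomposition concrete, the following is a minimal PyTorch sketch of one unrolled proximal-gradient (ISTA-style) update for a convolutional sparse coding model of the form y ≈ D_c * z_c + D_u * z_u, where the two dictionaries model the common and unique components. This is an illustrative sketch under assumed tensor shapes, step size, and threshold, not the authors' A2-CDic implementation; the multi-scale architecture, spatial alignment module, and mutual information losses are omitted.

# Illustrative sketch only: one proximal-gradient step for a convolutional
# sparse coding model decomposing an image into common and unique parts.
# Shapes, step size, and threshold are hypothetical choices.
import torch
import torch.nn.functional as F

def soft_threshold(z, theta):
    """Proximal operator of the L1 norm: sign(z) * max(|z| - theta, 0)."""
    return torch.sign(z) * torch.clamp(z.abs() - theta, min=0.0)

def prox_grad_step(y, z_c, z_u, D_c, D_u, step=0.1, theta=0.01):
    """One ISTA-style update of the sparse codes (z_c, z_u) for the model
    y ≈ conv(z_c, D_c) + conv(z_u, D_u), with fixed dictionaries."""
    # Current reconstruction and data-fidelity residual.
    recon = F.conv2d(z_c, D_c, padding=1) + F.conv2d(z_u, D_u, padding=1)
    residual = recon - y
    # Gradient of 0.5 * ||recon - y||^2 w.r.t. each code: correlate the
    # residual with the corresponding dictionary (transposed convolution).
    grad_c = F.conv_transpose2d(residual, D_c, padding=1)
    grad_u = F.conv_transpose2d(residual, D_u, padding=1)
    # Gradient descent step followed by the L1 proximal operator.
    z_c = soft_threshold(z_c - step * grad_c, theta)
    z_u = soft_threshold(z_u - step * grad_u, theta)
    return z_c, z_u

if __name__ == "__main__":
    y = torch.randn(1, 1, 64, 64)         # observed single-channel image
    D_c = torch.randn(1, 16, 3, 3) * 0.1  # common-component dictionary
    D_u = torch.randn(1, 16, 3, 3) * 0.1  # unique-component dictionary
    z_c = torch.zeros(1, 16, 64, 64)      # common sparse codes
    z_u = torch.zeros(1, 16, 64, 64)      # unique sparse codes
    for _ in range(10):                   # a few unrolled iterations
        z_c, z_u = prox_grad_step(y, z_c, z_u, D_c, D_u)

In a deep unrolling network such as the one the abstract describes, each such iteration becomes a learnable layer: the dictionaries, step size, and thresholding are replaced by trained convolutions and nonlinearities, which is what gives the architecture its model-based interpretability.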