Title: Multi-Contrast MRI Arbitrary-Scale Super-Resolution via Dynamic Implicit Network
Authors: Jinbao Wei; Gang Yang; Wei Wei; Aiping Liu; Xun Chen
DOI: 10.1109/TCSVT.2025.3556210
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 9, pp. 8973-8988 (JCR Q1, Engineering, Electrical & Electronic; IF 11.1)
Publication date: 2025-03-31 (Journal Article)
Article page: https://ieeexplore.ieee.org/document/10945918/
Code: https://github.com/weijinbao1998/DINet
Citations: 0
Abstract
Multi-contrast MRI super-resolution (SR) aims to restore a high-resolution target image from a low-resolution one, where a reference image from another contrast is used to guide the task. To better meet clinical needs, current studies mainly focus on developing arbitrary-scale MRI SR solutions rather than fixed-scale ones. However, existing arbitrary-scale SR methods still suffer from two issues: 1) they typically rely on fixed convolutions to learn multi-contrast features, struggling to handle the feature transformations under varying scales and input image pairs, which limits their representation ability; and 2) they simply combine the multi-contrast features as prior information, failing to fully exploit the complementary information in the texture-rich reference images. To address these issues, we propose a Dynamic Implicit Network (DINet) for multi-contrast MRI arbitrary-scale SR. DINet offers several key advantages. First, its scale-adaptive dynamic convolution enables dynamic feature learning conditioned on the scale factor and the input image pair, significantly enhancing the representation ability of multi-contrast features. Second, its dual-branch implicit attention enables arbitrary-scale upsampling of MR images through implicit neural representation. Third, its modulation-then-fusion block adaptively aligns and fuses multi-contrast features, effectively incorporating complementary details from the reference images into the target images. By combining these modules, DINet achieves superior MRI SR performance at arbitrary scales. Extensive experiments on three datasets demonstrate that DINet significantly outperforms state-of-the-art methods, highlighting its potential for clinical applications. The code is available at https://github.com/weijinbao1998/DINet.
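The first idea the abstract names, scale-adaptive dynamic convolution, amounts to generating convolution weights conditioned on the scale factor rather than using one fixed kernel for every scale. The following NumPy sketch illustrates that general technique only; it is not the authors' DINet implementation, and the tiny random-weight MLP, the function names, and all sizes are hypothetical stand-ins for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
IN_CH, OUT_CH, K, HIDDEN = 8, 8, 3, 16
# Stand-in "trained" MLP weights; random here purely for illustration.
W1 = rng.standard_normal((HIDDEN, 1)) * 0.1
W2 = rng.standard_normal((OUT_CH * IN_CH * K * K, HIDDEN)) * 0.1

def dynamic_kernel(scale):
    """Map a scalar scale factor to a conv kernel via a tiny MLP."""
    h = np.tanh(W1 @ np.array([[scale]]))            # (HIDDEN, 1)
    return (W2 @ h).reshape(OUT_CH, IN_CH, K, K)     # scale-conditioned kernel

def conv2d(x, kernel):
    """Naive valid convolution: x is (in_ch, H, W), kernel (out_ch, in_ch, k, k)."""
    out_ch, _, k, _ = kernel.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((out_ch, H, W))
    for o in range(out_ch):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * kernel[o])
    return out

feat = rng.standard_normal((IN_CH, 12, 12))
y2 = conv2d(feat, dynamic_kernel(2.0))   # kernel specialised for scale 2.0
y3 = conv2d(feat, dynamic_kernel(3.5))   # a different kernel for scale 3.5
```

The point of the design is that a single set of MLP weights serves every (possibly non-integer) scale factor, so the network need not store one fixed kernel per scale.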
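The second ingredient, arbitrary-scale upsampling via implicit neural representation, treats the image as a continuous function that can be queried at any coordinate, in the spirit of LIIF-style decoders: each output pixel is decoded from the nearest low-resolution feature vector plus its relative offset. The sketch below is a minimal, assumed illustration of that family of methods, not DINet's dual-branch implicit attention; the weights are random stand-ins for a trained decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
C, HIDDEN = 8, 16
# Stand-in decoder MLP weights (a trained model would learn these).
Wa = rng.standard_normal((HIDDEN, C + 2)) * 0.1
Wb = rng.standard_normal((1, HIDDEN)) * 0.1

def implicit_upsample(feat, out_h, out_w):
    """Query a continuous image at arbitrary resolution.

    feat: (C, h, w) low-resolution feature map. For each output coordinate,
    take the nearest LR feature vector plus the offset to that cell centre,
    and decode the intensity with a shared MLP.
    """
    _, h, w = feat.shape
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            y, x = (i + 0.5) / out_h, (j + 0.5) / out_w   # coords in [0, 1)
            iy, ix = min(int(y * h), h - 1), min(int(x * w), w - 1)
            dy, dx = y - (iy + 0.5) / h, x - (ix + 0.5) / w
            q = np.concatenate([feat[:, iy, ix], [dy, dx]])
            out[i, j] = (Wb @ np.tanh(Wa @ q)).item()
    return out

lr_feat = rng.standard_normal((C, 8, 8))
hr = implicit_upsample(lr_feat, 20, 20)   # non-integer scale factor of 2.5x
```

Because the decoder takes continuous coordinates, the same trained weights produce any output resolution, which is what makes a single model handle arbitrary SR scales.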
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.