Chaowei Fang, Bolin Fu, De Cheng, Lechao Cheng, Dingwen Zhang
Pattern Recognition, Volume 170, Article 112037 (published 2025-07-05)
DOI: 10.1016/j.patcog.2025.112037
Available at: https://www.sciencedirect.com/science/article/pii/S0031320325006971
High-frequency structure transformer for magnetic resonance image super-resolution
Magnetic Resonance (MR) imaging is essential in clinical diagnostics due to its ability to capture detailed soft tissue structures. However, acquiring high-resolution MR images is expensive and often leads to reduced signal-to-noise ratios. To address this, MR image super-resolution aims to generate high-resolution images from low-resolution inputs. While deep neural networks have been widely applied to MR image super-resolution, they struggle to effectively utilize the structural information critical for accurate reconstruction. This paper introduces a novel Transformer-based framework for super-resolving T2-weighted MR images, a critical MR imaging modality. The framework excels at leveraging both intra-modality and inter-modality dependencies to enhance structural information. The innovative component of the proposed architecture, termed the High-frequency Structure Transformer (HFST), operates on the gradients of input images, exploiting a high-frequency structure prior. It also employs high-resolution T1-weighted images, a more efficiently acquired MR imaging modality, to provide substantial inter-modality structure priors for processing low-resolution T2-weighted images. HFST features parallel intra-modality and inter-modality context exploration and window-based self-attention modules. Notably, both intra-head and inter-head correlations are incorporated into the self-attention modules, amplifying their relation-extraction capacity. Rigorous evaluations on three benchmarks (IXI, BraTS2018, and fastMRI) show that our method sets a new state of the art in MR image super-resolution; in particular, it improves PSNR by up to 1.28 dB under the 4× super-resolution setting. Our code is available at https://github.com/dummerchen/HFST.
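The abstract's high-frequency structure prior is computed from image gradients. As a rough illustration only (not the authors' implementation, whose details are in the linked repository), the sketch below derives a gradient-magnitude map from a toy 2D "slice" using simple forward differences; the function name `highfreq_structure` and the synthetic input are hypothetical:

```python
import numpy as np

def highfreq_structure(img: np.ndarray) -> np.ndarray:
    """Forward-difference gradient magnitude as a crude high-frequency map.

    Edges and fine texture (high-frequency structure) produce large values;
    flat regions produce zeros.
    """
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]  # horizontal differences
    gy[:-1, :] = img[1:, :] - img[:-1, :]  # vertical differences
    return np.sqrt(gx ** 2 + gy ** 2)

# Toy low-resolution "T2 slice": a vertical step edge at column 4.
lr_t2 = np.zeros((8, 8))
lr_t2[:, 4:] = 1.0

hf = highfreq_structure(lr_t2)
# The map responds only along the edge (column 3), not in flat regions.
```

In the paper's pipeline such a gradient map (from both the low-resolution T2 input and the high-resolution T1 reference) feeds the Transformer branches rather than being used directly.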
Journal overview:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.