MEFSCFFormer: Multiscale edge-aware fusion block with stereo cross Fourier transformer for stereo image super-resolution and diffusion-based image enhancement

Zihao Zhou, Yongfang Wang, Zhihui Gao

Displays, Volume 90, Article 103153. Published 2025-07-03.
DOI: 10.1016/j.displa.2025.103153
https://www.sciencedirect.com/science/article/pii/S0141938225001908
Citations: 0
Abstract
Current stereo super-resolution (SR) methods face significant challenges in effectively exploiting intra-view and inter-view features, particularly in simultaneously maintaining structural coherence and recovering high-frequency detail. To address these challenges, we propose the Multiscale Edge-Aware Fusion Block with Stereo Cross Fourier Transformer (MEFSCFFormer), which better exploits intra-view and inter-view information for feature extraction, alignment, and fusion. The proposed Multiscale Edge-Aware Fusion Block (MEFB) integrates the Multiscale Edge-Enhanced Mobile Convolution Block Module (MEMB) and the Multi-level Decentralized Mixed Pooled Spatial Attention Module (MDMPSA) to achieve efficient fusion of global and local features; it also incorporates edge information to better capture structural details that are consistent across viewpoints. To further exploit inter-view information, we design a Stereo Cross Fourier Transformer Module (SCFFormer) that adaptively selects and enhances cross-view-consistent frequency components of the stereo pair that contribute to the recovery. In addition, MEFSCFFormer can incorporate a diffusion model and fine-tune a supervised fine-tuning layer to further improve the subjective quality of SR results. This approach overcomes the shortcomings of existing stereo image processing methods in viewpoint-consistent processing and significantly improves the accuracy and detail fidelity of stereo image restoration. We have conducted extensive experiments on several public datasets (Flickr1024 [1], KITTI2012 [2], KITTI2015 [3], and Middlebury [4]). The experimental results show that our method outperforms other state-of-the-art methods on several evaluation metrics, particularly in detail accuracy and structural consistency.
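The abstract does not give the SCFFormer's internals, but the core idea it describes — adaptively selecting frequency components that agree across the two views — can be illustrated with a minimal, hypothetical sketch. All names and the gating heuristic below are assumptions for illustration (the paper's actual module is a learned transformer, not a fixed rule): the two views are transformed with a 2D FFT, a per-frequency consistency score gates the spectrum, and the fused result is transformed back.

```python
import numpy as np

def cross_view_fourier_fusion(left, right, keep_ratio=0.5):
    """Hypothetical sketch (not the paper's module): fuse a stereo feature
    pair in the Fourier domain, keeping frequency components where the two
    views agree and attenuating view-specific ones."""
    # Move both views into the frequency domain.
    L = np.fft.fft2(left)
    R = np.fft.fft2(right)
    # Per-frequency consistency: 1 when spectral magnitudes match, ~0 when
    # a component exists in only one view.
    mag_l, mag_r = np.abs(L), np.abs(R)
    consistency = 1.0 - np.abs(mag_l - mag_r) / (mag_l + mag_r + 1e-8)
    # Hard gate on the most consistent `keep_ratio` fraction of components
    # (the real SCFFormer would learn a soft, adaptive selection instead).
    gate = consistency >= np.quantile(consistency, 1.0 - keep_ratio)
    fused = 0.5 * (L + R) * np.where(gate, 1.0, 0.5)
    # Back to the spatial domain.
    return np.real(np.fft.ifft2(fused))

if __name__ == "__main__":
    view = np.random.default_rng(0).standard_normal((16, 16))
    out = cross_view_fourier_fusion(view, view)
    # Identical views are fully consistent, so the input is recovered.
    print(np.allclose(out, view, atol=1e-6))
```

When the two inputs are identical, every component is fully consistent and the function returns the input unchanged; as the views diverge, view-specific frequency content is progressively suppressed.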
About the journal
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human-factors engineers new to the field, will also occasionally be featured.