{"title":"Learning enriched channel interactions for image dehazing and beyond","authors":"Abdul Hafeez Babar , Md Shamim Hossain , Weihua Tong , Naijie Gu , Zhangjin Huang","doi":"10.1016/j.displa.2025.103212","DOIUrl":null,"url":null,"abstract":"<div><div>Atmospheric haze degrades image clarity and impairs the performance of downstream computer visions tasks. Convolutional neural networks have demonstrated strong dehazing capabilities by exploiting neighborhood spatial patterns, while Vision Transformers excel at modeling long-range dependencies. However, existing methods suffers two challenges. First, inadequate modeling of inter-channel correlations leads to wavelength-dependent color distortions. Second, insufficient preservation of frequency-specific components results in blurred textures under non-uniform haze distributions. To tackle these limitations, we present the Dual-Domain Channel Attention Network (DDCA-Net), which integrates Spatial Channel Attention (SCA) and Frequency Channel Attention (FCA). The SCA module explicitly models spatial inter-channel dependencies to correct color imbalances, and the FCA module employs a multi-branch frequency decomposition mechanism to selectively restore high-frequency details attenuated by haze. This dual domain approach enables the precise reconstruction of fine-grained structures while enhancing overall image clarity. Extensive evaluations of nine benchmark datasets demonstrate consistent improvements over state-of-the-art methods. In particular, DDCA-Net achieves PSNR gains of 0.32 dB on RESIDE Indoor, 0.88 dB on SateHaze1K, and 1.79 dB on LOL-v2. Furthermore, our model yields significant boosts in downstream object detection and segmentation, confirming its practical utility. 
The code is available at <span><span>https://github.com/hafeezbabar/DDCA-Net</span></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103212"},"PeriodicalIF":3.4000,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225002495","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Atmospheric haze degrades image clarity and impairs the performance of downstream computer vision tasks. Convolutional neural networks have demonstrated strong dehazing capabilities by exploiting neighborhood spatial patterns, while Vision Transformers excel at modeling long-range dependencies. However, existing methods suffer from two challenges. First, inadequate modeling of inter-channel correlations leads to wavelength-dependent color distortions. Second, insufficient preservation of frequency-specific components results in blurred textures under non-uniform haze distributions. To tackle these limitations, we present the Dual-Domain Channel Attention Network (DDCA-Net), which integrates Spatial Channel Attention (SCA) and Frequency Channel Attention (FCA). The SCA module explicitly models spatial inter-channel dependencies to correct color imbalances, and the FCA module employs a multi-branch frequency decomposition mechanism to selectively restore high-frequency details attenuated by haze. This dual-domain approach enables the precise reconstruction of fine-grained structures while enhancing overall image clarity. Extensive evaluations on nine benchmark datasets demonstrate consistent improvements over state-of-the-art methods. In particular, DDCA-Net achieves PSNR gains of 0.32 dB on RESIDE Indoor, 0.88 dB on SateHaze1K, and 1.79 dB on LOL-v2. Furthermore, our model yields significant boosts in downstream object detection and segmentation, confirming its practical utility. The code is available at https://github.com/hafeezbabar/DDCA-Net.
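The abstract's two ingredients have well-known generic forms: channel attention (squeeze each channel to a scalar, pass through a small MLP, and rescale the channels) and frequency decomposition (split a feature map into low- and high-frequency components in the Fourier domain). The sketch below illustrates these generic mechanisms with NumPy; it is not the authors' SCA/FCA implementation, and the random MLP weights, the reduction ratio, and the circular mask radius are purely illustrative assumptions.

```python
import numpy as np

def channel_attention(x, reduction=4, rng=None):
    """SE-style channel attention: squeeze (global average pool over
    H and W), excite (two-layer MLP + sigmoid), then rescale each
    channel of x, shaped (C, H, W). Weights are random placeholders."""
    rng = np.random.default_rng(0) if rng is None else rng
    c = x.shape[0]
    squeezed = x.mean(axis=(1, 2))                    # (C,)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)           # ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid, (C,)
    return x * weights[:, None, None]

def frequency_split(img, radius=8):
    """Split a single-channel image into low- and high-frequency parts
    using a circular low-pass mask in the shifted Fourier domain."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = img - low                                  # residual = high freq
    return low, high
```

A dual-domain module in this spirit would apply per-channel weighting after decomposing features into frequency bands, so that attenuated high-frequency channels can be amplified selectively.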
Journal overview:
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human-factors engineers new to the field, will also occasionally be featured.