For Overall Nighttime Visibility: Integrate Irregular Glow Removal With Glow-Aware Enhancement
Wanyu Wu; Wei Wang; Zheng Wang; Kui Jiang; Zhengguo Li
IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 1, pp. 823-837
DOI: 10.1109/TCSVT.2024.3465670
Published: 2024-09-23
https://ieeexplore.ieee.org/document/10685529/
Citations: 0
Abstract
Current low-light image enhancement (LLIE) techniques effectively boost luminance but leave largely unexplored another factor that harms nighttime visibility: glow effects, which take many shapes in the real world. Glow is unavoidable wherever artificial light sources abound, and direct enhancement diffuses it further. In pursuit of overall nighttime visibility enhancement, we propose a physical-model-guided framework, ONVE, built on a Nighttime Imaging Model with Near-Field Light Sources (NIM-NLS) whose APSF prior generator is validated across six categories of glow shapes. Guided by this physical model as domain knowledge, we develop an extensible Light-aware Blind Deconvolution Network (LBDN) that tackles the APSF-based blind decomposition of the observation into a direct transmission map D and a light-source map G. A Glow-guided Retinex-based progressive Enhancement module (GRE) then further optimizes the reflectance R obtained from D, reconciling the conflicting goals of glow removal and brightness boosting. Notably, ONVE is an unsupervised framework trained with a zero-shot learning strategy, and physical domain knowledge shapes its overall pipeline and network design. Empirical evaluations on multiple datasets validate the efficacy of ONVE in improving both nighttime visibility and the performance of high-level vision tasks.
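The abstract sketches a two-stage idea: first remove glow by inverting an imaging model in which light sources blurred by an APSF are added to the glow-free scene, then brighten the recovered scene with a Retinex-style step. The toy sketch below illustrates that decomposition on synthetic data; the Gaussian APSF kernel, the max-channel illumination estimate, and the gamma re-lighting are all illustrative stand-ins, not the paper's LBDN or GRE.

```python
# Illustrative sketch of the glow imaging idea from the abstract: an observed
# image O is modeled as a glow-free direct transmission D plus a light-source
# map G blurred by an APSF kernel. All components here are stand-ins
# (a Gaussian kernel for the APSF, a toy Retinex-style boost), NOT the
# paper's actual LBDN/GRE implementation.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_apsf(size=31, sigma=6.0):
    """Stand-in for an APSF prior: an isotropic Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def synthesize_glow(D, G, apsf):
    """Forward model: O = D + G (*) APSF, convolved per color channel."""
    glow = np.stack(
        [fftconvolve(G[..., c], apsf, mode="same") for c in range(G.shape[-1])],
        axis=-1,
    )
    return np.clip(D + glow, 0.0, 1.0)

def retinex_boost(D, eps=1e-3, gamma=0.45):
    """Toy Retinex-style step: estimate illumination L as the max channel,
    recover reflectance R = D / L, then re-light with a gamma curve."""
    L = np.maximum(D.max(axis=-1, keepdims=True), eps)
    R = D / L
    return np.clip(R * L**gamma, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.uniform(0.0, 0.2, size=(128, 128, 3))  # dark, glow-free scene
    G = np.zeros_like(D)
    G[60:68, 60:68] = 1.0                          # one bright near-field source
    O = synthesize_glow(D, G, gaussian_apsf())
    # The paper's LBDN would estimate D from O; this toy uses the known D.
    enhanced = retinex_boost(D)
    print(O.shape, float(enhanced.min()), float(enhanced.max()))
```

Splitting the stages this way mirrors the abstract's motivation: brightening O directly would amplify the G (*) APSF glow term along with the scene, whereas boosting only the recovered D avoids further glow diffusion.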
About the Journal
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.