{"title":"FoggyDepth: Leveraging Channel Frequency and Non-Local Features for Depth Estimation in Fog","authors":"Mengjiao Shen;Liuyi Wang;Xianyou Zhong;Chengju Liu;Qijun Chen","doi":"10.1109/TCSVT.2024.3509696","DOIUrl":null,"url":null,"abstract":"With the development of computer vision technology, unsupervised depth estimation from single images has experienced significant advancements under normal weather conditions, demonstrating highly promising results. Nevertheless, its efficacy in estimating depth under less-than-optimal weather conditions, particularly those characterized by fog, continues to pose substantial challenges. In this paper, we propose FoggyDepth that is designed to utilize channel-wise Fourier transform to remedy this limitation. Specifically, to relieve the problem of photometric consistency assumption not holding in foggy scenes within the unsupervised framework, we employ a channel-dimension Fourier transform to obtain channel global statistical information, thereby enhancing the discriminative ability of global representation. Meanwhile, we generate a series of foggy scene samples corresponding to normal training samples and use them for self-supervised training to guide the model to accurately recover depth in foggy conditions. In addition, to further improve the model performance, we utilize a non-local network to capture long-range spatial dependencies in depth estimation. Comprehensive evaluations conducted on the Oxford RobotCar, nuScenes, and Driving Stereo datasets substantiate the precision and reliability of our proposed method. Through a meticulous comparison with existing leading-edge algorithms in depth estimation, our approach demonstrates superior performance, both qualitatively and quantitatively.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 4","pages":"3589-3602"},"PeriodicalIF":11.1000,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10772035/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
With the development of computer vision technology, unsupervised depth estimation from single images has advanced significantly under normal weather conditions, demonstrating highly promising results. Nevertheless, estimating depth under less-than-optimal weather conditions, particularly fog, remains a substantial challenge. In this paper, we propose FoggyDepth, which leverages a channel-wise Fourier transform to remedy this limitation. Specifically, to mitigate the failure of the photometric consistency assumption in foggy scenes within the unsupervised framework, we employ a channel-dimension Fourier transform to obtain global channel statistics, thereby enhancing the discriminative ability of the global representation. Meanwhile, we generate a series of foggy scene samples corresponding to the normal training samples and use them for self-supervised training, guiding the model to recover depth accurately in foggy conditions. In addition, to further improve performance, we utilize a non-local network to capture long-range spatial dependencies for depth estimation. Comprehensive evaluations on the Oxford RobotCar, nuScenes, and Driving Stereo datasets substantiate the precision and reliability of the proposed method. Through a careful comparison with existing leading-edge depth estimation algorithms, our approach demonstrates superior performance, both qualitatively and quantitatively.
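The abstract only sketches the channel-dimension Fourier transform at a high level. The snippet below is a minimal PyTorch illustration of the general idea (an FFT over the channel axis of globally pooled features, used as an attention signal); the module name, shapes, and the attention formulation are assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class ChannelFourierBlock(nn.Module):
        # Hypothetical module: re-weights channels using the magnitude spectrum
        # of a channel-dimension FFT over globally pooled features.
        def __init__(self, channels: int):
            super().__init__()
            # rfft over C channels yields C // 2 + 1 complex frequency bins.
            self.freq_mlp = nn.Sequential(
                nn.Linear(channels // 2 + 1, channels),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (B, C, H, W) feature map from the depth encoder.
            stats = x.mean(dim=(2, 3))               # global channel statistics, (B, C)
            spectrum = torch.fft.rfft(stats, dim=1)  # channel-dimension FFT, (B, C//2 + 1)
            weights = self.freq_mlp(spectrum.abs())  # frequency-derived channel weights, (B, C)
            return x * weights.unsqueeze(-1).unsqueeze(-1)

    feat = torch.randn(2, 64, 32, 32)
    print(ChannelFourierBlock(64)(feat).shape)       # torch.Size([2, 64, 32, 32])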
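The abstract does not say how the foggy training samples are rendered. A common choice for this kind of augmentation is the atmospheric scattering model I = J·t + A·(1 − t) with t = exp(−β·d); the sketch below shows that standard model only, with the scattering coefficient β and airlight A picked arbitrarily for illustration.

    import torch

    def add_fog(clear: torch.Tensor, depth: torch.Tensor,
                beta: float = 0.05, airlight: float = 0.8) -> torch.Tensor:
        # clear: (B, 3, H, W) image in [0, 1]; depth: (B, 1, H, W) in metres.
        # beta and airlight are illustrative values, not the paper's settings.
        transmission = torch.exp(-beta * depth)      # t = exp(-beta * d), (B, 1, H, W)
        return clear * transmission + airlight * (1.0 - transmission)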
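For the long-range spatial dependencies, a typical building block is the non-local block of Wang et al. (2018); the embedded-Gaussian variant is sketched below for reference. The abstract does not specify which variant FoggyDepth adopts.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NonLocalBlock(nn.Module):
        # Embedded-Gaussian non-local block (Wang et al., 2018), shown for reference.
        def __init__(self, channels: int, reduction: int = 2):
            super().__init__()
            inner = channels // reduction
            self.theta = nn.Conv2d(channels, inner, kernel_size=1)
            self.phi = nn.Conv2d(channels, inner, kernel_size=1)
            self.g = nn.Conv2d(channels, inner, kernel_size=1)
            self.out = nn.Conv2d(inner, channels, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
            k = self.phi(x).flatten(2)                     # (B, C', HW)
            v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
            attn = F.softmax(q @ k, dim=-1)                # pairwise position affinities
            y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
            return x + self.out(y)                         # residual connection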
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.