Xinjie Xiao, Zhiwei Li, Wenle Ning, Nannan Zhang, Xudong Teng
{"title":"LFR-Net:用于单幅图像去雾的局部特征残差网络","authors":"Xinjie Xiao, Zhiwei Li, Wenle Ning, Nannan Zhang, Xudong Teng","doi":"10.1016/j.array.2023.100278","DOIUrl":null,"url":null,"abstract":"<div><p>Previous learning-based methods only employ clear images to train the dehazing network, but some useful information such as hazy images, media transmission maps and atmospheric light values in datasets were ignored. Here, we propose a local feature residual network (LFR-Net) for single image dehazing, which is aimed at improving the quality of dehazed images by fully utilizing the information in the training dataset. The backbone of LFR-Net is structured by feature residual block and adaptive feature fusion model. Furthermore, to preserve more details for the recovered clear images, we design an adaptive feature fusion model that adaptively fuses shallow and deep features at each scale of the encoder and decoder. Extended experiments show that the performance of our LFR-Net outperforms the state-of-the-art methods.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LFR-Net: Local feature residual network for single image dehazing\",\"authors\":\"Xinjie Xiao, Zhiwei Li, Wenle Ning, Nannan Zhang, Xudong Teng\",\"doi\":\"10.1016/j.array.2023.100278\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Previous learning-based methods only employ clear images to train the dehazing network, but some useful information such as hazy images, media transmission maps and atmospheric light values in datasets were ignored. Here, we propose a local feature residual network (LFR-Net) for single image dehazing, which is aimed at improving the quality of dehazed images by fully utilizing the information in the training dataset. The backbone of LFR-Net is structured by feature residual block and adaptive feature fusion model. Furthermore, to preserve more details for the recovered clear images, we design an adaptive feature fusion model that adaptively fuses shallow and deep features at each scale of the encoder and decoder. Extended experiments show that the performance of our LFR-Net outperforms the state-of-the-art methods.</p></div>\",\"PeriodicalId\":8417,\"journal\":{\"name\":\"Array\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2023-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Array\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2590005623000036\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590005623000036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
LFR-Net: Local feature residual network for single image dehazing
Previous learning-based methods employ only clear images to train the dehazing network, ignoring useful information in the datasets such as hazy images, medium transmission maps, and atmospheric light values. Here, we propose a local feature residual network (LFR-Net) for single image dehazing, which aims to improve the quality of dehazed images by fully utilizing the information in the training dataset. The backbone of LFR-Net is built from feature residual blocks and an adaptive feature fusion model. Furthermore, to preserve more detail in the recovered clear images, we design the adaptive feature fusion model to adaptively fuse shallow and deep features at each scale of the encoder and decoder. Extensive experiments show that LFR-Net outperforms state-of-the-art methods.
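To make the fusion idea concrete, below is a minimal sketch of an adaptive fusion block that blends a shallow (encoder) feature with a deep (decoder) feature at one scale via a learned channel-wise gate. The gating scheme, layer choices, and names are illustrative assumptions for this sketch, not the authors' published LFR-Net implementation.

```python
# Sketch only: a channel-wise gated fusion of shallow and deep features,
# assuming both inputs share the same spatial size and channel count.
import torch
import torch.nn as nn


class AdaptiveFeatureFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-channel mixing weight from the concatenated features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # w in [0, 1] decides, channel by channel, how much of the
        # detail-rich shallow feature to keep versus the deep feature.
        w = self.gate(torch.cat([shallow, deep], dim=1))
        return w * shallow + (1.0 - w) * deep


if __name__ == "__main__":
    fuse = AdaptiveFeatureFusion(channels=64)
    shallow = torch.randn(1, 64, 128, 128)  # encoder feature at one scale
    deep = torch.randn(1, 64, 128, 128)     # decoder feature at the same scale
    print(fuse(shallow, deep).shape)        # torch.Size([1, 64, 128, 128])
```

In a full encoder-decoder network, one such block would be applied at each scale so that fine details from the encoder can be reinjected into the decoder, which is the role the abstract attributes to the adaptive feature fusion model.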