Recurrent Multi-connection Fusion Network for Single Image Deraining

Yuetong Liu, Rui Zhang, Yunfeng Zhang, Yang Ning, Xunxiang Yao, Huijian Han

2022 IEEE International Conference on Visual Communications and Image Processing (VCIP), December 13, 2022. DOI: 10.1109/VCIP56404.2022.10008893
Single image deraining is an important problem in many computer vision tasks because rain streaks can severely degrade image quality. Recently, single image deraining methods based on deep convolutional neural networks (CNNs) have achieved encouraging performance. However, most of these methods are built by stacking convolutional layers, which makes it difficult to learn abstract feature representations effectively and captures only limited features within local regions. In this paper, we propose a recurrent multi-connection fusion network (RMCFN) to remove rain streaks from single images. Specifically, RMCFN employs two key components and multiple connections to fully exploit and transfer features. First, we use a multi-scale fusion memory block (MFMB) to extract multi-scale features and capture long-range dependencies, which helps feed useful information to later stages. Moreover, to efficiently preserve informative features during transmission, we fuse features from different levels and adopt multiple connections to exploit information both within and between stages. Finally, we develop a dual attention enhancement block (DAEB) to identify valuable channel and spatial components and pass on only the useful features. Extensive experiments verify the superiority of our method over state-of-the-art approaches in both visual quality and quantitative results.
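To make the dual attention idea concrete, below is a minimal PyTorch sketch of a DAEB-style block that applies channel attention followed by spatial attention, so that only informative features are passed on. This is an illustration, not the paper's implementation: the class name DualAttentionBlock, the reduction ratio, the 7x7 spatial convolution, and the channel-then-spatial ordering are all assumptions, since the abstract does not specify the block's internals.

# Minimal sketch of a dual attention block in the spirit of DAEB.
# All design choices below (reduction ratio, kernel sizes, ordering)
# are assumptions for illustration only.
import torch
import torch.nn as nn


class DualAttentionBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, score each channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: score each location from pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)           # reweight channels
        avg_map = x.mean(dim=1, keepdim=True)  # per-pixel mean over channels
        max_map = x.amax(dim=1, keepdim=True)  # per-pixel max over channels
        x = x * self.spatial_gate(torch.cat([avg_map, max_map], dim=1))
        return x


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)          # dummy feature map
    print(DualAttentionBlock(32)(feats).shape)  # torch.Size([1, 32, 64, 64])

Gating the feature map twice in this way suppresses uninformative channels first and then uninformative spatial locations, which matches the abstract's stated goal of passing on only the useful features to later stages.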