{"title":"MBDFNet:基于多尺度双向动态特征融合网络的高效图像去模糊","authors":"Zhongbao Yang, Jin-shan Pan","doi":"10.1109/icme55011.2023.00096","DOIUrl":null,"url":null,"abstract":"Existing deep image deblurring models achieve favorable results with growing model complexity. However, these models cannot be applied to those low-power devices with resource constraints (e.g., smart phones) as these models usually have lots of network parameters and require computational costs. To overcome this problem, we develop a multi-scale bidirectional dynamic feature fusion network (MBDFNet), a lightweight deep deblurring model, for efficient image deblurring. The proposed MBDFNet progressively restores multi-scale latent clear images from blurry input based on a multi-scale framework. To better utilize the features from coarse scales, we propose a bidirectional gated dynamic fusion module so that the most useful information of the features from coarse scales are kept to facilitate the estimations in the finer scales. We solve the proposed MBDFNet in an end-to-end manner and show that it has fewer network parameters and lower FLOPs values, where the FLOPs value of the proposed MBDFNet is at least 6× smaller than the state-of-the-art methods. 
Both quantitative and qualitative evaluations show that the proposed MBDFNet achieves favorable performance in terms of model complexity while having competitive performance in terms of accuracy against state-of-the-art methods.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MBDFNet: Multi-scale Bidirectional Dynamic Feature Fusion Network for Efficient Image Deblurring\",\"authors\":\"Zhongbao Yang, Jin-shan Pan\",\"doi\":\"10.1109/icme55011.2023.00096\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Existing deep image deblurring models achieve favorable results with growing model complexity. However, these models cannot be applied to those low-power devices with resource constraints (e.g., smart phones) as these models usually have lots of network parameters and require computational costs. To overcome this problem, we develop a multi-scale bidirectional dynamic feature fusion network (MBDFNet), a lightweight deep deblurring model, for efficient image deblurring. The proposed MBDFNet progressively restores multi-scale latent clear images from blurry input based on a multi-scale framework. To better utilize the features from coarse scales, we propose a bidirectional gated dynamic fusion module so that the most useful information of the features from coarse scales are kept to facilitate the estimations in the finer scales. We solve the proposed MBDFNet in an end-to-end manner and show that it has fewer network parameters and lower FLOPs values, where the FLOPs value of the proposed MBDFNet is at least 6× smaller than the state-of-the-art methods. 
Both quantitative and qualitative evaluations show that the proposed MBDFNet achieves favorable performance in terms of model complexity while having competitive performance in terms of accuracy against state-of-the-art methods.\",\"PeriodicalId\":321830,\"journal\":{\"name\":\"2023 IEEE International Conference on Multimedia and Expo (ICME)\",\"volume\":\"39 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Multimedia and Expo (ICME)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/icme55011.2023.00096\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Multimedia and Expo (ICME)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icme55011.2023.00096","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Existing deep image deblurring models achieve favorable results as model complexity grows. However, these models cannot be deployed on resource-constrained, low-power devices (e.g., smartphones) because they usually have large numbers of network parameters and incur high computational costs. To overcome this problem, we develop a multi-scale bidirectional dynamic feature fusion network (MBDFNet), a lightweight deep model for efficient image deblurring. The proposed MBDFNet progressively restores multi-scale latent clear images from the blurry input within a multi-scale framework. To better utilize features from coarse scales, we propose a bidirectional gated dynamic fusion module so that the most useful information from the coarse-scale features is kept to facilitate estimation at the finer scales. We train the proposed MBDFNet in an end-to-end manner and show that it has fewer network parameters and lower FLOPs, where the FLOPs of MBDFNet are at least 6× lower than those of state-of-the-art methods. Both quantitative and qualitative evaluations show that the proposed MBDFNet achieves favorable performance in terms of model complexity while remaining competitive in accuracy with state-of-the-art methods.
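The abstract does not give the internals of the bidirectional gated dynamic fusion module, but the core idea it describes (a learned gate deciding how much coarse-scale information to keep when fusing with finer-scale features) can be sketched roughly as follows. This is a hypothetical simplification for illustration only: the function name `gated_fusion`, the per-pixel 1×1-convolution-as-matmul, the tensor shapes, and the single-direction fusion are all assumptions, not the paper's actual architecture, which uses convolutions and bidirectional passes across scales.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(coarse_feat, fine_feat, w, b):
    """Hypothetical sketch of gated feature fusion: a gate computed from
    both inputs blends upsampled coarse-scale features with fine-scale
    features, per pixel and per channel."""
    # Concatenate along channels: H x W x 2C
    stacked = np.concatenate([coarse_feat, fine_feat], axis=-1)
    # Model a 1x1 convolution as a per-pixel matmul: (2C -> C), then squash
    gate = sigmoid(stacked @ w + b)  # H x W x C, values in (0, 1)
    # Convex combination: gate controls how much coarse info is kept
    return gate * coarse_feat + (1.0 - gate) * fine_feat

# Toy shapes for demonstration
H, W, C = 8, 8, 4
rng = np.random.default_rng(0)
coarse = rng.standard_normal((H, W, C))  # e.g. upsampled coarse-scale features
fine = rng.standard_normal((H, W, C))    # current (finer) scale features
w = rng.standard_normal((2 * C, C)) * 0.1
b = np.zeros(C)
fused = gated_fusion(coarse, fine, w, b)
print(fused.shape)  # (8, 8, 4)
```

Because the gate lies in (0, 1), each fused value is a per-element convex combination of the two inputs, so the module can suppress uninformative coarse-scale features without discarding them outright.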