{"title":"基于细心盗梦模块的卷积神经网络图像增强","authors":"Purbaditya Bhattacharya, U. Zölzer","doi":"10.1109/DICTA51227.2020.9363375","DOIUrl":null,"url":null,"abstract":"In this paper, the problem of image enhancement in the form of single image superresolution and compression artifact reduction is addressed by proposing a convolutional neural network with an inception module containing an attention mechanism. The inception module in the network contains parallel branches of convolution layers employing filters with multiple receptive fields via filter dilation. The aggregated multi-scale features are subsequently filtered via an attention mechanism which allows learned feature map weighting in order to reduce redundancy. Additionally, a long skip attentive connection is also introduced in order to process the penultimate feature layer of the proposed network. Addition of the aforementioned attention modules introduce a dynamic nature to the model which would otherwise consist of static trained filters. Experiments are performed with multiple network depths and architectures in order to assess their contributions. The final network is evaluated on the benchmark datasets for the aforementioned tasks, and the results indicate a very good performance.","PeriodicalId":348164,"journal":{"name":"2020 Digital Image Computing: Techniques and Applications (DICTA)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Attentive Inception Module based Convolutional Neural Network for Image Enhancement\",\"authors\":\"Purbaditya Bhattacharya, U. Zölzer\",\"doi\":\"10.1109/DICTA51227.2020.9363375\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, the problem of image enhancement in the form of single image superresolution and compression artifact reduction is addressed by proposing a convolutional neural network with an inception module containing an attention mechanism. The inception module in the network contains parallel branches of convolution layers employing filters with multiple receptive fields via filter dilation. The aggregated multi-scale features are subsequently filtered via an attention mechanism which allows learned feature map weighting in order to reduce redundancy. Additionally, a long skip attentive connection is also introduced in order to process the penultimate feature layer of the proposed network. Addition of the aforementioned attention modules introduce a dynamic nature to the model which would otherwise consist of static trained filters. Experiments are performed with multiple network depths and architectures in order to assess their contributions. 
The final network is evaluated on the benchmark datasets for the aforementioned tasks, and the results indicate a very good performance.\",\"PeriodicalId\":348164,\"journal\":{\"name\":\"2020 Digital Image Computing: Techniques and Applications (DICTA)\",\"volume\":\"19 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 Digital Image Computing: Techniques and Applications (DICTA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DICTA51227.2020.9363375\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA51227.2020.9363375","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Attentive Inception Module based Convolutional Neural Network for Image Enhancement
In this paper, the problem of image enhancement, in the form of single image super-resolution and compression artifact reduction, is addressed by proposing a convolutional neural network with an inception module containing an attention mechanism. The inception module consists of parallel branches of convolution layers whose filters cover multiple receptive fields via filter dilation. The aggregated multi-scale features are subsequently filtered by an attention mechanism that applies learned feature map weighting in order to reduce redundancy. Additionally, a long skip attentive connection is introduced to process the penultimate feature layer of the proposed network. The addition of these attention modules introduces a dynamic nature to a model that would otherwise consist of static trained filters. Experiments are performed with multiple network depths and architectures in order to assess their contributions. The final network is evaluated on benchmark datasets for the aforementioned tasks, and the results indicate very good performance.
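The abstract does not give implementation details, but its core idea (parallel dilated convolution branches whose fused output is re-weighted by a learned channel attention) can be sketched in code. The following is a minimal PyTorch sketch under our own assumptions, not the authors' implementation: the module name AttentiveInceptionModule, the dilation rates (1, 2, 3), the channel width, the residual connection, and the squeeze-and-excitation style of the attention branch are all illustrative choices.

```python
# Hypothetical sketch of an attentive inception module in the spirit of the
# abstract; all names and hyperparameters below are our assumptions.
import torch
import torch.nn as nn


class AttentiveInceptionModule(nn.Module):
    def __init__(self, channels=64, dilations=(1, 2, 3), reduction=4):
        super().__init__()
        # Parallel branches: the same 3x3 kernel at different dilation rates
        # yields multiple receptive fields over the same input ("inception").
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        fused = channels * len(dilations)
        # Channel attention (squeeze-and-excitation style, assumed here):
        # global pooling plus a small bottleneck produces one learned weight
        # per aggregated feature map, which suppresses redundant maps.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused // reduction, fused, kernel_size=1),
            nn.Sigmoid(),
        )
        # 1x1 projection back to the input width so modules can be stacked.
        self.project = nn.Conv2d(fused, channels, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        weighted = multi_scale * self.attention(multi_scale)
        return self.project(weighted) + x  # residual connection (assumed)


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)
    print(AttentiveInceptionModule()(x).shape)  # torch.Size([1, 64, 48, 48])
```

The attention gating makes the effective filtering input-dependent, which is the "dynamic nature" the abstract contrasts with static trained filters. The long skip attentive connection mentioned in the abstract could, under the same assumptions, reuse such a gating block on the penultimate feature layer before the final reconstruction, though the paper's exact formulation may differ.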