Attentive Inception Module based Convolutional Neural Network for Image Enhancement

Purbaditya Bhattacharya, U. Zölzer
{"title":"基于细心盗梦模块的卷积神经网络图像增强","authors":"Purbaditya Bhattacharya, U. Zölzer","doi":"10.1109/DICTA51227.2020.9363375","DOIUrl":null,"url":null,"abstract":"In this paper, the problem of image enhancement in the form of single image superresolution and compression artifact reduction is addressed by proposing a convolutional neural network with an inception module containing an attention mechanism. The inception module in the network contains parallel branches of convolution layers employing filters with multiple receptive fields via filter dilation. The aggregated multi-scale features are subsequently filtered via an attention mechanism which allows learned feature map weighting in order to reduce redundancy. Additionally, a long skip attentive connection is also introduced in order to process the penultimate feature layer of the proposed network. Addition of the aforementioned attention modules introduce a dynamic nature to the model which would otherwise consist of static trained filters. Experiments are performed with multiple network depths and architectures in order to assess their contributions. The final network is evaluated on the benchmark datasets for the aforementioned tasks, and the results indicate a very good performance.","PeriodicalId":348164,"journal":{"name":"2020 Digital Image Computing: Techniques and Applications (DICTA)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Attentive Inception Module based Convolutional Neural Network for Image Enhancement\",\"authors\":\"Purbaditya Bhattacharya, U. Zölzer\",\"doi\":\"10.1109/DICTA51227.2020.9363375\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, the problem of image enhancement in the form of single image superresolution and compression artifact reduction is addressed by proposing a convolutional neural network with an inception module containing an attention mechanism. The inception module in the network contains parallel branches of convolution layers employing filters with multiple receptive fields via filter dilation. The aggregated multi-scale features are subsequently filtered via an attention mechanism which allows learned feature map weighting in order to reduce redundancy. Additionally, a long skip attentive connection is also introduced in order to process the penultimate feature layer of the proposed network. Addition of the aforementioned attention modules introduce a dynamic nature to the model which would otherwise consist of static trained filters. Experiments are performed with multiple network depths and architectures in order to assess their contributions. 
The final network is evaluated on the benchmark datasets for the aforementioned tasks, and the results indicate a very good performance.\",\"PeriodicalId\":348164,\"journal\":{\"name\":\"2020 Digital Image Computing: Techniques and Applications (DICTA)\",\"volume\":\"19 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 Digital Image Computing: Techniques and Applications (DICTA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DICTA51227.2020.9363375\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA51227.2020.9363375","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

In this paper, the problem of image enhancement in the form of single image superresolution and compression artifact reduction is addressed by proposing a convolutional neural network with an inception module containing an attention mechanism. The inception module in the network contains parallel branches of convolution layers employing filters with multiple receptive fields via filter dilation. The aggregated multi-scale features are subsequently filtered via an attention mechanism which allows learned feature map weighting in order to reduce redundancy. Additionally, a long skip attentive connection is also introduced in order to process the penultimate feature layer of the proposed network. Addition of the aforementioned attention modules introduces a dynamic nature to the model, which would otherwise consist of static trained filters. Experiments are performed with multiple network depths and architectures in order to assess their contributions. The final network is evaluated on the benchmark datasets for the aforementioned tasks, and the results indicate a very good performance.
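
The sketch below illustrates, in PyTorch, the kind of attentive inception module the abstract describes: parallel convolution branches with different dilation rates produce features at multiple receptive fields, the aggregated multi-scale features are fused, and a learned channel attention re-weights the resulting feature maps. The branch count, kernel sizes, dilation rates, residual addition, and the squeeze-and-excitation style of the attention gate are assumptions for illustration only; the paper's exact design may differ.

```python
# Minimal sketch of an attentive inception module (assumptions noted below).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Global-pooling based channel attention (assumed SE-style gating)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global spatial average
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                            # re-weight feature maps


class AttentiveInceptionModule(nn.Module):
    """Parallel dilated-conv branches followed by channel attention."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d),          # same spatial size, larger receptive field
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Fuse the concatenated multi-scale features back to `channels`.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        self.attention = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(multi_scale)
        return self.attention(fused) + x                   # residual connection (assumed)


if __name__ == "__main__":
    block = AttentiveInceptionModule(channels=64)
    y = block(torch.randn(1, 64, 48, 48))
    print(y.shape)  # torch.Size([1, 64, 48, 48])
```

The long skip attentive connection mentioned in the abstract could, under the same assumptions, reuse a gate of this kind on the penultimate feature layer before the reconstruction stage, so that the skip path also performs learned feature map weighting rather than a plain identity copy.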