Robust Low-rank Deep Feature Recovery in CNNs: Toward Low Information Loss and Fast Convergence

Jiahuan Ren, Zhao Zhang, Jicong Fan, Haijun Zhang, Mingliang Xu, Meng Wang
{"title":"cnn的鲁棒低秩深度特征恢复:面向低信息丢失和快速收敛","authors":"Jiahuan Ren, Zhao Zhang, Jicong Fan, Haijun Zhang, Mingliang Xu, Meng Wang","doi":"10.1109/ICDM51629.2021.00064","DOIUrl":null,"url":null,"abstract":"Convolutional Neural Networks (CNNs)-guided deep models have obtained impressive performance for image representation, however the representation ability may still be restricted and usually needs more epochs to make the model converge in training, due to the useful information loss during the convolution and pooling operations. We therefore propose a general feature recovery layer, termed Low-rank Deep Feature Recovery (LDFR), to enhance the representation ability of the convolutional features by seamlessly integrating low-rank recovery into CNNs, which can be easily extended to all existing CNNs-based models. To be specific, to recover the lost information during the convolution operation, LDFR aims at learning the low-rank projections to embed the feature maps onto a low-rank subspace based on some selected informative convolutional feature maps. Such low-rank recovery operation can ensure all convolutional feature maps to be reconstructed easily to recover the underlying subspace with more useful and detailed information discovered, e.g., the strokes of characters or the texture information of clothes can be enhanced after LDFR. In addition, to make the learnt low-rank subspaces more powerful for feature recovery, we design a fusion strategy to obtain a generalized subspace, which averages over all learnt sub-spaces in each LDFR layer, so that the convolutional feature maps in test phase can be recovered effectively via low-rank embedding. Extensive results on several image datasets show that existing CNNs-based models equipped with our LDFR layer can obtain better performance.","PeriodicalId":320970,"journal":{"name":"2021 IEEE International Conference on Data Mining (ICDM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Robust Low-rank Deep Feature Recovery in CNNs: Toward Low Information Loss and Fast Convergence\",\"authors\":\"Jiahuan Ren, Zhao Zhang, Jicong Fan, Haijun Zhang, Mingliang Xu, Meng Wang\",\"doi\":\"10.1109/ICDM51629.2021.00064\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional Neural Networks (CNNs)-guided deep models have obtained impressive performance for image representation, however the representation ability may still be restricted and usually needs more epochs to make the model converge in training, due to the useful information loss during the convolution and pooling operations. We therefore propose a general feature recovery layer, termed Low-rank Deep Feature Recovery (LDFR), to enhance the representation ability of the convolutional features by seamlessly integrating low-rank recovery into CNNs, which can be easily extended to all existing CNNs-based models. To be specific, to recover the lost information during the convolution operation, LDFR aims at learning the low-rank projections to embed the feature maps onto a low-rank subspace based on some selected informative convolutional feature maps. 
Such low-rank recovery operation can ensure all convolutional feature maps to be reconstructed easily to recover the underlying subspace with more useful and detailed information discovered, e.g., the strokes of characters or the texture information of clothes can be enhanced after LDFR. In addition, to make the learnt low-rank subspaces more powerful for feature recovery, we design a fusion strategy to obtain a generalized subspace, which averages over all learnt sub-spaces in each LDFR layer, so that the convolutional feature maps in test phase can be recovered effectively via low-rank embedding. Extensive results on several image datasets show that existing CNNs-based models equipped with our LDFR layer can obtain better performance.\",\"PeriodicalId\":320970,\"journal\":{\"name\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDM51629.2021.00064\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Data Mining (ICDM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDM51629.2021.00064","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

Convolutional Neural Network (CNN)-guided deep models have achieved impressive performance in image representation; however, their representation ability may still be restricted, and training usually needs more epochs to converge, because useful information is lost during the convolution and pooling operations. We therefore propose a general feature recovery layer, termed Low-rank Deep Feature Recovery (LDFR), which enhances the representation ability of convolutional features by seamlessly integrating low-rank recovery into CNNs and can easily be extended to all existing CNN-based models. Specifically, to recover the information lost during the convolution operation, LDFR learns low-rank projections that embed the feature maps onto a low-rank subspace derived from a set of selected informative convolutional feature maps. This low-rank recovery operation ensures that all convolutional feature maps can be reconstructed easily to recover the underlying subspace, uncovering more useful and detailed information; for example, the strokes of characters or the texture of clothes are enhanced after LDFR. In addition, to make the learnt low-rank subspaces more powerful for feature recovery, we design a fusion strategy that obtains a generalized subspace by averaging over all subspaces learnt in each LDFR layer, so that the convolutional feature maps in the test phase can be recovered effectively via low-rank embedding. Extensive results on several image datasets show that existing CNN-based models equipped with our LDFR layer obtain better performance.
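To make the embed-and-reconstruct idea behind LDFR concrete, below is a minimal PyTorch-style sketch of such a feature-recovery layer. It is an illustrative assumption rather than the authors' implementation: the channel-selection heuristic (per-channel energy), the fixed rank, the truncated-SVD basis, and the running-average fusion of subspaces are all placeholders for the learned projections and fusion strategy described in the abstract.

```python
# Minimal illustrative sketch of a low-rank feature-recovery layer in the spirit
# of LDFR (assumes PyTorch). Not the paper's exact objective or optimization.
import torch
import torch.nn as nn


class LowRankFeatureRecovery(nn.Module):
    def __init__(self, rank: int = 8, num_selected: int = 16, momentum: float = 0.9):
        super().__init__()
        self.rank = rank                  # assumed subspace dimension
        self.num_selected = num_selected  # assumed number of "informative" maps
        self.momentum = momentum          # assumed fusion (running-average) rate
        # Fused (averaged) subspace basis used at test time; filled during training.
        self.register_buffer("fused_basis", torch.empty(0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Flatten each channel's feature map into a vector: (B, C, H*W).
        flat = x.reshape(b, c, h * w)

        if self.training:
            # Select the most "informative" channels by energy (assumed heuristic).
            energy = flat.pow(2).sum(dim=2)                      # (B, C)
            idx = energy.topk(self.num_selected, dim=1).indices  # (B, k)
            selected = torch.gather(
                flat, 1, idx.unsqueeze(-1).expand(-1, -1, h * w)
            )                                                    # (B, k, H*W)

            # Low-rank basis of the selected maps via truncated SVD; the rows of
            # Vh span the pixel-space subspace of the informative maps.
            _, _, vh = torch.linalg.svd(selected, full_matrices=False)
            basis = vh[:, : self.rank, :]                        # (B, r, H*W)

            # Fuse subspaces with a running average (assumed stand-in for the
            # paper's generalized-subspace fusion strategy).
            batch_basis = basis.mean(dim=0)
            if self.fused_basis.numel() == 0:
                self.fused_basis = batch_basis.detach()
            else:
                self.fused_basis = (
                    self.momentum * self.fused_basis
                    + (1 - self.momentum) * batch_basis.detach()
                )
        else:
            # Test phase: reuse the fused subspace (assumes training populated it).
            basis = self.fused_basis.unsqueeze(0).expand(b, -1, -1)

        # Recover every feature map by projecting onto the low-rank subspace and
        # reconstructing: X_rec = X V^T V (V has orthonormal rows from the SVD).
        recovered = torch.bmm(torch.bmm(flat, basis.transpose(1, 2)), basis)
        return recovered.reshape(b, c, h, w)
```

In use, such a layer would sit after a convolutional block (e.g., `x = ldfr(conv(x))`); at test time it projects incoming feature maps onto the fused, averaged subspace, mirroring the generalized-subspace recovery described in the abstract.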