{"title":"cnn的鲁棒低秩深度特征恢复:面向低信息丢失和快速收敛","authors":"Jiahuan Ren, Zhao Zhang, Jicong Fan, Haijun Zhang, Mingliang Xu, Meng Wang","doi":"10.1109/ICDM51629.2021.00064","DOIUrl":null,"url":null,"abstract":"Convolutional Neural Networks (CNNs)-guided deep models have obtained impressive performance for image representation, however the representation ability may still be restricted and usually needs more epochs to make the model converge in training, due to the useful information loss during the convolution and pooling operations. We therefore propose a general feature recovery layer, termed Low-rank Deep Feature Recovery (LDFR), to enhance the representation ability of the convolutional features by seamlessly integrating low-rank recovery into CNNs, which can be easily extended to all existing CNNs-based models. To be specific, to recover the lost information during the convolution operation, LDFR aims at learning the low-rank projections to embed the feature maps onto a low-rank subspace based on some selected informative convolutional feature maps. Such low-rank recovery operation can ensure all convolutional feature maps to be reconstructed easily to recover the underlying subspace with more useful and detailed information discovered, e.g., the strokes of characters or the texture information of clothes can be enhanced after LDFR. In addition, to make the learnt low-rank subspaces more powerful for feature recovery, we design a fusion strategy to obtain a generalized subspace, which averages over all learnt sub-spaces in each LDFR layer, so that the convolutional feature maps in test phase can be recovered effectively via low-rank embedding. Extensive results on several image datasets show that existing CNNs-based models equipped with our LDFR layer can obtain better performance.","PeriodicalId":320970,"journal":{"name":"2021 IEEE International Conference on Data Mining (ICDM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Robust Low-rank Deep Feature Recovery in CNNs: Toward Low Information Loss and Fast Convergence\",\"authors\":\"Jiahuan Ren, Zhao Zhang, Jicong Fan, Haijun Zhang, Mingliang Xu, Meng Wang\",\"doi\":\"10.1109/ICDM51629.2021.00064\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional Neural Networks (CNNs)-guided deep models have obtained impressive performance for image representation, however the representation ability may still be restricted and usually needs more epochs to make the model converge in training, due to the useful information loss during the convolution and pooling operations. We therefore propose a general feature recovery layer, termed Low-rank Deep Feature Recovery (LDFR), to enhance the representation ability of the convolutional features by seamlessly integrating low-rank recovery into CNNs, which can be easily extended to all existing CNNs-based models. To be specific, to recover the lost information during the convolution operation, LDFR aims at learning the low-rank projections to embed the feature maps onto a low-rank subspace based on some selected informative convolutional feature maps. 
Such low-rank recovery operation can ensure all convolutional feature maps to be reconstructed easily to recover the underlying subspace with more useful and detailed information discovered, e.g., the strokes of characters or the texture information of clothes can be enhanced after LDFR. In addition, to make the learnt low-rank subspaces more powerful for feature recovery, we design a fusion strategy to obtain a generalized subspace, which averages over all learnt sub-spaces in each LDFR layer, so that the convolutional feature maps in test phase can be recovered effectively via low-rank embedding. Extensive results on several image datasets show that existing CNNs-based models equipped with our LDFR layer can obtain better performance.\",\"PeriodicalId\":320970,\"journal\":{\"name\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDM51629.2021.00064\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Data Mining (ICDM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDM51629.2021.00064","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robust Low-rank Deep Feature Recovery in CNNs: Toward Low Information Loss and Fast Convergence
Convolutional Neural Network (CNN)-based deep models have achieved impressive performance for image representation; however, their representation ability can still be limited, and training usually needs more epochs to converge, because useful information is lost during the convolution and pooling operations. We therefore propose a general feature recovery layer, termed Low-rank Deep Feature Recovery (LDFR), which enhances the representation ability of convolutional features by seamlessly integrating low-rank recovery into CNNs and can be easily added to existing CNN-based models. Specifically, to recover the information lost during convolution, LDFR learns low-rank projections that embed the feature maps onto a low-rank subspace spanned by a set of selected, informative convolutional feature maps. This low-rank recovery operation allows all convolutional feature maps to be reconstructed easily, recovering the underlying subspace and revealing more useful and detailed information; for example, the strokes of characters or the texture of clothes are enhanced after LDFR. In addition, to make the learnt low-rank subspaces more powerful for feature recovery, we design a fusion strategy that obtains a generalized subspace by averaging all subspaces learnt in each LDFR layer, so that convolutional feature maps in the test phase can be recovered effectively via low-rank embedding. Extensive results on several image datasets show that existing CNN-based models equipped with the LDFR layer achieve better performance.
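
The abstract describes the layer only at a high level, so the following is a minimal, hypothetical sketch of the general idea: select informative feature maps, span a low-rank subspace with them, project every map onto that subspace, and keep a running average of the per-batch subspaces for use at test time. The energy-based selection, the truncated-SVD basis, the momentum-style averaging, and the residual combination with the original maps are all assumptions made for illustration; the paper's actual LDFR layer learns its projections with an explicit low-rank recovery objective.

```python
import torch
import torch.nn as nn


class LDFRSketch(nn.Module):
    """Illustrative low-rank feature-recovery layer (not the published LDFR).

    Assumptions: feature maps are selected by activation energy, the low-rank
    subspace comes from a truncated SVD of the selected maps, and the
    test-time ("fused") subspace is a running average of per-batch bases.
    """

    def __init__(self, rank=8, num_selected=16, momentum=0.9):
        super().__init__()
        self.rank = rank                # dimension of the low-rank subspace
        self.num_selected = num_selected  # number of informative maps kept
        self.momentum = momentum        # weight for the running subspace average
        self.fused_basis = None         # plain attribute for simplicity

    def forward(self, x):
        # x: (B, C, H, W) convolutional feature maps
        b, c, h, w = x.shape
        flat = x.reshape(b, c, h * w)   # each map becomes a row vector

        if self.training:
            # Pick the most "informative" maps per sample (largest energy here).
            energy = flat.pow(2).sum(dim=-1)                        # (B, C)
            idx = energy.topk(self.num_selected, dim=1).indices     # (B, k)
            sel = torch.gather(
                flat, 1, idx.unsqueeze(-1).expand(-1, -1, h * w)
            )                                                       # (B, k, H*W)

            # Top-r right singular vectors span the low-rank subspace.
            _, _, vh = torch.linalg.svd(sel, full_matrices=False)
            basis = vh[:, : self.rank, :]                           # (B, r, H*W)

            # Fuse subspaces across batches for test-time recovery.
            batch_basis = basis.mean(dim=0).detach()                # (r, H*W)
            if self.fused_basis is None:
                self.fused_basis = batch_basis
            else:
                self.fused_basis = (
                    self.momentum * self.fused_basis
                    + (1.0 - self.momentum) * batch_basis
                )
        else:
            # Assumes at least one training batch has populated fused_basis.
            basis = self.fused_basis.unsqueeze(0).expand(b, -1, -1)

        # Project every map onto the subspace and add the recovered detail
        # back to the original maps (residual combination is an assumption).
        coeffs = torch.matmul(flat, basis.transpose(1, 2))          # (B, C, r)
        recovered = torch.matmul(coeffs, basis)                     # (B, C, H*W)
        return (flat + recovered).reshape(b, c, h, w)
```

In a CNN, such a layer would sit after a convolution or pooling block, taking its feature maps as input and returning recovered maps of the same shape, so it can be dropped into an existing backbone without changing the rest of the architecture.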