Avoiding Optimal Mean Robust and Sparse BPCA with L1-norm Maximization

Ganyi Tang, Lili Fan, Jianguo Shi, Jingjing Tan, Guifu Lu
Journal: 網際網路技術學刊 (Journal of Internet Technology)
DOI: 10.53106/160792642023072404016
Published: 2023-07-01 (Journal Article)
Citations: 0

Abstract

Recently, robust PCA/2DPCA methods have achieved great success in subspace learning. Nevertheless, most of them rest on the premise that the sample mean is zero, i.e., that the optimal mean is the center of the data. In fact, this premise holds only for PCA/2DPCA methods based on the L2-norm: for robust PCA/2DPCA with the L1-norm, the optimal mean deviates from zero, and estimating it is computationally expensive. Another shortcoming of PCA/2DPCA is that it pays little attention to the intrinsic correlation within parts of the data. To tackle these issues, we introduce the maximization of the variance of sample differences into block principal component analysis (BPCA) and propose a robust method that extracts orthonormal features while avoiding optimal-mean estimation. BPCA, a generalization of PCA and 2DPCA, is a general framework specialized in part-based learning and can make better use of partial correlations. However, projection features without sparsity not only incur higher computational complexity but also lack semantic interpretability. We therefore integrate an elastic net into the avoiding-optimal-mean robust BPCA to impose sparsity constraints on the projection features. Both BPCA methods (non-sparse and sparse) make the zero-mean assumption unnecessary and avoid optimal-mean calculation. Experiments on benchmark databases demonstrate the usefulness of the two proposed methods for image classification and image reconstruction.
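The key idea behind avoiding the optimal mean, working with pairwise sample differences so that no data mean ever needs to be estimated or subtracted, can be sketched with a simple L1-norm fixed-point iteration. This is an illustrative sketch in the general style of L1-norm PCA, not the authors' BPCA algorithm; the function and variable names are our own:

```python
import numpy as np

def l1_direction_from_differences(X, n_iter=100):
    """One projection direction maximizing the L1 dispersion of pairwise
    sample differences; no data mean is ever estimated or subtracted."""
    n, _ = X.shape
    i, j = np.triu_indices(n, k=1)
    D = X[i] - X[j]                      # all pairwise differences, mean-free
    # Initialize with the largest-norm difference (a common L1-PCA heuristic).
    w = D[np.argmax(np.linalg.norm(D, axis=1))].astype(float)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(D @ w)
        s[s == 0] = 1.0                  # break ties away from zero
        w_new = D.T @ s                  # fixed-point ascent step
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

# Toy data: true variation along axis 0; the last sample is an outlier.
X = np.array([[5.0, 0.2], [-4.0, -0.1], [3.0, 0.0],
              [-2.0, 0.3], [0.5, 6.0]])
w = l1_direction_from_differences(X)
print(w)   # direction dominated by axis 0 despite the outlier
```

Because the L1 objective sums absolute projections rather than squared ones, the outlying sample cannot dominate the direction the way it would under an L2 variance criterion, and the difference matrix D is mean-free by construction. The paper's methods extend this idea to blocks of the data matrix with orthonormality and elastic-net sparsity constraints.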