Robust principal component analysis via joint ℓ2,1-norms minimization

Shuangyan Yi, Zhenyu He, Wei-Guo Yang
DOI: 10.1109/SPAC.2017.8304243
Venue: 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)
Published: 2017-12-01
Citations: 4

Abstract

Principal Component Analysis (PCA) is the most widely used unsupervised subspace learning method, and many variants of it have been developed. Despite the many proposed PCA-like methods, it remains unclear which features are better or worse for principal components, especially when the data contains outliers. To this end, we propose Robust Principal Component Analysis via joint ℓ2,1-norms minimization, which provides new insight into two crucial issues of PCA: feature selection and robustness to outliers. Unlike other PCA-like methods, the proposed method selects effective features for reconstruction by using an ℓ2,1-norm regularization term. More specifically, we first use an ℓ2,1-norm-based transformation matrix to select effective features that characterize key components (e.g., the eyes and the nose in a face image), and then use an orthogonal transformation matrix to recover the original data from the selected representation. In this way, the key components can be recovered well from the effective features selected by the learned transformation matrix. On the other hand, we also impose the ℓ2,1-norm on the loss term so that clean samples are selected to recover same-class samples contaminated by outliers. A simple yet effective optimization algorithm is proposed to solve the resulting problem. Experiments on six datasets demonstrate the effectiveness of the proposed method.
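The abstract does not state the paper's exact objective function, but its key ingredient, the ℓ2,1-norm (the sum of the Euclidean norms of a matrix's rows) and the row-reweighting scheme commonly used to minimize it, can be sketched as follows. The regression objective, the function names, and the parameter `lam` below are illustrative assumptions, not the authors' formulation; the sketch only shows why an ℓ2,1 penalty induces row sparsity and hence feature selection.

```python
import numpy as np

def l21_norm(M):
    """l2,1-norm: sum of the Euclidean (l2) norms of the rows of M."""
    return np.linalg.norm(M, axis=1).sum()

def l21_regularized_regression(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """Minimize ||X W - Y||_F^2 + lam * ||W||_{2,1} by iterative reweighting.

    NOTE: this is a generic sketch of l2,1 minimization, not the paper's
    algorithm. The l2,1 penalty drives entire rows of W toward zero; a
    (near-)zero row means the corresponding input feature is dropped,
    which is how the norm performs feature selection.
    """
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    D = np.eye(d)  # current row weights
    for _ in range(n_iter):
        # Closed-form update for W given the fixed row weights D.
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        # Reweight: rows whose norm is already small get penalized
        # more heavily in the next round, pushing them to zero.
        row_norms = np.linalg.norm(W, axis=1)
        D = np.diag(1.0 / (2.0 * np.maximum(row_norms, eps)))
    return W
```

Applied to data where only a few input features actually generate the targets, the learned `W` has large rows for the informative features and near-zero rows for the rest; the same mechanism, applied to a loss term row-wise over samples instead of features, downweights outlier samples, which mirrors the abstract's second use of the ℓ2,1-norm.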