Data Subdivision Based Dual-Weighted Robust Principal Component Analysis

Sisi Wang;Feiping Nie;Zheng Wang;Rong Wang;Xuelong Li
DOI: 10.1109/TIP.2025.3536197
Journal: IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society), vol. 34, pp. 1271-1284
Published: 2025-02-10 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10878426/
Citations: 0

Abstract

Principal Component Analysis (PCA) is one of the most important unsupervised dimensionality reduction algorithms, which uses squared $\ell _{2}$ -norm to make it very sensitive to outliers. Those improved versions based on $\ell _{1}$ -norm alleviate this problem, but they have other shortcomings, such as optimization difficulties or lack of rotational invariance, etc. Besides, existing methods only vaguely divide normal samples and outliers to improve robustness, but they ignore the fact that normal samples can be more specifically divided into positive samples and hard samples, which should have different contributions to the model because positive samples are more conducive to learning the projection matrix. In this paper, we propose a novel Data Subdivision Based Dual-Weighted Robust Principal Component Analysis, namely DRPCA, which firstly designs a mark vector to distinguish normal samples and outliers, and directly removes outliers according to mark weights. Moreover, we further divide normal samples into positive samples and hard samples by self-constrained weights, and place them in relative positions, so that the weight of positive samples is larger than hard samples, which makes the projection matrix more accurate. Additionally, the optimal mean is employed to obtain a more accurate data center. To solve this problem, we carefully design an effective iterative algorithm and analyze its convergence. Experiments on real-world and RGB large-scale datasets demonstrate the superiority of our method in dimensionality reduction and anomaly detection.
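The dual-weighting idea in the abstract — mark and drop outliers, then weight the remaining "positive" (low-residual) samples more heavily than "hard" (high-residual) ones while re-fitting a weighted PCA around a weighted mean — can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' exact DRPCA algorithm: the weighting scheme, the fixed outlier fraction, and all function and parameter names here are assumptions chosen for clarity.

```python
import numpy as np

def dual_weighted_pca_sketch(X, k, n_iter=20, outlier_frac=0.1):
    """Illustrative sketch (NOT the paper's exact DRPCA): alternate between
    (1) marking the largest-residual samples as outliers and dropping them,
    (2) weighting remaining low-residual ("positive") samples above
        high-residual ("hard") ones, and
    (3) re-fitting weighted PCA around the weighted ("optimal") mean."""
    n, d = X.shape
    # Initialize the projection with plain PCA on mean-centered data.
    W = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][:k].T  # (d, k)
    n_out = int(outlier_frac * n)
    mean = X.mean(0)
    mark = np.ones(n, dtype=bool)
    for _ in range(n_iter):
        # Reconstruction residual of each sample under the current subspace.
        R = X - mean
        resid = np.linalg.norm(R - R @ W @ W.T, axis=1)
        # Mark vector: flag the n_out largest residuals as outliers.
        mark = np.ones(n, dtype=bool)
        if n_out > 0:
            mark[np.argsort(resid)[-n_out:]] = False
        # Inverse-residual weights stand in for the paper's self-constrained
        # weights: positive samples get larger weight than hard samples.
        w = np.zeros(n)
        w[mark] = 1.0 / (resid[mark] + 1e-8)
        w /= w.sum()
        mean = w @ X                       # weighted ("optimal") mean
        Xc = (X - mean) * np.sqrt(w)[:, None]
        # Top-k right singular vectors of the weighted, centered data.
        W = np.linalg.svd(Xc, full_matrices=False)[2][:k].T
    return W, mean, mark
```

The alternating structure (re-mark outliers, re-weight, re-fit) mirrors the iterative algorithm the abstract mentions; the paper's actual objective, weight constraints, and convergence analysis differ from this simplification.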