{"title":"Tuning-Free Online Robust Principal Component Analysis through Implicit Regularization","authors":"Lakshmi Jayalal, Gokularam Muthukrishnan, Sheetal Kalyani","doi":"arxiv-2409.07275","DOIUrl":null,"url":null,"abstract":"The performance of the standard Online Robust Principal Component Analysis\n(OR-PCA) technique depends on the optimum tuning of the explicit regularizers\nand this tuning is dataset sensitive. We aim to remove the dependency on these\ntuning parameters by using implicit regularization. We propose to use the\nimplicit regularization effect of various modified gradient descents to make\nOR-PCA tuning free. Our method incorporates three different versions of\nmodified gradient descent that separately but naturally encourage sparsity and\nlow-rank structures in the data. The proposed method performs comparable or\nbetter than the tuned OR-PCA for both simulated and real-world datasets.\nTuning-free ORPCA makes it more scalable for large datasets since we do not\nrequire dataset-dependent parameter tuning.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":"203 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07275","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The performance of the standard Online Robust Principal Component Analysis (OR-PCA) technique depends on the optimal tuning of its explicit regularizers, and this tuning is dataset-sensitive. We aim to remove the dependence on these tuning parameters by using implicit regularization. We propose to exploit the implicit regularization effect of several modified gradient descent schemes to make OR-PCA tuning-free. Our method incorporates three versions of modified gradient descent that separately but naturally encourage sparsity and low-rank structure in the data. The proposed method performs comparably to or better than tuned OR-PCA on both simulated and real-world datasets. Tuning-free OR-PCA is also more scalable to large datasets, since no dataset-dependent parameter tuning is required.
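
As a rough illustration of the underlying idea (not the authors' algorithm, which operates online on streaming samples and combines three modified gradient-descent variants), the NumPy sketch below decomposes a batch matrix M = L + S with no explicit nuclear-norm or l1 penalty. Two well-known implicit-regularization devices stand in for explicit regularizers: plain gradient descent on a factored form U V^T with small initialization is biased toward low-rank solutions, and on a difference-of-squares form u⊙u − v⊙v it is biased toward sparse solutions. All dimensions, the step size, and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observation: low-rank L_true plus sparse outliers S_true.
n, m, r = 50, 40, 3
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
mask = rng.random((n, m)) < 0.05
S_true = np.where(mask, 5.0 * rng.standard_normal((n, m)), 0.0)
M = L_true + S_true

# Over-parameterization with small initialization:
#   L_hat = U @ V.T     -> gradient descent is implicitly biased low-rank
#   S_hat = u*u - v*v   -> gradient descent is implicitly biased sparse
# The loss is plain least squares; no explicit regularizer appears.
alpha = 1e-3                 # small init scale drives the implicit bias
k = min(n, m)                # rank is deliberately left unrestricted
U = alpha * rng.standard_normal((n, k))
V = alpha * rng.standard_normal((m, k))
u = alpha * np.ones((n, m))
v = alpha * np.ones((n, m))

# Stopping at a fixed iteration count acts as the regularizer: dominant
# low-rank directions and large sparse entries are fit early, while
# noise-level directions grow too slowly to be picked up.
lr = 5e-3
for _ in range(20000):
    R = U @ V.T + (u * u - v * v) - M    # residual of the fit
    gU, gV = R @ V, R.T @ U              # gradients of 0.5 * ||R||_F^2
    gu, gv = 2.0 * R * u, -2.0 * R * v
    U -= lr * gU
    V -= lr * gV
    u -= lr * gu
    v -= lr * gv

L_hat, S_hat = U @ V.T, u * u - v * v
print("rel. error in L:", np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
print("rel. error in S:", np.linalg.norm(S_hat - S_true) / np.linalg.norm(S_true))
# Rough diagnostic of the low-rank bias (threshold choice is heuristic).
print("effective rank of L_hat:", np.linalg.matrix_rank(L_hat, tol=1e-2))
```

The appeal of this style of method, and the point of the abstract, is visible in the sketch: the separation into low-rank and sparse parts emerges from the parameterization and the gradient dynamics alone, so there are no penalty weights to tune per dataset.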