{"title":"First Efficient Convergence for Streaming k-PCA: A Global, Gap-Free, and Near-Optimal Rate","authors":"Zeyuan Allen-Zhu, Yuanzhi Li","doi":"10.1109/FOCS.2017.51","DOIUrl":null,"url":null,"abstract":"We study streaming principal component analysis (PCA), that is to find, in O(dk) space, the top k eigenvectors of a d× d hidden matrix \\bold \\Sigma with online vectors drawn from covariance matrix \\bold \\Sigma.We provide global convergence for Ojas algorithm which is popularly used in practice but lacks theoretical understanding for k≈1. We also provide a modified variant \\mathsf{Oja}^{++} that runs even faster than Ojas. Our results match the information theoretic lower bound in terms of dependency on error, on eigengap, on rank k, and on dimension d, up to poly-log factors. In addition, our convergence rate can be made gap-free, that is proportional to the approximation error and independent of the eigengap.In contrast, for general rank k, before our work (1) it was open to design any algorithm with efficient global convergence rate; and (2) it was open to design any algorithm with (even local) gap-free convergence rate in O(dk) space.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"90","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FOCS.2017.51","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 90
Abstract
We study streaming principal component analysis (PCA), that is, finding, in $O(dk)$ space, the top $k$ eigenvectors of a $d \times d$ hidden covariance matrix $\mathbf{\Sigma}$ from online vectors drawn from a distribution with covariance $\mathbf{\Sigma}$. We provide global convergence for Oja's algorithm, which is popularly used in practice but lacks theoretical understanding for $k > 1$. We also provide a modified variant $\mathsf{Oja}^{++}$ that runs even faster than Oja's. Our results match the information-theoretic lower bound in terms of the dependency on error, on eigengap, on rank $k$, and on dimension $d$, up to poly-log factors. In addition, our convergence rate can be made gap-free, that is, proportional to the approximation error and independent of the eigengap. In contrast, for general rank $k$, before our work (1) it was open to design any algorithm with an efficient global convergence rate; and (2) it was open to design any algorithm with an (even local) gap-free convergence rate in $O(dk)$ space.
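To make the setting concrete, below is a minimal sketch of the classic Oja update for streaming k-PCA as described in the abstract: a $d \times k$ matrix is maintained in $O(dk)$ space and updated with each incoming sample. The fixed learning rate `eta`, the QR re-orthonormalization, and the synthetic data in the usage example are illustrative assumptions, not the paper's method; the paper's guarantees rely on specific step-size schedules, and $\mathsf{Oja}^{++}$ additionally initializes and grows the subspace differently.

```python
# Sketch of Oja's algorithm for streaming k-PCA (illustrative assumptions:
# fixed learning rate, plain QR re-orthonormalization).
import numpy as np

def oja_streaming_kpca(sample_stream, d, k, eta=0.01):
    """Return an orthonormal d x k matrix approximating the top-k eigenvectors."""
    rng = np.random.default_rng(0)
    # Random orthonormal initialization.
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for x in sample_stream:                  # x is a length-d sample vector
        x = x.reshape(-1, 1)
        Q = Q + eta * x @ (x.T @ Q)          # Oja update: Q <- Q + eta * x x^T Q
        Q, _ = np.linalg.qr(Q)               # re-orthonormalize, keeping O(dk) state
    return Q

# Usage example: recover the top-2 eigenvectors of a synthetic covariance matrix.
if __name__ == "__main__":
    d, k, T = 50, 2, 20000
    rng = np.random.default_rng(1)
    eigvals = np.array([1.0, 0.9] + [0.1] * (d - 2))   # clear eigengap after the top 2
    U, _ = np.linalg.qr(rng.standard_normal((d, d)))
    A = U @ np.diag(np.sqrt(eigvals))        # samples x = A g have covariance U diag(eigvals) U^T
    stream = (A @ rng.standard_normal(d) for _ in range(T))
    Q = oja_streaming_kpca(stream, d, k, eta=0.005)
    # Alignment with the true top-k eigenspace (close to k when converged).
    print(np.linalg.norm(U[:, :k].T @ Q, "fro") ** 2)
```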