Much Faster Algorithms for Matrix Scaling
Zeyuan Allen-Zhu, Yuanzhi Li, R. Oliveira, A. Wigderson
DOI: 10.1109/FOCS.2017.87
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)
Publication date: 2017-04-07
Citations: 105
Abstract
We develop several efficient algorithms for the classical Matrix Scaling problem, which is used in many diverse areas, from preconditioning linear systems to approximation of the permanent. On an input n × n matrix A, this problem asks to find diagonal (scaling) matrices X and Y (if they exist), so that XAY ε-approximates a doubly stochastic matrix, or more generally a matrix with prescribed row and column sums.

We address the general scaling problem as well as some important special cases. In particular, if A has m nonzero entries, and if there exist X and Y with polynomially large entries such that XAY is doubly stochastic, then we can solve the problem in total complexity \tilde{O}(m + n^{4/3}). This greatly improves on the best known previous results, which were either \tilde{O}(n^4) or O(m n^{1/2}/ε).

Our algorithms are based on tailor-made first- and second-order techniques, combined with other recent advances in continuous optimization, which may be of independent interest for solving similar problems.
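The paper's own first- and second-order algorithms are not reproduced here. For context on what the problem asks, the following is a minimal sketch of the classical Sinkhorn–Knopp baseline (the textbook alternating-normalization method, not the algorithm of this paper), assuming a nonnegative input matrix for which a scaling exists; the function name `sinkhorn_scale` and the stopping tolerance are illustrative choices.

```python
import numpy as np

def sinkhorn_scale(A, eps=1e-6, max_iters=10_000):
    """Alternately rescale rows and columns of a nonnegative matrix A so that
    diag(x) @ A @ diag(y) approaches a doubly stochastic matrix.
    Returns the diagonal scaling vectors x and y."""
    n = A.shape[0]
    x = np.ones(n)
    y = np.ones(n)
    for _ in range(max_iters):
        # Row step: after this, every row of diag(x) A diag(y) sums to 1.
        x = 1.0 / (A @ y)
        # Column step: after this, every column sums to 1 (rows may drift slightly).
        y = 1.0 / (A.T @ x)
        B = np.diag(x) @ A @ np.diag(y)
        # Stop once row and column sums are eps-close to the all-ones vector.
        err = np.linalg.norm(B.sum(axis=1) - 1) + np.linalg.norm(B.sum(axis=0) - 1)
        if err < eps:
            break
    return x, y

# Usage: scale a random strictly positive 4x4 matrix.
A = np.random.rand(4, 4) + 0.1
x, y = sinkhorn_scale(A)
B = np.diag(x) @ A @ np.diag(y)
print(B.sum(axis=1), B.sum(axis=0))  # both approximately all-ones
```

This baseline needs roughly O(1/ε) passes over the matrix in the worst case, which is the kind of ε-dependence the paper's \tilde{O}(m + n^{4/3}) bound avoids for well-conditioned (polynomially scalable) inputs.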