On the Linear Convergence of Extragradient Methods for Nonconvex–Nonconcave Minimax Problems
Saeed Hajizadeh, Haihao Lu, Benjamin Grimmer
INFORMS Journal on Optimization, published 2022-01-17. DOI: 10.1287/ijoo.2022.0004
Citations: 5
Abstract
Recently, minimax optimization has received renewed focus due to modern applications in machine learning, robust optimization, and reinforcement learning. The scale of these applications naturally leads to the use of first-order methods. However, the nonconvexities and nonconcavities present in these problems prevent the application of typical gradient descent/ascent, which is known to diverge even on bilinear problems. Recently, it was shown that the proximal point method (PPM) converges linearly for a family of nonconvex–nonconcave problems. In this paper, we study the convergence of a damped version of the extragradient method (EGM), which avoids potentially costly proximal computations, relying only on gradient evaluations. We show that the EGM converges linearly for smooth minimax optimization problems satisfying the same nonconvex–nonconcave condition needed by the PPM. Funding: H. Lu was supported by The University of Chicago Booth School of Business. Benjamin Grimmer was supported by the Johns Hopkins Applied Mathematics and Statistics Department.
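To illustrate the contrast the abstract draws between gradient descent/ascent and the extragradient method, here is a minimal sketch on the classic bilinear example f(x, y) = xy, whose saddle point is the origin. This is an illustrative toy, not the paper's damped EGM or its nonconvex–nonconcave setting; the step size and iteration count are arbitrary choices for the demonstration.

```python
import math

def egm_bilinear(x, y, eta=0.1, iters=1000):
    """Extragradient iterations on f(x, y) = x*y.

    Each iteration takes a lookahead gradient step, then re-evaluates the
    gradients at the lookahead point for the actual update. On this bilinear
    problem the iterates contract linearly toward the saddle at the origin.
    """
    for _ in range(iters):
        # Lookahead (extrapolation) step: plain descent/ascent.
        xh, yh = x - eta * y, y + eta * x
        # Actual update uses gradients evaluated at the lookahead point.
        x, y = x - eta * yh, y + eta * xh
    return x, y

def gda_bilinear(x, y, eta=0.1, iters=1000):
    """Plain simultaneous gradient descent/ascent on f(x, y) = x*y.

    Each step multiplies the distance to the saddle by sqrt(1 + eta^2) > 1,
    so the iterates spiral outward and diverge.
    """
    for _ in range(iters):
        x, y = x - eta * y, y + eta * x
    return x, y

x_eg, y_eg = egm_bilinear(1.0, 1.0)
x_gd, y_gd = gda_bilinear(1.0, 1.0)
print(math.hypot(x_eg, y_eg))  # small: EGM contracts toward the saddle
print(math.hypot(x_gd, y_gd))  # large: GDA spirals away from it
```

The only difference between the two loops is the lookahead evaluation, which is exactly what lets EGM handle rotational vector fields (like bilinear coupling) that defeat plain gradient descent/ascent, while still using only gradient evaluations rather than proximal subproblems.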