{"title":"使用近似乘法器的对偶次梯度方法","authors":"Víctor Valls, D. Leith","doi":"10.1109/ALLERTON.2015.7447119","DOIUrl":null,"url":null,"abstract":"We consider the subgradient method for the dual problem in convex optimisation with approximate multipliers, i.e., the subgradient used in the update of the dual variables is obtained using an approximation of the true Lagrange multipliers. This problem is interesting for optimisation problems where the exact Lagrange multipliers might not be readily accessible. For example, in distributed optimisation the exact Lagrange multipliers might not be available at the nodes due to communication delays or losses. We show that we can construct approximate primal solutions that can get arbitrarily close to the set of optima as step size α is reduced. Applications of the analysis include unsynchronised subgradient updates in the dual variable update and unsynchronised max-weight scheduling.","PeriodicalId":112948,"journal":{"name":"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)","volume":"167 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Dual subgradient methods using approximate multipliers\",\"authors\":\"Víctor Valls, D. Leith\",\"doi\":\"10.1109/ALLERTON.2015.7447119\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We consider the subgradient method for the dual problem in convex optimisation with approximate multipliers, i.e., the subgradient used in the update of the dual variables is obtained using an approximation of the true Lagrange multipliers. This problem is interesting for optimisation problems where the exact Lagrange multipliers might not be readily accessible. For example, in distributed optimisation the exact Lagrange multipliers might not be available at the nodes due to communication delays or losses. We show that we can construct approximate primal solutions that can get arbitrarily close to the set of optima as step size α is reduced. Applications of the analysis include unsynchronised subgradient updates in the dual variable update and unsynchronised max-weight scheduling.\",\"PeriodicalId\":112948,\"journal\":{\"name\":\"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"volume\":\"167 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ALLERTON.2015.7447119\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2015.7447119","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Dual subgradient methods using approximate multipliers
We consider the subgradient method for the dual problem in convex optimisation with approximate multipliers, i.e., the subgradient used to update the dual variables is obtained from an approximation of the true Lagrange multipliers. This setting arises in optimisation problems where the exact Lagrange multipliers are not readily accessible; for example, in distributed optimisation the exact multipliers might not be available at the nodes because of communication delays or losses. We show that we can construct approximate primal solutions that come arbitrarily close to the set of optima as the step size α is reduced. Applications of the analysis include unsynchronised updates of the dual variables and unsynchronised max-weight scheduling.
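To make the setting concrete, the following is a minimal numerical sketch, not the algorithm analysed in the paper: a dual subgradient iteration in which the primal minimisation uses a noisy copy of the dual variable in place of the true multiplier, and a running average of the primal iterates serves as the approximate primal solution. The problem instance, noise model, and step size are illustrative assumptions.

```python
import numpy as np

# Sketch of a dual subgradient iteration with an approximate multiplier.
# Problem (illustrative):  minimise ||x - c||^2  subject to  a.x <= b,  x in [0, 1]^n
rng = np.random.default_rng(0)
n = 5
c = rng.uniform(0.5, 1.5, n)   # unconstrained optimum, may violate a.x <= b
a = np.ones(n)
b = 2.0

def primal_min(lam):
    """argmin_x ||x - c||^2 + lam * (a.x - b) over the box [0, 1]^n (closed form)."""
    return np.clip(c - 0.5 * lam * a, 0.0, 1.0)

alpha = 0.01        # constant step size; a smaller alpha tightens the primal accuracy
lam = 0.0           # true dual variable
x_avg = np.zeros(n)

for k in range(1, 5001):
    # Approximate multiplier: a noisy copy of lam, standing in for the delayed
    # or lossy multipliers that arise in distributed settings.
    lam_approx = max(lam + rng.normal(scale=0.05), 0.0)

    x = primal_min(lam_approx)                   # primal step uses the approximation
    lam = max(lam + alpha * (a @ x - b), 0.0)    # dual update with the resulting subgradient

    x_avg += (x - x_avg) / k                     # running average = approximate primal solution

print("constraint value a.x - b at averaged point:", a @ x_avg - b)
print("objective at averaged point:", np.sum((x_avg - c) ** 2))
```

In this toy run, shrinking α reduces both the constraint violation and the objective gap of the averaged iterate, which is the qualitative behaviour the abstract describes for the constructed approximate primal solutions.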