{"title":"量化下分布次梯度方法的收敛性","authors":"T. Doan, S. T. Maguluri, J. Romberg","doi":"10.1109/ALLERTON.2018.8636036","DOIUrl":null,"url":null,"abstract":"Motivated by various applications in wireless sensor networks and edge computing, we study distributed optimization problems over a network of nodes, where the goal is to optimize a global objective function composed of a sum of local functions. In these problems, due to the large scale of the network, both computation and communication must be implemented locally resulting in the need for distributed algorithms. In addition, the algorithms should be efficient enough to tolerate the limitation of computing resources, memory capacity, and communication bandwidth shared between the nodes. To cope with such limitations, we consider in this paper distributed subgradient methods under quantization. Our main contribution is to provide a sufficient condition for the sequence of quantization levels, which guarantees the convergence of distributed subgradient methods. Our results, while complementing existing results, suggest that distributed subgradient methods achieve desired convergence properties even under quantization, as long as the quantization levels become finer and finer with a proper rate. We also provide numerical simulations to compare the convergence properties of such methods with and without quantization for solving the well-known least square problems over networks.","PeriodicalId":299280,"journal":{"name":"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"On the Convergence of Distributed Subgradient Methods under Quantization\",\"authors\":\"T. Doan, S. T. Maguluri, J. Romberg\",\"doi\":\"10.1109/ALLERTON.2018.8636036\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Motivated by various applications in wireless sensor networks and edge computing, we study distributed optimization problems over a network of nodes, where the goal is to optimize a global objective function composed of a sum of local functions. In these problems, due to the large scale of the network, both computation and communication must be implemented locally resulting in the need for distributed algorithms. In addition, the algorithms should be efficient enough to tolerate the limitation of computing resources, memory capacity, and communication bandwidth shared between the nodes. To cope with such limitations, we consider in this paper distributed subgradient methods under quantization. Our main contribution is to provide a sufficient condition for the sequence of quantization levels, which guarantees the convergence of distributed subgradient methods. Our results, while complementing existing results, suggest that distributed subgradient methods achieve desired convergence properties even under quantization, as long as the quantization levels become finer and finer with a proper rate. 
We also provide numerical simulations to compare the convergence properties of such methods with and without quantization for solving the well-known least square problems over networks.\",\"PeriodicalId\":299280,\"journal\":{\"name\":\"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ALLERTON.2018.8636036\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2018.8636036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On the Convergence of Distributed Subgradient Methods under Quantization
Motivated by applications in wireless sensor networks and edge computing, we study distributed optimization problems over a network of nodes, where the goal is to optimize a global objective function that is the sum of local functions. In these problems, the large scale of the network requires that both computation and communication be performed locally, which calls for distributed algorithms. In addition, the algorithms must be efficient enough to tolerate the limited computing resources, memory capacity, and communication bandwidth shared among the nodes. To cope with these limitations, we consider in this paper distributed subgradient methods under quantization. Our main contribution is a sufficient condition on the sequence of quantization levels that guarantees the convergence of distributed subgradient methods. Our results, which complement existing results, show that distributed subgradient methods retain their desired convergence properties even under quantization, as long as the quantization levels are refined at a proper rate. We also provide numerical simulations comparing the convergence of these methods with and without quantization when solving well-known least-squares problems over networks.
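The abstract does not spell out the update rule, but a typical quantized distributed subgradient iteration for the networked least-squares setting it simulates might look like the sketch below. Everything here is an illustrative assumption rather than the paper's exact algorithm: the uniform quantizer, the doubly stochastic mixing matrix W, the step-size schedule alpha_k = 1/sqrt(k), and the quantization schedule Delta_k = 1/k (chosen so the levels "become finer and finer," as the abstract requires).

```python
import numpy as np

def quantize(x, level):
    # Uniform quantizer: round each entry to the nearest multiple of `level`.
    return level * np.round(x / level)

def quantized_distributed_subgradient(A_list, b_list, W, iters=2000, seed=0):
    """Illustrative sketch of a quantized distributed subgradient method for
    the network least-squares problem min_x sum_i 0.5 * ||A_i x - b_i||^2.

    Each node i averages quantized copies of its neighbors' iterates (via the
    doubly stochastic mixing matrix W) and then takes a local gradient step.
    Both the step size alpha_k and the quantization level delta_k shrink with
    k, mimicking the refinement of quantization levels discussed in the paper.
    """
    n = len(A_list)                       # number of nodes
    d = A_list[0].shape[1]                # dimension of the decision variable
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))       # one local iterate per node
    for k in range(1, iters + 1):
        alpha = 1.0 / np.sqrt(k)          # diminishing step size (assumed)
        delta = 1.0 / k                   # quantization level, refined over time (assumed)
        xq = quantize(x, delta)           # nodes exchange quantized iterates
        mixed = W @ xq                    # consensus step on the quantized values
        grads = np.stack([A.T @ (A @ xi - b)   # local least-squares gradients
                          for A, xi, b in zip(A_list, x, b_list)])
        x = mixed - alpha * grads
    return x.mean(axis=0)

# Example: 3 nodes on a complete graph with uniform mixing weights.
n, d = 3, 2
rng = np.random.default_rng(1)
A_list = [rng.standard_normal((5, d)) for _ in range(n)]
b_list = [rng.standard_normal(5) for _ in range(n)]
W = np.full((n, n), 1.0 / n)   # doubly stochastic by construction
x_hat = quantized_distributed_subgradient(A_list, b_list, W)
```

With the diminishing schedules above, the quantization error injected at each step shrinks alongside the step size, which is the intuition behind the paper's sufficient condition; the specific rates used here are placeholders, not the condition the authors derive.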