I. Necoara, "A random coordinate descent method for large-scale resource allocation problems," 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), Dec. 2012. DOI: 10.1109/CDC.2012.6426370
A random coordinate descent method for large-scale resource allocation problems
In this paper we develop a randomized (block) coordinate descent method for solving optimization problems with a single linear equality constraint, which appear, for example, in resource allocation over networks. We show that for strongly convex objective functions the new algorithm has an expected linear convergence rate that depends on the second smallest eigenvalue λ2(Q) of a matrix Q defined in terms of the sampling probabilities and the number of blocks. However, the computational complexity per iteration of our method is much lower than that of a method based on full gradient information. We also study how to choose the probabilities so that the randomized algorithm converges as fast as possible, which leads to solving a sparse SDP. Finally, we present numerical results showing the efficiency of our method on huge sparse problems.
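The constraint-preserving idea behind such a method can be illustrated with a minimal sketch: at each iteration a random pair of coordinates (i, j) is chosen and updated along the direction e_i − e_j, which leaves the sum of the components (and hence the single linear equality constraint) unchanged. The sketch below is an assumed illustration, not the paper's exact algorithm: it uses a separable quadratic test objective f(x) = ½·Σ aᵢxᵢ², uniform pair sampling, and the closed-form step (∇ⱼf − ∇ᵢf)/(aᵢ + aⱼ), which is exact line minimization for this quadratic.

```python
import numpy as np

def random_pair_coordinate_descent(a, b, iters=5000, seed=0):
    """Sketch of a randomized 2-coordinate descent step for
        min 0.5 * sum(a_i * x_i**2)  s.t.  sum(x) = b,  a_i > 0.
    (Assumed test problem; coordinate Lipschitz constants are L_i = a_i.)"""
    rng = np.random.default_rng(seed)
    n = len(a)
    x = np.full(n, b / n)                # feasible start: sum(x) = b
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)  # uniform pair sampling
        gi, gj = a[i] * x[i], a[j] * x[j]            # partial gradients
        step = (gj - gi) / (a[i] + a[j])             # closed-form 2-coordinate step
        x[i] += step                                 # move along e_i - e_j, so
        x[j] -= step                                 # sum(x) stays equal to b
    return x

# For this quadratic the KKT conditions give x_i* = lam / a_i,
# with lam = b / sum(1 / a_i).
a = np.array([1.0, 2.0, 4.0])
x = random_pair_coordinate_descent(a, b=7.0)
lam = 7.0 / np.sum(1.0 / a)
```

Each iteration touches only two coordinates, so its cost is O(1), in contrast to a full-gradient method whose per-iteration cost grows with the problem dimension — this is the per-iteration complexity advantage the abstract refers to.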