{"title":"SVR-Primal Dual Method of Multipliers (PDMM) for Large-Scale Problems","authors":"Lijanshu Sinha, K. Rajawat, C. Kumar","doi":"10.1109/NCC48643.2020.9056014","DOIUrl":null,"url":null,"abstract":"With the advent of big data scenarios, centralized processing is no more feasible and is on the verge of getting obsolete. With this shift in paradigm, distributed processing is becoming more relevant, i.e., instead of burdening the central processor, sharing the load between the multiple processing units. The decentralization capability of the ADMM algorithm made it popular since the recent past. Another recent algorithm PDMM paved its way for distributed processing, which is still in its development state. Both the algorithms work well with the medium-scale problems, but dealing with large scale problems is still a challenging task. This work is an effort towards handling large scale data with reduced computation load. To this end, the proposed framework tries to combine the advantages of the SVRG and PDMM algorithms. The algorithm is proved to converge with rate $\\mathcal{O}(1/K$ for strongly convex loss functions, which is faster than the existing algorithms. Experimental evaluations on the real data prove the efficacy of the proposed algorithm over the state of the art methodologies.","PeriodicalId":183772,"journal":{"name":"2020 National Conference on Communications (NCC)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 National Conference on Communications (NCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCC48643.2020.9056014","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
With the advent of big data scenarios, centralized processing is no longer feasible and is on the verge of obsolescence. With this paradigm shift, distributed processing is becoming more relevant: instead of burdening a central processor, the load is shared among multiple processing units. The decentralized nature of the ADMM algorithm has made it popular in recent years. PDMM, a more recent algorithm that is still under development, has also paved the way for distributed processing. Both algorithms work well on medium-scale problems, but large-scale problems remain challenging. This work is an effort towards handling large-scale data with a reduced computational load. To this end, the proposed framework combines the advantages of the SVRG and PDMM algorithms. The algorithm is proven to converge at a rate of $\mathcal{O}(1/K)$ for strongly convex loss functions, which is faster than existing algorithms. Experimental evaluations on real data demonstrate the efficacy of the proposed algorithm over state-of-the-art methods.
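The abstract does not spell out the combined update, but the SVRG building block it refers to is standard: keep a snapshot of the iterate, compute a full gradient there, and correct each stochastic gradient with the snapshot's gradient so the variance of the step shrinks as the iterate converges. Below is a minimal, hypothetical sketch of that variance-reduced step on an illustrative local least-squares loss (the problem data `A`, `b`, the step size `eta`, and the epoch counts are all assumptions for illustration); it is not the paper's distributed SVR-PDMM update, which additionally exchanges primal-dual variables between processing units.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local least-squares problem: f(w) = (1/2n) * ||A w - b||^2,
# standing in for one node's strongly convex local loss.
n, d = 200, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.1 * rng.normal(size=n)

def grad_i(w, i):
    """Gradient of the i-th sample loss (1/2)(a_i^T w - b_i)^2."""
    return A[i] * (A[i] @ w - b[i])

def full_grad(w):
    """Exact gradient of the average loss, recomputed at each snapshot."""
    return A.T @ (A @ w - b) / n

def svrg(w0, eta=0.01, epochs=20, inner=None):
    """SVRG: variance-reduced stochastic gradient steps around a snapshot."""
    inner = inner or n
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()
        mu = full_grad(w_snap)  # one full-gradient pass per epoch
        for _ in range(inner):
            i = rng.integers(n)
            # Unbiased gradient estimate whose variance vanishes as w -> w*,
            # which is what yields the fast rate for strongly convex losses.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= eta * g
    return w

w_hat = svrg(np.zeros(d))
print("estimation error:", np.linalg.norm(w_hat - w_true))
```

In a PDMM-style scheme, a step of this kind would replace the exact minimization in each node's local subproblem, so every processing unit pays one full-gradient pass per epoch plus cheap per-sample corrections rather than a full solve per iteration; that is the computational saving the abstract alludes to.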