SVR-Primal Dual Method of Multipliers (PDMM) for Large-Scale Problems

Lijanshu Sinha, K. Rajawat, C. Kumar
{"title":"SVR-Primal Dual Method of Multipliers (PDMM) for Large-Scale Problems","authors":"Lijanshu Sinha, K. Rajawat, C. Kumar","doi":"10.1109/NCC48643.2020.9056014","DOIUrl":null,"url":null,"abstract":"With the advent of big data scenarios, centralized processing is no more feasible and is on the verge of getting obsolete. With this shift in paradigm, distributed processing is becoming more relevant, i.e., instead of burdening the central processor, sharing the load between the multiple processing units. The decentralization capability of the ADMM algorithm made it popular since the recent past. Another recent algorithm PDMM paved its way for distributed processing, which is still in its development state. Both the algorithms work well with the medium-scale problems, but dealing with large scale problems is still a challenging task. This work is an effort towards handling large scale data with reduced computation load. To this end, the proposed framework tries to combine the advantages of the SVRG and PDMM algorithms. The algorithm is proved to converge with rate $\\mathcal{O}(1/K$ for strongly convex loss functions, which is faster than the existing algorithms. Experimental evaluations on the real data prove the efficacy of the proposed algorithm over the state of the art methodologies.","PeriodicalId":183772,"journal":{"name":"2020 National Conference on Communications (NCC)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 National Conference on Communications (NCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCC48643.2020.9056014","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

With the advent of big data, centralized processing is no longer feasible and is on the verge of obsolescence. With this paradigm shift, distributed processing has become more relevant: instead of burdening a single central processor, the load is shared among multiple processing units. The decentralization capability of the ADMM algorithm has made it popular in recent years. PDMM, a more recent algorithm that is still under active development, has also paved the way for distributed processing. Both algorithms work well on medium-scale problems, but handling large-scale problems remains challenging. This work is an effort toward handling large-scale data with reduced computational load. To this end, the proposed framework combines the advantages of the SVRG and PDMM algorithms. The algorithm is proved to converge at rate $\mathcal{O}(1/K)$ for strongly convex loss functions, which is faster than existing algorithms. Experimental evaluations on real data demonstrate the efficacy of the proposed algorithm over state-of-the-art methods.
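The proposed framework builds on SVRG's variance-reduced gradient: once per epoch a full gradient is computed at a snapshot point, and each inner stochastic step corrects the sampled gradient using that snapshot. Below is a minimal Python sketch of this SVRG component alone, on an assumed least-squares loss; the problem data, step size, and epoch length are illustrative assumptions, and the sketch omits the PDMM splitting across nodes that the paper adds on top.

import numpy as np

# Illustrative strongly convex problem: least squares f(w) = (1/2n)||Aw - b||^2.
# (Assumed for this sketch; not the paper's experimental setup.)
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true

def grad_i(w, i):
    # Gradient of the i-th component loss (1/2)(a_i^T w - b_i)^2.
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    # Full gradient (1/n) A^T (A w - b), computed once per epoch.
    return A.T @ (A @ w - b) / n

def svrg(w0, step=0.02, epochs=50, inner=2 * n):
    # Plain SVRG: anchor at a snapshot, then take variance-reduced
    # stochastic steps g = grad_i(w) - grad_i(snapshot) + full_grad(snapshot).
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()
        mu = full_grad(w_snap)
        for _ in range(inner):
            i = rng.integers(n)
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= step * g
    return w

w_hat = svrg(np.zeros(d))
print(np.linalg.norm(w_hat - w_true))  # close to 0 on this well-posed problem

The variance-reduced estimate is unbiased, and its variance vanishes as the iterates approach the snapshot, which is what permits a constant step size and, for strongly convex losses, the fast convergence the paper leverages.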