The Fast Inertial ADMM optimization framework for distributed machine learning

IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS
Guozheng Wang, Dongxia Wang, Chengfan Li, Yongmei Lei
{"title":"分布式机器学习的快速惯性 ADMM 优化框架","authors":"Guozheng Wang ,&nbsp;Dongxia Wang ,&nbsp;Chengfan Li ,&nbsp;Yongmei Lei","doi":"10.1016/j.future.2024.107575","DOIUrl":null,"url":null,"abstract":"<div><div>The ADMM (Alternating Direction Method of Multipliers) optimization framework is known for its property of decomposition and assembly, which effectively bridges distributed computing and optimization algorithms, making it well-suited for distributed machine learning in the context of big data. However, it suffers from slow convergence speed and lacks the ability to coordinate worker computations, resulting in inconsistent speeds in solving subproblems in distributed systems and mutual waiting among workers. In this paper, we propose a novel optimization framework to address these challenges in support vector regression (SVR) and probit regression training through the FIADMM (<strong>F</strong>ast <strong>I</strong>nertial ADMM). The key concept of the FIADMM lies in the introduction of inertia acceleration and an adaptive subproblem iteration mechanism based on the ADMM, aimed at accelerating convergence speed and reducing the variance in solving speeds among workers. Further, we prove that FIADMM has a fast linear convergence rate <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mn>1</mn><mo>/</mo><mi>k</mi><mo>)</mo></mrow></mrow></math></span>. Experimental results on six benchmark datasets demonstrate that the proposed FIADMM significantly enhances convergence speed and computational efficiency compared to multiple baseline algorithms and related efforts.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":null,"pages":null},"PeriodicalIF":6.2000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Fast Inertial ADMM optimization framework for distributed machine learning\",\"authors\":\"Guozheng Wang ,&nbsp;Dongxia Wang ,&nbsp;Chengfan Li ,&nbsp;Yongmei Lei\",\"doi\":\"10.1016/j.future.2024.107575\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The ADMM (Alternating Direction Method of Multipliers) optimization framework is known for its property of decomposition and assembly, which effectively bridges distributed computing and optimization algorithms, making it well-suited for distributed machine learning in the context of big data. However, it suffers from slow convergence speed and lacks the ability to coordinate worker computations, resulting in inconsistent speeds in solving subproblems in distributed systems and mutual waiting among workers. In this paper, we propose a novel optimization framework to address these challenges in support vector regression (SVR) and probit regression training through the FIADMM (<strong>F</strong>ast <strong>I</strong>nertial ADMM). The key concept of the FIADMM lies in the introduction of inertia acceleration and an adaptive subproblem iteration mechanism based on the ADMM, aimed at accelerating convergence speed and reducing the variance in solving speeds among workers. Further, we prove that FIADMM has a fast linear convergence rate <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mn>1</mn><mo>/</mo><mi>k</mi><mo>)</mo></mrow></mrow></math></span>. 
Experimental results on six benchmark datasets demonstrate that the proposed FIADMM significantly enhances convergence speed and computational efficiency compared to multiple baseline algorithms and related efforts.</div></div>\",\"PeriodicalId\":55132,\"journal\":{\"name\":\"Future Generation Computer Systems-The International Journal of Escience\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.2000,\"publicationDate\":\"2024-10-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Future Generation Computer Systems-The International Journal of Escience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167739X24005399\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X24005399","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

The ADMM (Alternating Direction Method of Multipliers) optimization framework is known for its decompose-and-assemble property, which effectively bridges distributed computing and optimization algorithms, making it well suited to distributed machine learning in the context of big data. However, it suffers from slow convergence and lacks a mechanism for coordinating worker computations, which leads to inconsistent subproblem-solving speeds in distributed systems and to workers waiting on one another. In this paper, we propose a novel optimization framework, FIADMM (Fast Inertial ADMM), to address these challenges in support vector regression (SVR) and probit regression training. The key idea of FIADMM is to introduce inertial acceleration and an adaptive subproblem-iteration mechanism on top of ADMM, with the aim of accelerating convergence and reducing the variance in solving speeds among workers. Further, we prove that FIADMM attains a fast convergence rate of O(1/k). Experimental results on six benchmark datasets demonstrate that FIADMM significantly improves convergence speed and computational efficiency over multiple baseline algorithms and related methods.
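
As a rough illustration of the mechanism the abstract describes, the sketch below implements consensus ADMM with a simple inertial extrapolation step in NumPy. Everything here is an assumption rather than the paper's actual FIADMM: the loss is least squares (not SVR or probit) so each worker subproblem has a closed form, the inertia weight `alpha` is fixed rather than adaptive, the adaptive subproblem-iteration mechanism is omitted, and the function name `inertial_consensus_admm` and its parameters are hypothetical.

```python
import numpy as np

def inertial_consensus_admm(A_parts, b_parts, rho=1.0, alpha=0.3,
                            lam=0.1, iters=200):
    """Consensus ADMM with an inertial extrapolation step (illustrative only).

    Solves min_x sum_i 0.5*||A_i x - b_i||^2 + 0.5*lam*||x||^2 by splitting
    the data across len(A_parts) simulated workers.
    """
    m = len(A_parts)
    d = A_parts[0].shape[1]
    x = np.zeros((m, d))             # per-worker primal variables
    z = np.zeros(d)                  # consensus (global) variable
    u = np.zeros((m, d))             # scaled dual variables
    z_old, u_old = z.copy(), u.copy()
    # Pre-factor each worker's quadratic subproblem once.
    Ls = [np.linalg.cholesky(A.T @ A + rho * np.eye(d)) for A in A_parts]
    for k in range(iters):
        # Inertial step: extrapolate the consensus and dual variables
        # along their previous direction of travel.
        z_hat = z + alpha * (z - z_old)
        u_hat = u + alpha * (u - u_old)
        z_old, u_old = z.copy(), u.copy()
        # Worker subproblems (closed form here; the paper's SVR/probit
        # losses would require an inner iterative solver instead).
        for i in range(m):
            rhs = A_parts[i].T @ b_parts[i] + rho * (z_hat - u_hat[i])
            y = np.linalg.solve(Ls[i], rhs)     # forward solve with L
            x[i] = np.linalg.solve(Ls[i].T, y)  # back substitution with L^T
        # Consensus update (absorbs the ridge term lam).
        z = rho * (x + u_hat).sum(axis=0) / (lam + m * rho)
        # Dual update from the extrapolated duals.
        u = u_hat + x - z
    return z

# Example: four simulated workers on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((400, 8))
x_true = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(400)
parts = np.array_split(np.arange(400), 4)
z = inertial_consensus_admm([A[p] for p in parts], [b[p] for p in parts])
print(np.linalg.norm(z - x_true))  # should be small (up to ridge shrinkage)
```

Extrapolating the consensus and dual variables (rather than the worker primals) is the common pattern in accelerated ADMM variants: each worker's subproblem then starts from a point pushed along the algorithm's previous trajectory, which is the intuition behind the "inertia acceleration" named in the abstract.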
Source journal: Future Generation Computer Systems-The International Journal of Escience
CiteScore: 19.90
Self-citation rate: 2.70%
Annual publications: 376
Average review time: 10.6 months
Journal description: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.