{"title":"ARGO: Overcoming hardware dependence in distributed learning","authors":"Karim Boubouh , Amine Boussetta , Rachid Guerraoui , Alexandre Maurer","doi":"10.1016/j.future.2025.107778","DOIUrl":null,"url":null,"abstract":"<div><div>Mobile devices offer a valuable resource for distributed learning alongside traditional computers, encouraging energy efficiency and privacy through local computations. However, the hardware limitations of these devices makes it impossible to use classical SGD for industry-grade machine learning models (with a very large number of parameters). Moreover, they are intermittently available and susceptible to failures. To address these challenges, we introduce <span>ARGO</span>, an algorithm that combines adaptive workload schemes with Byzantine resilience mechanisms, as well as dynamic device participation. Our theoretical analysis demonstrates linear convergence for strongly convex losses and sub-linear convergence for non-convex losses, without assuming specific dataset partitioning (for potential data heterogeneity). Our formal analysis highlights the interplay between convergence properties, hardware capabilities, Byzantine impact, and standard factors such as mini-batch size and learning rate. Through extensive evaluations, we show that <span>ARGO</span> outperforms standard SGD in terms of convergence speed and accuracy, and most importantly, thrives when classical SGD is not possible due to hardware limitations.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"168 ","pages":"Article 107778"},"PeriodicalIF":6.2000,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X25000731","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
Mobile devices offer a valuable resource for distributed learning alongside traditional computers, encouraging energy efficiency and privacy through local computations. However, the hardware limitations of these devices make it impossible to use classical SGD for industry-grade machine learning models (with a very large number of parameters). Moreover, such devices are intermittently available and susceptible to failures. To address these challenges, we introduce ARGO, an algorithm that combines adaptive workload schemes with Byzantine resilience mechanisms, as well as dynamic device participation. Our theoretical analysis demonstrates linear convergence for strongly convex losses and sub-linear convergence for non-convex losses, without assuming a specific dataset partitioning (allowing for data heterogeneity). Our formal analysis highlights the interplay between convergence properties, hardware capabilities, Byzantine impact, and standard factors such as mini-batch size and learning rate. Through extensive evaluations, we show that ARGO outperforms standard SGD in terms of convergence speed and accuracy, and, most importantly, thrives when classical SGD is not possible due to hardware limitations.
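To make the abstract's ingredients concrete, the sketch below shows one way an adaptive-workload, Byzantine-robust SGD round could be wired together: each device computes a gradient on a mini-batch sized to its hardware capability, only a random subset of devices participates in each round, and the server aggregates with a coordinate-wise median. This is a minimal illustration under our own assumptions, not the authors' ARGO algorithm; the capability model, the 0.8 availability probability, the median aggregator, and all names (`Device`, `argo_round`) are hypothetical choices for exposition.

```python
# Illustrative sketch only (NOT the paper's ARGO implementation).
# Shows the general shape of one round of adaptive-workload, Byzantine-robust SGD.
import numpy as np

rng = np.random.default_rng(0)

class Device:
    """A participating device with its own data shard and a capability score."""
    def __init__(self, data, labels, capability, byzantine=False):
        self.data = data              # local features, shape (n, d)
        self.labels = labels          # local targets, shape (n,)
        self.capability = capability  # abstract hardware score in (0, 1]
        self.byzantine = byzantine    # if True, sends an arbitrary gradient

    def local_gradient(self, w, max_batch):
        # Adaptive workload: batch size scales with hardware capability,
        # so weak devices are never asked to process a full-size batch.
        batch = max(1, int(self.capability * max_batch))
        idx = rng.choice(len(self.data), size=min(batch, len(self.data)), replace=False)
        X, y = self.data[idx], self.labels[idx]
        if self.byzantine:
            return rng.normal(scale=10.0, size=w.shape)  # arbitrary (faulty) update
        return X.T @ (X @ w - y) / len(idx)              # least-squares mini-batch gradient

def argo_round(w, devices, lr=0.05, max_batch=64):
    """One round: poll currently available devices, aggregate robustly, take an SGD step."""
    available = [d for d in devices if rng.random() < 0.8]  # dynamic participation
    if not available:
        return w
    grads = np.stack([d.local_gradient(w, max_batch) for d in available])
    robust_grad = np.median(grads, axis=0)  # coordinate-wise median tolerates Byzantine devices
    return w - lr * robust_grad

# Toy usage: linear regression spread over 10 devices, 2 of them Byzantine.
d = 5
w_true = rng.normal(size=d)
devices = []
for i in range(10):
    X = rng.normal(size=(200, d))
    y = X @ w_true + 0.01 * rng.normal(size=200)
    devices.append(Device(X, y, capability=rng.uniform(0.2, 1.0), byzantine=(i < 2)))

w = np.zeros(d)
for _ in range(300):
    w = argo_round(w, devices)
print("distance to target model:", np.linalg.norm(w - w_true))
```

In this toy setup the median aggregator is just one example of a Byzantine-resilient rule; the paper's actual mechanism, workload schedule, and convergence constants are specified in the full text.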
About the journal:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.