Joint Coreset Construction and Quantization for Distributed Machine Learning

Hanlin Lu, Changchang Liu, Shiqiang Wang, T. He, V. Narayanan, Kevin S. Chan, Stephen Pasteris
{"title":"分布式机器学习的联合核心集构建与量化","authors":"Hanlin Lu, Changchang Liu, Shiqiang Wang, T. He, V. Narayanan, Kevin S. Chan, Stephen Pasteris","doi":"10.48550/arXiv.2204.06652","DOIUrl":null,"url":null,"abstract":"Coresets are small, weighted summaries of larger datasets, aiming at providing provable error bounds for machine learning (ML) tasks while significantly reducing the communication and computation costs. To achieve a better trade-off between ML error bounds and costs, we propose the first framework to incorporate quantization techniques into the process of coreset construction. Specifically, we theoretically analyze the ML error bounds caused by a combination of coreset construction and quantization. Based on that, we formulate an optimization problem to minimize the ML error under a fixed budget of communication cost. To improve the scalability for large datasets, we identify two proxies of the original objective function, for which efficient algorithms are developed. For the case of data on multiple nodes, we further design a novel algorithm to allocate the communication budget to the nodes while minimizing the overall ML error. Through extensive experiments on multiple real-world datasets, we demonstrate the effectiveness and efficiency of our proposed algorithms for a variety of ML tasks. In particular, our algorithms have achieved more than 90% data reduction with less than 10% degradation in ML performance in most cases.","PeriodicalId":231191,"journal":{"name":"2020 IFIP Networking Conference (Networking)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Joint Coreset Construction and Quantization for Distributed Machine Learning\",\"authors\":\"Hanlin Lu, Changchang Liu, Shiqiang Wang, T. He, V. Narayanan, Kevin S. Chan, Stephen Pasteris\",\"doi\":\"10.48550/arXiv.2204.06652\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Coresets are small, weighted summaries of larger datasets, aiming at providing provable error bounds for machine learning (ML) tasks while significantly reducing the communication and computation costs. To achieve a better trade-off between ML error bounds and costs, we propose the first framework to incorporate quantization techniques into the process of coreset construction. Specifically, we theoretically analyze the ML error bounds caused by a combination of coreset construction and quantization. Based on that, we formulate an optimization problem to minimize the ML error under a fixed budget of communication cost. To improve the scalability for large datasets, we identify two proxies of the original objective function, for which efficient algorithms are developed. For the case of data on multiple nodes, we further design a novel algorithm to allocate the communication budget to the nodes while minimizing the overall ML error. Through extensive experiments on multiple real-world datasets, we demonstrate the effectiveness and efficiency of our proposed algorithms for a variety of ML tasks. 
In particular, our algorithms have achieved more than 90% data reduction with less than 10% degradation in ML performance in most cases.\",\"PeriodicalId\":231191,\"journal\":{\"name\":\"2020 IFIP Networking Conference (Networking)\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IFIP Networking Conference (Networking)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.48550/arXiv.2204.06652\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IFIP Networking Conference (Networking)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2204.06652","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Coresets are small, weighted summaries of larger datasets, aiming at providing provable error bounds for machine learning (ML) tasks while significantly reducing the communication and computation costs. To achieve a better trade-off between ML error bounds and costs, we propose the first framework to incorporate quantization techniques into the process of coreset construction. Specifically, we theoretically analyze the ML error bounds caused by a combination of coreset construction and quantization. Based on that, we formulate an optimization problem to minimize the ML error under a fixed budget of communication cost. To improve the scalability for large datasets, we identify two proxies of the original objective function, for which efficient algorithms are developed. For the case of data on multiple nodes, we further design a novel algorithm to allocate the communication budget to the nodes while minimizing the overall ML error. Through extensive experiments on multiple real-world datasets, we demonstrate the effectiveness and efficiency of our proposed algorithms for a variety of ML tasks. In particular, our algorithms have achieved more than 90% data reduction with less than 10% degradation in ML performance in most cases.
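To make the two ingredients the abstract combines concrete, the sketch below shows a generic importance-sampling coreset for k-means followed by uniform per-coordinate quantization of the coreset points before transmission. This is only an illustration under assumed choices, not the paper's framework or error-optimal allocation: the functions `kmeans_coreset` and `quantize`, the sensitivity proxy, and all parameter values are hypothetical.

```python
# Minimal illustrative sketch (NOT the paper's algorithm): build a sensitivity-based
# weighted coreset via importance sampling, then uniformly quantize each coreset
# point to a fixed bit budget, trading communication cost against ML error.
import numpy as np

def kmeans_coreset(X, k, m, rng):
    """Sample an m-point weighted coreset of X using distances to k rough centers."""
    centers = X[rng.choice(len(X), size=k, replace=False)]             # crude initial centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)   # squared dist to nearest center
    sens = d2 / d2.sum() + 1.0 / len(X)                                # simple sensitivity proxy (assumption)
    prob = sens / sens.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=prob)
    weights = 1.0 / (m * prob[idx])                                    # unbiased importance weights
    return X[idx], weights

def quantize(points, bits):
    """Uniform per-dimension quantization to the given number of bits."""
    lo, hi = points.min(0), points.max(0)
    levels = 2 ** bits - 1
    scale = np.where(hi > lo, (hi - lo) / levels, 1.0)
    q = np.round((points - lo) / scale)
    return q * scale + lo                                              # de-quantized representatives

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
coreset, w = kmeans_coreset(X, k=10, m=200, rng=rng)
coreset_q = quantize(coreset, bits=6)   # fewer bits -> lower communication cost, larger ML error
print(coreset_q.shape, w.shape)
```

In this toy setup, `m` and `bits` jointly determine the communication cost of sending the (quantized) coreset to a server; the paper's contribution is to analyze the resulting ML error bound and optimize this trade-off, including splitting the budget across multiple nodes.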