DQ-DPS Data Partition Strategy Based on Distributed Machine Learning

Jiaming Wu, Zunhao Liu, Bowen Yang
{"title":"DQ-DPS Data Partition Strategy Based on Distributed Machine Learning","authors":"Jiaming Wu, Zunhao Liu, Bowen Yang","doi":"10.1145/3460268.3460272","DOIUrl":null,"url":null,"abstract":"With the expansion of the data scale, machine learning develops from centralized to distributed. Generally, distributed machine learning uses parameter server architecture to train in synchronous mode. At this time, data samples are statically and symmetrically allocated to each computing node according to the batch size. Each worker trains synchronously and iterates until the model converges. However, due to the different number of resources at each compute node in a mixed-load scenario, the traditional data partition strategy is usually to statically configure batch size parameters or require manual setting of batch size parameters, which makes the computational efficiency of distributed machine learning model training operations inefficient, and the data adjustment for each node will have an impact on the accuracy of the model. To solve this problem, on the premise of ensuring the accuracy of the distributed machine learning model training task, this paper proposes an optimal configuration scheme for a batch size of distributed machine learning model training task data: a data partition strategy based on distributed machine learning (DQ-DPS). DQ-DPS solves the problem of low computational efficiency caused by static data partitioning, improves the computational efficiency of distributed machine learning tasks, and ensures the accuracy of distributed machine learning training model. Through a large number of experiments, we have proved the effectiveness of DQ-DPS. Compared with the traditional data partition strategy, DQ-DPS improves the computing efficiency of each training round by 38%.","PeriodicalId":215905,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Artificial Intelligence in Electronics Engineering","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 2nd International Conference on Artificial Intelligence in Electronics Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3460268.3460272","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

As data scales grow, machine learning is moving from centralized to distributed training. Distributed machine learning typically uses a parameter-server architecture trained in synchronous mode: data samples are statically and evenly allocated to each computing node according to the batch size, and every worker trains in lockstep, iterating until the model converges. However, in mixed-load scenarios the compute nodes hold different amounts of resources, while the traditional data partition strategy either statically configures the batch size or requires it to be set manually. This makes distributed model training computationally inefficient, and naive per-node data adjustment can harm model accuracy. To address this, while preserving the accuracy of the distributed training task, this paper proposes an optimal batch-size configuration scheme for distributed machine learning training data: a data partition strategy based on distributed machine learning (DQ-DPS). DQ-DPS resolves the low computational efficiency caused by static data partitioning, improves the computational efficiency of distributed machine learning tasks, and preserves the accuracy of the trained model. Extensive experiments demonstrate the effectiveness of DQ-DPS: compared with the traditional data partition strategy, it improves the computing efficiency of each training round by 38%.
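The abstract describes assigning per-node batch sizes according to each node's available resources while keeping the overall training accuracy intact. The sketch below is a minimal Python illustration of one plausible allocation rule; the proportional-to-throughput heuristic, the function name `partition_batch`, and the example numbers are assumptions made for illustration, not the scheme defined in the paper.

```python
def partition_batch(global_batch_size, worker_throughputs):
    """Split a fixed global batch across workers in proportion to their
    measured throughput (samples/second), so faster nodes receive more
    samples per synchronous iteration. Illustrative heuristic only."""
    total = sum(worker_throughputs)
    # Provisional proportional shares, rounded down.
    shares = [int(global_batch_size * t / total) for t in worker_throughputs]
    # Hand the rounding remainder to the fastest workers so the shares
    # still sum to the global batch size (the global batch, and hence the
    # gradient estimate per round, is unchanged).
    remainder = global_batch_size - sum(shares)
    fastest_first = sorted(range(len(shares)),
                           key=lambda i: worker_throughputs[i], reverse=True)
    for i in fastest_first[:remainder]:
        shares[i] += 1
    return shares

# Example: four heterogeneous workers sharing a global batch of 512.
print(partition_batch(512, [120.0, 80.0, 80.0, 40.0]))  # [192, 128, 128, 64]
```

Keeping the global batch size fixed while shifting samples between nodes is what would let a rule of this kind shorten each synchronous round without changing the statistical behavior of training, which is consistent with the accuracy guarantee the abstract claims.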