Efficient Parameter Server Placement for Distributed Deep Learning in Edge Computing
Authors: Yalan Wu; Jiaquan Yan; Long Chen; Jigang Wu; Yidong Li
Journal: The Computer Journal, vol. 66, no. 3, pp. 678-691
DOI: 10.1093/comjnl/bxab188
Publication date: 2021-10-01
Abstract
Parameter server (PS) placement is one of the most important factors in training a global model with distributed deep learning. This paper formulates a novel problem for the placement strategy of PSs under dynamically available storage capacity, with the objective of minimizing the training time of distributed deep learning subject to constraints on storage capacity and the number of local PSs. We then prove that the proposed problem is NP-hard. The training epochs are divided into two parts: the first epoch and the remaining epochs. For the first epoch, an approximation algorithm and a rounding algorithm are proposed to solve the problem. For the remaining epochs, an adjustment algorithm is proposed that continuously adjusts the PS placement decisions to reduce the training time of the global model. Simulation results show that the proposed approximation and rounding algorithms outperform existing works in all cases in terms of the training time of the global model. Moreover, the training time achieved by the proposed approximation algorithm is very close to that of the optimal solution produced by a brute-force approach in all cases. In addition, the integrated algorithm outperforms existing works when the available storage capacity varies during training.
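To make the shape of the placement problem concrete, the following is a minimal, hypothetical Python sketch. It is not the paper's approximation, rounding, or adjustment algorithm; it only illustrates the decision structure described in the abstract: choose at most a fixed number of edge nodes to host PS shards so that an estimated per-epoch time is minimized while every shard fits within each hosting node's available storage. All names, the greedy strategy, and the cost model (the epoch waits for the slowest shard) are assumptions made for illustration.

```python
# Hypothetical sketch of capacity-constrained PS placement (illustration only;
# not the paper's algorithm). Greedily add the PS that most reduces an
# estimated epoch time, respecting storage capacity and the cap on local PSs.

from dataclasses import dataclass
from typing import List


@dataclass
class EdgeNode:
    node_id: int
    storage_capacity: float   # storage available for a PS shard (e.g. GB)
    bandwidth: float          # aggregate bandwidth to the workers (e.g. Gb/s)


def estimate_epoch_time(placement: List[EdgeNode], model_size: float) -> float:
    """Assumed cost model: the model is sharded evenly across the chosen PSs,
    and synchronization in each epoch waits for the slowest shard."""
    if not placement:
        return float("inf")
    shard_size = model_size / len(placement)
    return max(shard_size / node.bandwidth for node in placement)


def greedy_ps_placement(nodes: List[EdgeNode], model_size: float,
                        max_ps: int) -> List[EdgeNode]:
    """Greedily pick PS hosts, enforcing the storage and PS-count constraints."""
    chosen: List[EdgeNode] = []
    candidates = list(nodes)
    while len(chosen) < max_ps and candidates:
        best_node, best_time = None, estimate_epoch_time(chosen, model_size)
        for node in candidates:
            trial = chosen + [node]
            # every shard must still fit on every hosting node
            shard_size = model_size / len(trial)
            if any(shard_size > n.storage_capacity for n in trial):
                continue
            t = estimate_epoch_time(trial, model_size)
            if t < best_time:
                best_node, best_time = node, t
        if best_node is None:
            break  # no feasible improvement left
        chosen.append(best_node)
        candidates.remove(best_node)
    return chosen


if __name__ == "__main__":
    nodes = [EdgeNode(0, 4.0, 1.0), EdgeNode(1, 2.0, 2.0), EdgeNode(2, 8.0, 0.5)]
    placement = greedy_ps_placement(nodes, model_size=6.0, max_ps=2)
    print([n.node_id for n in placement])
```

The paper itself solves the first epoch with an approximation algorithm plus a rounding algorithm, and then adjusts placements in later epochs as the available storage changes; the greedy sketch above is only meant to make the constraint structure tangible.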
About the Journal
The Computer Journal is one of the longest-established journals serving all branches of the academic computer science community. It is currently published in four sections.