{"title":"分布式环境下变压器负载均衡优化","authors":"Delu Ma, Zhou Lei, Shengbo Chen, Peng-Cheng Wang","doi":"10.1109/ICPADS53394.2021.00109","DOIUrl":null,"url":null,"abstract":"In recent years, the demand for artificial intelligence applications has increased dramatically. Complex models can promote machine learning to achieve excellent results, but computing efficiency has gradually reached a bottleneck. Therefore, more researchers are exploring the improvement of the efficiency of intelligent computing systems. Distributed machine learning can improve the efficiency of model training and inference, but problems such as communication delay and load imbalance between computing nodes still exist. In the multi-GPU distributed computing environment, this paper takes the vision field algorithm VIT (vision transformer) as the optimization object, which has the advantage of convenient parallel training, and proposes several related solutions. Firstly, the parameter server is used as the system logic architecture and in order to reduce the idleness of the computing devices during the training process, the device working status query mechanism is designed to realize load balancing. Secondly, combined with the pre-trained small VIT algorithm model, semi-asynchronous communication method is proposed to reduce the communication overhead of computing devices and accelerate global convergence. The results of this experiment carried out in the existing distributed environment has demonstrated that compared with the existing synchronization method, the computational efficiency has been improved well under the premise of slightly reducing the accuracy.","PeriodicalId":309508,"journal":{"name":"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)","volume":"45 3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Load Balancing Optimization for Transformer in Distributed Environment\",\"authors\":\"Delu Ma, Zhou Lei, Shengbo Chen, Peng-Cheng Wang\",\"doi\":\"10.1109/ICPADS53394.2021.00109\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, the demand for artificial intelligence applications has increased dramatically. Complex models can promote machine learning to achieve excellent results, but computing efficiency has gradually reached a bottleneck. Therefore, more researchers are exploring the improvement of the efficiency of intelligent computing systems. Distributed machine learning can improve the efficiency of model training and inference, but problems such as communication delay and load imbalance between computing nodes still exist. In the multi-GPU distributed computing environment, this paper takes the vision field algorithm VIT (vision transformer) as the optimization object, which has the advantage of convenient parallel training, and proposes several related solutions. Firstly, the parameter server is used as the system logic architecture and in order to reduce the idleness of the computing devices during the training process, the device working status query mechanism is designed to realize load balancing. Secondly, combined with the pre-trained small VIT algorithm model, semi-asynchronous communication method is proposed to reduce the communication overhead of computing devices and accelerate global convergence. 
The results of this experiment carried out in the existing distributed environment has demonstrated that compared with the existing synchronization method, the computational efficiency has been improved well under the premise of slightly reducing the accuracy.\",\"PeriodicalId\":309508,\"journal\":{\"name\":\"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)\",\"volume\":\"45 3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPADS53394.2021.00109\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPADS53394.2021.00109","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Load Balancing Optimization for Transformer in Distributed Environment
In recent years, demand for artificial intelligence applications has increased dramatically. Complex models enable machine learning to achieve excellent results, but computational efficiency has gradually become a bottleneck, so a growing number of researchers are exploring ways to improve the efficiency of intelligent computing systems. Distributed machine learning can speed up model training and inference, but problems such as communication delay and load imbalance between computing nodes remain. In a multi-GPU distributed computing environment, this paper takes the Vision Transformer (ViT), a vision model well suited to parallel training, as its optimization target and proposes several related solutions. First, a parameter server is adopted as the system's logical architecture, and a device working-status query mechanism is designed to achieve load balancing and reduce device idle time during training. Second, combined with a pre-trained small ViT model, a semi-asynchronous communication method is proposed to reduce the communication overhead of the computing devices and accelerate global convergence. Experiments carried out in an existing distributed environment demonstrate that, compared with the existing synchronous method, computational efficiency is substantially improved at the cost of a slight reduction in accuracy.
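The abstract does not give implementation details for the working-status query mechanism. The following is a minimal, hypothetical Python sketch of one way a parameter server could poll device status and hand each batch to an idle device; threads stand in for GPUs, and every name and the polling scheme are assumptions, not the authors' code:

```python
import random
import threading
import time
from queue import Queue

class ParameterServer:
    """Toy parameter server: tracks each worker's status and hands out
    batches only to workers that report themselves idle."""

    def __init__(self, num_workers: int):
        self._status = {w: "idle" for w in range(num_workers)}
        self._lock = threading.Lock()

    def query_idle(self):
        """The working-status query: list workers not currently busy."""
        with self._lock:
            return [w for w, s in self._status.items() if s == "idle"]

    def mark(self, worker: int, state: str):
        with self._lock:
            self._status[worker] = state

def dispatcher(ps, worker_queues, batches):
    """Assign each batch to the first idle worker found by the query."""
    for batch in batches:
        while True:
            idle = ps.query_idle()
            if idle:
                wid = idle[0]
                ps.mark(wid, "busy")
                worker_queues[wid].put(batch)
                break
            time.sleep(0.01)  # poll again; no device is free yet

def worker_loop(wid, ps, q):
    while True:
        batch = q.get()
        if batch is None:  # sentinel: no more work
            break
        time.sleep(random.uniform(0.05, 0.2))  # simulate uneven batch times
        ps.mark(wid, "idle")  # report readiness for the next batch

if __name__ == "__main__":
    NUM_WORKERS, NUM_BATCHES = 4, 20
    ps = ParameterServer(NUM_WORKERS)
    queues = [Queue() for _ in range(NUM_WORKERS)]
    threads = [threading.Thread(target=worker_loop, args=(w, ps, queues[w]))
               for w in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    dispatcher(ps, queues, range(NUM_BATCHES))
    for q in queues:
        q.put(None)
    for t in threads:
        t.join()
```

Because batches go only to devices that have reported idle, a fast device naturally processes more batches than a slow one, which is the load-balancing effect the abstract describes.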
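Likewise, the abstract does not specify the semi-asynchronous protocol. One common reading of "semi-asynchronous" is bounded staleness: workers push updates without waiting for each other, but no worker may run too far ahead of the slowest one. The sketch below illustrates that generic idea under stated assumptions (all names and the staleness bound are hypothetical):

```python
import random
import threading

class SemiAsyncServer:
    """Toy semi-asynchronous aggregator with bounded staleness: a worker
    blocks only when it is MAX_STALENESS steps ahead of the slowest worker."""

    MAX_STALENESS = 2

    def __init__(self, num_workers: int, dim: int):
        self.weights = [0.0] * dim
        self.steps = [0] * num_workers  # local step count per worker
        self.cond = threading.Condition()

    def push_and_pull(self, wid, grad, lr=0.1):
        with self.cond:
            # A fully synchronous scheme would wait for *all* workers each
            # step; here we wait only if this worker is too far ahead.
            while self.steps[wid] - min(self.steps) >= self.MAX_STALENESS:
                self.cond.wait()
            self.weights = [w - lr * g for w, g in zip(self.weights, grad)]
            self.steps[wid] += 1
            self.cond.notify_all()
            return list(self.weights)

def run_worker(server, wid, steps=5):
    for _ in range(steps):
        grad = [random.uniform(-1, 1) for _ in server.weights]  # fake gradient
        server.push_and_pull(wid, grad)

if __name__ == "__main__":
    srv = SemiAsyncServer(num_workers=3, dim=4)
    ts = [threading.Thread(target=run_worker, args=(srv, w)) for w in range(3)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    print("final weights:", srv.weights)
```

With MAX_STALENESS = 0 this degenerates to a fully synchronous scheme; larger values reduce waiting at the cost of applying slightly stale gradients, which is consistent with the speed-versus-accuracy trade-off the abstract reports.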