{"title":"Research on Network Architecture Design Based on Artificial Intelligence Application Technology","authors":"Jinge Guo","doi":"10.1145/3514105.3514124","DOIUrl":null,"url":null,"abstract":"Abstract: With the continuous development of AI technology, the training of massive data and the emergence of large-scale models have made stand-alone model training increasingly unable to meet the needs of AI applications. Distributed machine learning technologies (such as data parallelism and model parallelism) have appeared at historic moments and will have extreme large-scale application scenarios. At present, the training speed of distributed machine learning models is slow, and the scale of model parameters is still the main problem in this field. From the perspective of model parallelism, this article aims to design the optimal division method for different models under model parallelism by analyzing the structure of the existing AI application model. According to the framework structure of artificial intelligence application model, design the model optimization partition strategy and model based on parallelism. A network architecture suitable for accelerating AI application training, focusing on solving technical problems, such as network architecture design based on AI applications and model optimization and partitioning under model parallelization.","PeriodicalId":360718,"journal":{"name":"Proceedings of the 2022 9th International Conference on Wireless Communication and Sensor Networks","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 9th International Conference on Wireless Communication and Sensor Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3514105.3514124","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With the continuous development of AI technology, the training of massive data and the emergence of large-scale models have made stand-alone model training increasingly unable to meet the needs of AI applications. Distributed machine learning technologies (such as data parallelism and model parallelism) have emerged in response and will see extremely large-scale application scenarios. At present, the slow training speed of distributed machine learning models and the growing scale of model parameters remain the main problems in this field. From the perspective of model parallelism, this article aims to design an optimal partitioning method for different models under model parallelism by analyzing the structure of existing AI application models. Based on the framework structure of the artificial intelligence application model, we design a partition optimization strategy for model parallelism and a network architecture suitable for accelerating AI application training, focusing on technical problems such as network architecture design for AI applications and model optimization and partitioning under model parallelism.
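
The abstract refers to partitioning a model across devices under model parallelism. As a rough illustration only (not the partitioning strategy proposed in the paper), the sketch below shows the basic idea in PyTorch: the layers of a network are split into two stages placed on different devices, and the intermediate activation is transferred between them during the forward pass. The two-stage split and the layer sizes are arbitrary assumptions for the example, and running it requires two CUDA devices.

```python
# A minimal sketch of layer-wise model parallelism in PyTorch.
# Assumption: two CUDA devices are available; the split point is chosen arbitrarily.
import torch
import torch.nn as nn

class TwoStageModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1 is placed on the first device, stage 2 on the second.
        self.stage1 = nn.Sequential(nn.Linear(1024, 512), nn.ReLU()).to('cuda:0')
        self.stage2 = nn.Sequential(nn.Linear(512, 10)).to('cuda:1')

    def forward(self, x):
        x = self.stage1(x.to('cuda:0'))
        # Move the intermediate activation across devices before the second stage.
        return self.stage2(x.to('cuda:1'))

if __name__ == '__main__':
    model = TwoStageModel()
    out = model(torch.randn(32, 1024))
    print(out.shape)  # torch.Size([32, 10])
```

In practice, the choice of split point determines how balanced the computation and the cross-device communication are, which is the kind of trade-off a partition optimization strategy for model parallelism has to address.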