A communication-efficient model of sparse neural network for distributed intelligence
Yiqiang Sheng, Jinlin Wang, Zhenyu Zhao
2016 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2016-04-10
DOI: 10.1109/INFCOMW.2016.7562131
In this paper, we propose a communication-efficient sparse bidirectional neural network model for intelligently processing distributed data. The basic idea of the proposal is a modified bidirectional exchange of model parameters between the core and the edge of the Internet. We investigate the formulation and procedures of the proposal. In theory, we prove that the proposed neural network is sparse, whereas a typical neural network is dense. In practice, we implement the proposal on a computer cluster with a tree topology consisting of one core machine and M edge machines, where M is the number of distributed datasets. The MNIST image database is split into M parts across the edge machines to simulate distributed datasets from the Internet of Things. Simulations show that the communication cost is greatly reduced at the same level of accuracy compared with the state-of-the-art model. More importantly, exchanging model parameters between the core machine and the edge machines, instead of the original data, is inherently more secure and private.
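The core-edge communication by model parameters described in the abstract can be sketched as a minimal simulation. This is an illustrative federated-style parameter-averaging round, not the paper's exact algorithm: the least-squares local model, learning rate, shard sizes, and `local_update` helper are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4   # number of edge machines / distributed dataset shards
D = 10  # parameter dimension

def local_update(params, shard, targets, lr=0.1, steps=5):
    """Hypothetical edge-side step: refine the shared parameters on a
    local data shard and return only the updated parameters."""
    w = params.copy()
    for _ in range(steps):
        grad = shard.T @ (shard @ w - targets) / len(targets)  # least-squares gradient
        w -= lr * grad
    return w

# Simulated distributed shards (stand-ins for the M MNIST splits).
shards = [rng.normal(size=(20, D)) for _ in range(M)]
targets = [s @ np.ones(D) for s in shards]  # synthetic regression targets

# One round of bidirectional core<->edge communication by model parameters:
# the core broadcasts parameters, each edge trains locally, the core aggregates.
core_params = np.zeros(D)
edge_params = [local_update(core_params, s, y) for s, y in zip(shards, targets)]
core_params = np.mean(edge_params, axis=0)  # core aggregates the edge parameters
# The core would then broadcast core_params back to the edges;
# the original data never leaves the edge machines.
```

Only the length-D parameter vector crosses the network in each direction, which is the source of both the communication saving and the privacy property the abstract claims: the raw data shards stay on the edge machines.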