Random Walking Snakes for Decentralized Learning at Edge Networks

Alp Berke Ardic, H. Seferoglu, S. Rouayheb, Erdem Koyuncu

2023 IEEE 29th International Symposium on Local and Metropolitan Area Networks (LANMAN), 10 July 2023
DOI: 10.1109/LANMAN58293.2023.10189426
Abstract: Random walk learning (RWL) has recently attracted considerable attention thanks to its potential for reducing communication and computation over edge networks in a decentralized fashion. In RWL, a node in a graph updates a global model with its local data, selects one of its neighbors randomly, and sends it the updated global model. The selected neighbor becomes the newly activated node and updates the global model using its own local data. This process continues until convergence. Despite its promise, RWL faces two challenges: (i) training time is long, and (ii) every node must store the complete model. In this paper, we therefore design Random Walking Snakes (RWS), in which a set of nodes, rather than a single node, is activated for each model update, and each node in the set trains a part of the model. Thanks to model partitioning and parallel processing within the set of activated nodes, RWS reduces both the training time and the portion of the model that each node needs to store. We also design a novel policy that determines the set of activated nodes by taking the computing power of the nodes into account. Simulation results show that RWS significantly reduces convergence time compared to RWL.
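To make the described dynamics concrete, below is a minimal sketch of the baseline RWL loop as the abstract characterizes it. This is a toy least-squares setup, not the paper's implementation: the graph, the `local_gradient` loss, the learning rate, and all names (`rwl`, `neighbors`, `data`) are illustrative assumptions.

```python
import random
import numpy as np

def local_gradient(model, local_data):
    """Illustrative local objective: gradient of mean squared error (assumption)."""
    X, y = local_data
    return X.T @ (X @ model - y) / len(y)

def rwl(neighbors, data, model, start, steps, lr=0.01):
    """One random walk: the active node updates the full global model with
    its local data, then passes the model to a randomly chosen neighbor.

    neighbors: dict mapping node -> list of adjacent nodes
    data:      dict mapping node -> (X, y) local dataset
    """
    node = start
    for _ in range(steps):
        model = model - lr * local_gradient(model, data[node])  # local update
        node = random.choice(neighbors[node])                   # activate a neighbor
    return model

# Example: a ring of 4 nodes with synthetic local datasets (illustrative).
rng = np.random.default_rng(0)
neighbors = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
data = {i: (rng.normal(size=(8, 3)), rng.normal(size=8)) for i in range(4)}
w = rwl(neighbors, data, model=np.zeros(3), start=0, steps=200)
```

In RWS, the single walker above would be replaced by a "snake" of several simultaneously activated nodes, each storing and updating only its own partition of `model` in parallel; per the abstract, this is what reduces both training time and per-node storage.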