{"title":"使用区块链减少监督学习的多服务器边缘计算延迟","authors":"Anubhav Bhalla","doi":"10.1109/AISC56616.2023.10085285","DOIUrl":null,"url":null,"abstract":"This research investigates a novel issue as the block chain federated learning (BFL). In this system paradigm, block mining and machine learning (ML) model training are handled concurrently by in communication through a group named as a edges servers (ESs). We create the offloading techniques that enable the MD to send the required data to the linked ESs in order to aid in the ML training for the resource and constrained MDs. Then, based on a consensus method to create peer-to-peer (P2P)-based blockchain communications, we suggest a new decentralized approach at the edge layer. In order to reduce system latency, we propose that takes into account, MD transmits the required amount of power, channel bandwidth allocation for MD data offloading, MD computational allocation, and hash power allocation. With a parameterized advantage actor critic method, we offer a unique deep reinforcement learning scheme given the mixed action space of discrete offloading and continuous allocation variables. We theoretically define the aggregation latency, mini-batch size, and number of P2P communication rounds as the convergence properties of BFL. In terms of model training effectiveness, convergence rate, system latency, and robustness against model poisoning attempts, our numerical evaluation shows that our suggested approach outperforms baselines.","PeriodicalId":408520,"journal":{"name":"2023 International Conference on Artificial Intelligence and Smart Communication (AISC)","volume":"6 2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Using Blockchain to Reduce Multi-Server Edge Computing Latencies for Supervised Learning\",\"authors\":\"Anubhav Bhalla\",\"doi\":\"10.1109/AISC56616.2023.10085285\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This research investigates a novel issue as the block chain federated learning (BFL). In this system paradigm, block mining and machine learning (ML) model training are handled concurrently by in communication through a group named as a edges servers (ESs). We create the offloading techniques that enable the MD to send the required data to the linked ESs in order to aid in the ML training for the resource and constrained MDs. Then, based on a consensus method to create peer-to-peer (P2P)-based blockchain communications, we suggest a new decentralized approach at the edge layer. In order to reduce system latency, we propose that takes into account, MD transmits the required amount of power, channel bandwidth allocation for MD data offloading, MD computational allocation, and hash power allocation. With a parameterized advantage actor critic method, we offer a unique deep reinforcement learning scheme given the mixed action space of discrete offloading and continuous allocation variables. We theoretically define the aggregation latency, mini-batch size, and number of P2P communication rounds as the convergence properties of BFL. 
In terms of model training effectiveness, convergence rate, system latency, and robustness against model poisoning attempts, our numerical evaluation shows that our suggested approach outperforms baselines.\",\"PeriodicalId\":408520,\"journal\":{\"name\":\"2023 International Conference on Artificial Intelligence and Smart Communication (AISC)\",\"volume\":\"6 2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 International Conference on Artificial Intelligence and Smart Communication (AISC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AISC56616.2023.10085285\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Artificial Intelligence and Smart Communication (AISC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AISC56616.2023.10085285","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This research investigates a novel problem called blockchain federated learning (BFL). In this system paradigm, block mining and machine learning (ML) model training are handled concurrently through communication within a group of edge servers (ESs). To aid ML training on resource-constrained mobile devices (MDs), we create offloading techniques that enable each MD to send the required data to its linked ESs. Then, based on a consensus method for creating peer-to-peer (P2P) blockchain communication, we suggest a new decentralized approach at the edge layer. To reduce system latency, we propose a joint optimization that takes into account the MDs' transmit power, the channel bandwidth allocation for MD data offloading, the MDs' computational allocation, and the hash power allocation. Given the mixed action space of discrete offloading decisions and continuous allocation variables, we offer a unique deep reinforcement learning scheme built on a parameterized advantage actor-critic method. We theoretically characterize the convergence properties of BFL in terms of aggregation latency, mini-batch size, and the number of P2P communication rounds. Our numerical evaluation shows that the suggested approach outperforms baselines in model training effectiveness, convergence rate, system latency, and robustness against model poisoning attacks.
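The joint latency minimization described above can be written schematically as follows. This is a hedged reconstruction under standard edge-offloading assumptions, not the paper's exact formulation: $x_m$ is MD $m$'s discrete ES choice, $p_m$, $b_m$, $f_m$, $h_m$ its transmit power, bandwidth, CPU frequency, and hash power, $D_m$ the offloaded data size, $C_m$ the CPU cycles needed for a local mini-batch, $g_m$ the channel gain, and $N_0$ the noise power density (all symbols are illustrative assumptions):

```latex
\min_{\mathbf{x},\,\mathbf{p},\,\mathbf{b},\,\mathbf{f},\,\mathbf{h}}
  \;\max_{m}\Big( T^{\mathrm{off}}_{m} + T^{\mathrm{train}}_{m} \Big)
  \;+\; T^{\mathrm{mine}}(\mathbf{h}),
\qquad
T^{\mathrm{off}}_{m} = \frac{D_m}{b_m \log_2\!\big(1 + p_m g_m / (N_0 b_m)\big)},
\quad
T^{\mathrm{train}}_{m} = \frac{C_m}{f_m}.
```

The per-round latency is dominated by the slowest MD's offloading-plus-training time, after which the ESs spend $T^{\mathrm{mine}}$ on P2P consensus; this is what couples the offloading decisions, the communication/computation allocations, and the hash power in a single objective.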
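To make the mixed action space concrete, below is a minimal PyTorch sketch of a parameterized advantage actor-critic: a categorical head for the discrete offloading choice, a Gaussian head for the continuous allocations, and a shared value baseline. The dimensions (STATE_DIM, NUM_ES, CONT_DIM) and the loss weights are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a parameterized advantage actor-critic for a hybrid
# action space: one discrete offloading choice (which ES an MD uses)
# plus continuous allocations (power, bandwidth, computation, hash power).
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

STATE_DIM = 16   # assumed: channel gains, queue sizes, CPU states, ...
NUM_ES = 4       # assumed number of edge servers (discrete choices)
CONT_DIM = 4     # assumed: power, bandwidth, computation, hash power

class ParamA2C(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU())
        self.discrete_head = nn.Linear(128, NUM_ES)   # offloading logits
        self.mu_head = nn.Linear(128, CONT_DIM)       # allocation means
        self.log_std = nn.Parameter(torch.zeros(CONT_DIM))
        self.value_head = nn.Linear(128, 1)           # critic baseline

    def forward(self, s):
        h = self.trunk(s)
        return (Categorical(logits=self.discrete_head(h)),
                Normal(torch.sigmoid(self.mu_head(h)), self.log_std.exp()),
                self.value_head(h).squeeze(-1))

def a2c_loss(model, s, a_d, a_c, ret):
    """One-step advantage actor-critic loss for a batch of transitions."""
    pi_d, pi_c, v = model(s)
    adv = (ret - v).detach()                          # advantage estimate
    logp = pi_d.log_prob(a_d) + pi_c.log_prob(a_c).sum(-1)
    actor_loss = -(logp * adv).mean()
    critic_loss = (ret - v).pow(2).mean()
    entropy = pi_d.entropy().mean() + pi_c.entropy().sum(-1).mean()
    return actor_loss + 0.5 * critic_loss - 0.01 * entropy

# Smoke test on random data. In a real BFL environment the return would
# be derived from the negative measured round latency, and the sampled
# continuous allocations would be clipped to their feasible ranges.
model = ParamA2C()
s = torch.randn(32, STATE_DIM)
pi_d, pi_c, _ = model(s)
a_d, a_c = pi_d.sample(), pi_c.sample()
loss = a2c_loss(model, s, a_d, a_c, ret=torch.randn(32))
loss.backward()
```

Splitting the policy into a discrete and a continuous head is the standard way to handle parameterized action spaces: both heads contribute to one joint log-probability, so a single advantage signal trains the offloading decision and the resource allocations together.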