Using Blockchain to Reduce Multi-Server Edge Computing Latencies for Supervised Learning

Anubhav Bhalla
{"title":"Using Blockchain to Reduce Multi-Server Edge Computing Latencies for Supervised Learning","authors":"Anubhav Bhalla","doi":"10.1109/AISC56616.2023.10085285","DOIUrl":null,"url":null,"abstract":"This research investigates a novel issue as the block chain federated learning (BFL). In this system paradigm, block mining and machine learning (ML) model training are handled concurrently by in communication through a group named as a edges servers (ESs). We create the offloading techniques that enable the MD to send the required data to the linked ESs in order to aid in the ML training for the resource and constrained MDs. Then, based on a consensus method to create peer-to-peer (P2P)-based blockchain communications, we suggest a new decentralized approach at the edge layer. In order to reduce system latency, we propose that takes into account, MD transmits the required amount of power, channel bandwidth allocation for MD data offloading, MD computational allocation, and hash power allocation. With a parameterized advantage actor critic method, we offer a unique deep reinforcement learning scheme given the mixed action space of discrete offloading and continuous allocation variables. We theoretically define the aggregation latency, mini-batch size, and number of P2P communication rounds as the convergence properties of BFL. In terms of model training effectiveness, convergence rate, system latency, and robustness against model poisoning attempts, our numerical evaluation shows that our suggested approach outperforms baselines.","PeriodicalId":408520,"journal":{"name":"2023 International Conference on Artificial Intelligence and Smart Communication (AISC)","volume":"6 2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Artificial Intelligence and Smart Communication (AISC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AISC56616.2023.10085285","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

This research investigates an emerging problem, blockchain-based federated learning (BFL). In this system paradigm, block mining and machine learning (ML) model training are handled concurrently by a group of communicating edge servers (ESs). To aid ML training on resource-constrained mobile devices (MDs), we design offloading techniques that enable each MD to send the required data to its linked ESs. We then propose a new decentralized approach at the edge layer, in which a consensus method establishes peer-to-peer (P2P) blockchain communication among the ESs. To reduce system latency, we formulate an optimization problem that jointly takes into account MD transmit power, channel bandwidth allocation for MD data offloading, MD computational resource allocation, and hash power allocation. Given the mixed action space of discrete offloading and continuous allocation variables, we develop a deep reinforcement learning scheme based on a parameterized advantage actor-critic method. We theoretically characterize the convergence properties of BFL in terms of aggregation latency, mini-batch size, and the number of P2P communication rounds. Our numerical evaluation shows that the proposed approach outperforms baselines in model training effectiveness, convergence rate, system latency, and robustness against model poisoning attacks.
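The latency objective couples all four groups of decision variables. A minimal sketch of one plausible reading of that formulation (the notation below is ours, not taken from the paper): per-round system latency is the slowest MD's offloading-plus-computation time, plus the block-mining time at the ESs,

```latex
\min_{\mathbf{x},\,\mathbf{p},\,\mathbf{b},\,\mathbf{f},\,\mathbf{h}}\;
T_{\mathrm{sys}}
\;=\;
\max_{m \in \mathcal{M}}
\Bigl( T^{\mathrm{off}}_{m}(x_m, p_m, b_m) + T^{\mathrm{cmp}}_{m}(f_m) \Bigr)
\;+\; T^{\mathrm{mine}}(\mathbf{h})
```

where $x_m$ is the discrete offloading decision of MD $m$, $p_m$ its transmit power, $b_m$ its channel bandwidth share, $f_m$ its computational allocation, and $\mathbf{h}$ the hash power allocated to mining. The mixed discrete/continuous structure of $(\mathbf{x}, \mathbf{p}, \mathbf{b}, \mathbf{f}, \mathbf{h})$ is what motivates the parameterized actor-critic below.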
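Below is a minimal sketch, assuming PyTorch, of how such a parameterized advantage actor-critic over a mixed action space can be structured: a categorical head for the discrete offloading choice and a Gaussian head for the continuous allocations. Layer sizes, names, and the reward convention are illustrative assumptions, not the paper's architecture.

```python
# A minimal sketch (assuming PyTorch) of a parameterized advantage actor-critic
# over a mixed action space: one discrete offloading choice per step plus a
# vector of continuous allocations (power, bandwidth, CPU, hash power).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParameterizedActorCritic(nn.Module):
    def __init__(self, state_dim: int, n_offload_choices: int, n_alloc_vars: int):
        super().__init__()
        # Shared encoder for the observed system state (e.g. channel gains, queues).
        self.encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        # Discrete head: logits over offloading targets (which ES, or local).
        self.offload_head = nn.Linear(128, n_offload_choices)
        # Continuous head: Gaussian mean in (0, 1), to be scaled to each budget.
        self.alloc_mean = nn.Sequential(nn.Linear(128, n_alloc_vars), nn.Sigmoid())
        self.alloc_log_std = nn.Parameter(torch.zeros(n_alloc_vars))
        # Critic: state-value estimate used to form the advantage.
        self.value_head = nn.Linear(128, 1)

    def forward(self, state):
        h = self.encoder(state)
        offload = torch.distributions.Categorical(logits=self.offload_head(h))
        alloc = torch.distributions.Normal(self.alloc_mean(h),
                                           self.alloc_log_std.exp())
        return offload, alloc, self.value_head(h).squeeze(-1)

def a2c_loss(model, state, offload_action, alloc_action, empirical_return):
    """One advantage actor-critic update term for a batch of transitions."""
    offload, alloc, value = model(state)
    advantage = (empirical_return - value).detach()
    # The joint log-probability factorizes across the discrete and
    # continuous parts of the parameterized action.
    log_prob = offload.log_prob(offload_action) + alloc.log_prob(alloc_action).sum(-1)
    policy_loss = -(log_prob * advantage).mean()
    value_loss = F.mse_loss(value, empirical_return)
    return policy_loss + 0.5 * value_loss
```

In this setting a natural reward choice is the negative of the per-round system latency sketched above, so maximizing return and minimizing latency coincide.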