Trusted Decentralized Federated Learning

Anousheh Gholami, Nariman Torkzaban, J. Baras
{"title":"Trusted Decentralized Federated Learning","authors":"Anousheh Gholami, Nariman Torkzaban, J. Baras","doi":"10.1109/CCNC49033.2022.9700624","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) has received significant attention from both academia and industry, as an emerging paradigm for building machine learning models in a communication-efficient and privacy preserving manner. It enables potentially a massive number of resource constrained agents (e.g. mobile devices and IoT devices) to train a model by a repeated process of local training on agents and centralized model aggregation on a central server. To overcome the single-point-of-failure and scalability issues of the traditional FL frameworks, decentralized (server-less) FL has been proposed. In a decentralized FL setting, agents implement consensus techniques by exchanging local model updates. Despite bypassing the direct exchange of raw data between the collaborating agents, this scheme is still vulnerable to various security and privacy threats such as data poisoning attack.In this paper, we propose trust as a metric to measure the trustworthiness of the FL agents and thereby enhance the security of the FL training. We first elaborate on trust as a security metric by presenting a mathematical framework for trust computation and aggregation within a multi-agent system. We then discuss how this framework can be incorporated within a decentralized FL setup introducing the trusted decentralized FL algorithm. Finally, we validate our theoretical findings by means of numerical experiments.","PeriodicalId":269305,"journal":{"name":"2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC)","volume":"73 5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCNC49033.2022.9700624","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Federated learning (FL) has received significant attention from both academia and industry as an emerging paradigm for building machine learning models in a communication-efficient and privacy-preserving manner. It potentially enables a massive number of resource-constrained agents (e.g., mobile devices and IoT devices) to train a model through a repeated process of local training on the agents and centralized model aggregation on a central server. To overcome the single-point-of-failure and scalability issues of traditional FL frameworks, decentralized (server-less) FL has been proposed. In a decentralized FL setting, agents implement consensus techniques by exchanging local model updates. Although this scheme bypasses the direct exchange of raw data between collaborating agents, it is still vulnerable to various security and privacy threats such as data poisoning attacks. In this paper, we propose trust as a metric to measure the trustworthiness of the FL agents and thereby enhance the security of FL training. We first elaborate on trust as a security metric by presenting a mathematical framework for trust computation and aggregation within a multi-agent system. We then discuss how this framework can be incorporated within a decentralized FL setup, introducing the trusted decentralized FL algorithm. Finally, we validate our theoretical findings by means of numerical experiments.
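The abstract does not spell out the update rule, but a minimal sketch can illustrate the general idea it describes: each agent takes local training steps, then performs a consensus average of its neighbors' models in which each neighbor is weighted by a trust score, so that a poisoning agent assigned low trust has limited influence. Everything below is an illustrative assumption, not the authors' algorithm: the function names (local_step, trusted_average), the least-squares local objective, and the hand-fixed trust scores (in the paper, trust is itself computed and aggregated within a multi-agent framework).

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step on a least-squares loss (stand-in local objective)."""
    return w - lr * X.T @ (X @ w - y) / len(y)

def trusted_average(models, trust_row):
    """Consensus update: convex combination of models, weighted by trust."""
    weights = np.asarray(trust_row) / np.sum(trust_row)
    return sum(t * w for t, w in zip(weights, models))

def run(trust, rounds=200):
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 1))
    # Agents 0 and 1 hold honest data (true slope 2.0); agent 2 poisons.
    targets = [2.0, 2.0, -9.0]
    data = [(X, t * X[:, 0]) for t in targets]
    models = [np.zeros(1) for _ in range(3)]
    for _ in range(rounds):
        models = [local_step(w, *d) for w, d in zip(models, data)]
        models = [trusted_average(models, trust[i]) for i in range(3)]
    return models[0][0]  # an honest agent's learned slope

uniform = [[1.0, 1.0, 1.0]] * 3   # no trust information
trusted = [[1.0, 0.9, 0.05],      # honest agents assign low trust
           [0.9, 1.0, 0.05],      # to the poisoning agent
           [0.5, 0.5, 1.0]]       # (scores are assumptions)
print(f"uniform trust: {run(uniform):+.2f}, "
      f"trust-weighted: {run(trusted):+.2f}  (true slope +2.00)")
```

In this toy run the trust-weighted variant keeps the honest agents' slope estimate noticeably closer to the true value than uniform averaging, which lets the poisoner drag the consensus well away from it; the gap shrinks or grows with the trust assigned to the poisoning agent.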