{"title":"Trusted Decentralized Federated Learning","authors":"Anousheh Gholami, Nariman Torkzaban, J. Baras","doi":"10.1109/CCNC49033.2022.9700624","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) has received significant attention from both academia and industry, as an emerging paradigm for building machine learning models in a communication-efficient and privacy preserving manner. It enables potentially a massive number of resource constrained agents (e.g. mobile devices and IoT devices) to train a model by a repeated process of local training on agents and centralized model aggregation on a central server. To overcome the single-point-of-failure and scalability issues of the traditional FL frameworks, decentralized (server-less) FL has been proposed. In a decentralized FL setting, agents implement consensus techniques by exchanging local model updates. Despite bypassing the direct exchange of raw data between the collaborating agents, this scheme is still vulnerable to various security and privacy threats such as data poisoning attack.In this paper, we propose trust as a metric to measure the trustworthiness of the FL agents and thereby enhance the security of the FL training. We first elaborate on trust as a security metric by presenting a mathematical framework for trust computation and aggregation within a multi-agent system. We then discuss how this framework can be incorporated within a decentralized FL setup introducing the trusted decentralized FL algorithm. Finally, we validate our theoretical findings by means of numerical experiments.","PeriodicalId":269305,"journal":{"name":"2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC)","volume":"73 5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCNC49033.2022.9700624","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12
Abstract
Federated learning (FL) has received significant attention from both academia and industry as an emerging paradigm for building machine learning models in a communication-efficient and privacy-preserving manner. It potentially enables a massive number of resource-constrained agents (e.g., mobile and IoT devices) to train a model through a repeated process of local training on the agents and centralized model aggregation on a central server. To overcome the single-point-of-failure and scalability issues of traditional FL frameworks, decentralized (server-less) FL has been proposed. In a decentralized FL setting, agents implement consensus techniques by exchanging local model updates. Although this scheme avoids the direct exchange of raw data between collaborating agents, it remains vulnerable to various security and privacy threats, such as data poisoning attacks. In this paper, we propose trust as a metric for measuring the trustworthiness of FL agents and thereby enhancing the security of FL training. We first elaborate on trust as a security metric by presenting a mathematical framework for trust computation and aggregation within a multi-agent system. We then discuss how this framework can be incorporated into a decentralized FL setup, introducing the trusted decentralized FL algorithm. Finally, we validate our theoretical findings by means of numerical experiments.
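The full mathematical framework for trust computation and aggregation is given in the paper itself. As a rough illustration of the general idea only, and not the authors' actual algorithm, the sketch below shows how a trust-weighted consensus step might combine neighbors' model updates in a decentralized FL round, so that updates from low-trust (potentially poisoning) agents contribute less. The function name, trust scores, and neighbor set are all hypothetical.

```python
import numpy as np

def trust_weighted_step(own, neighbors, trust):
    """Hypothetical trust-weighted consensus step for decentralized FL.

    own       : np.ndarray          -- this agent's parameter vector
    neighbors : dict[str, ndarray]  -- parameter vectors received from peers
    trust     : dict[str, float]    -- trust score in [0, 1] per peer
    """
    acc = own.copy()                  # the agent trusts itself fully (weight 1.0)
    total = 1.0
    for peer_id, params in neighbors.items():
        w = trust.get(peer_id, 0.0)   # unknown peers get zero weight
        acc += w * params
        total += w
    return acc / total                # convex combination of local models

# Example: two neighbors, one of which behaves suspiciously (low trust)
own = np.array([1.0, 2.0])
neighbors = {"a": np.array([1.5, 1.5]), "b": np.array([10.0, -10.0])}
trust = {"a": 0.9, "b": 0.1}
print(trust_weighted_step(own, neighbors, trust))  # pulled mostly toward "a"
```

With trust scores near 1 this step reduces to ordinary consensus averaging; as an agent's score drops toward 0, its influence on its neighbors' models vanishes, which is the intuition behind using trust to harden decentralized FL against data poisoning.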