Impact of Selfish Nodes on Federated Learning Performances

Boukhatem Youssef, Dargoul Jabir, A. Kobbane
{"title":"自私节点对联邦学习性能的影响","authors":"Boukhatem Youssef, Dargoul Jabir, A. Kobbane","doi":"10.1109/WINCOM55661.2022.9966471","DOIUrl":null,"url":null,"abstract":"Federated Learning is an artificial intelligence (AI) method that brings a new approach to ensure a high level of confiden-tiality during model creation. In federated Learning, instead of sending the local private data by the nodes to a central server, the server will initiate the process by sending the initial global model to the participants nodes that will perform the training phase based on their data locally and send back the results (Weights of the model) to the server for the aggregation, finally the server updates the global model based on this aggregation to make the model more efficient and send it to the nodes. Sometimes, it can be some selfish nodes injected into the partic-ipants, these selfish nodes send faulty results to the server, if the number of these nodes is minimum, the impact can be negligible, but if there are many of these selfish nodes, it deceives and pushes the whole global model to make false results. Hence, that will impact the other honest node's performances, since the server sends the model updates to them. In this research, we will focus on this problem and we will demonstrate how these selfish nodes can impact the performance of the global model completely, also we will propose a system model that can detect and eliminate them from the participants list.","PeriodicalId":128342,"journal":{"name":"2022 9th International Conference on Wireless Networks and Mobile Communications (WINCOM)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Impact of Selfish Nodes on Federated Learning Performances\",\"authors\":\"Boukhatem Youssef, Dargoul Jabir, A. Kobbane\",\"doi\":\"10.1109/WINCOM55661.2022.9966471\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning is an artificial intelligence (AI) method that brings a new approach to ensure a high level of confiden-tiality during model creation. In federated Learning, instead of sending the local private data by the nodes to a central server, the server will initiate the process by sending the initial global model to the participants nodes that will perform the training phase based on their data locally and send back the results (Weights of the model) to the server for the aggregation, finally the server updates the global model based on this aggregation to make the model more efficient and send it to the nodes. Sometimes, it can be some selfish nodes injected into the partic-ipants, these selfish nodes send faulty results to the server, if the number of these nodes is minimum, the impact can be negligible, but if there are many of these selfish nodes, it deceives and pushes the whole global model to make false results. Hence, that will impact the other honest node's performances, since the server sends the model updates to them. 
In this research, we will focus on this problem and we will demonstrate how these selfish nodes can impact the performance of the global model completely, also we will propose a system model that can detect and eliminate them from the participants list.\",\"PeriodicalId\":128342,\"journal\":{\"name\":\"2022 9th International Conference on Wireless Networks and Mobile Communications (WINCOM)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 9th International Conference on Wireless Networks and Mobile Communications (WINCOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WINCOM55661.2022.9966471\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 9th International Conference on Wireless Networks and Mobile Communications (WINCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WINCOM55661.2022.9966471","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning is an artificial intelligence (AI) technique that offers a new way to preserve a high level of confidentiality during model creation. Instead of the nodes sending their local private data to a central server, the server initiates the process by sending an initial global model to the participant nodes. Each node performs the training phase locally on its own data and returns the results (the model weights) to the server for aggregation; the server then updates the global model from this aggregation to improve it and sends the new version back to the nodes. Sometimes selfish nodes are injected among the participants and send faulty results to the server. If there are only a few such nodes, their impact can be negligible, but when they are numerous they mislead the aggregation and push the whole global model toward false results. This in turn degrades the performance of the honest nodes, since the server forwards the corrupted model updates to them. In this work, we focus on this problem, demonstrate how selfish nodes can severely degrade the performance of the global model, and propose a system model that can detect them and eliminate them from the participant list.
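For concreteness, the training loop described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes NumPy weight vectors, a linear least-squares model as a stand-in for each client's local model, and hypothetical helper names such as local_update and federated_round.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """A client's local training phase: a few gradient-descent steps on a
    linear least-squares model, standing in for any local model."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """One round of the loop described above: every participant trains
    locally and returns only its weights; the server averages them
    (weighted by local data size) to update the global model."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))

# Toy setup: three honest clients whose private data follows the same
# underlying linear model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print("global model after 20 rounds:", w)   # close to [2.0, -1.0]
```

In each round only the weights travel between clients and server; the raw data never leaves a node, which is the confidentiality property the abstract emphasizes.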
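The abstract does not specify how selfish participants are detected, so the following sketch only illustrates one common baseline idea: treat updates that lie far from the consensus of the other participants as suspicious and exclude them from aggregation. The function name filter_selfish_updates and the MAD-based outlier rule are assumptions for illustration, not necessarily the detection scheme proposed in the paper.

```python
import numpy as np

def filter_selfish_updates(updates, z_thresh=2.5):
    """Hypothetical server-side filter: compute each update's distance to the
    coordinate-wise median update, turn the distances into robust (MAD-based)
    scores, and drop updates whose score exceeds the threshold before
    aggregating."""
    W = np.stack(updates)                         # (num_clients, dim)
    dists = np.linalg.norm(W - np.median(W, axis=0), axis=1)
    dev = np.abs(dists - np.median(dists))
    mad = np.median(dev) + 1e-12                  # avoid division by zero
    keep = dev / mad < z_thresh
    return [u for u, k in zip(updates, keep) if k], keep

# Four honest clients report weights near the true model; one selfish client
# reports an arbitrary, faulty update.
updates = [np.array([2.01, -0.98]), np.array([1.97, -1.03]),
           np.array([2.04, -1.01]), np.array([1.99, -0.95]),
           np.array([50.0, 40.0])]                # selfish participant
kept, mask = filter_selfish_updates(updates)
print("kept participants:", mask)                 # last update flagged and removed
print("robust aggregate:", np.mean(np.stack(kept), axis=0))
```

With only one selfish participant the plain average already drifts noticeably; filtering it out before aggregation keeps the global model close to the honest consensus, which is the behaviour the proposed system model aims for.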