{"title":"自私节点对联邦学习性能的影响","authors":"Boukhatem Youssef, Dargoul Jabir, A. Kobbane","doi":"10.1109/WINCOM55661.2022.9966471","DOIUrl":null,"url":null,"abstract":"Federated Learning is an artificial intelligence (AI) method that brings a new approach to ensure a high level of confiden-tiality during model creation. In federated Learning, instead of sending the local private data by the nodes to a central server, the server will initiate the process by sending the initial global model to the participants nodes that will perform the training phase based on their data locally and send back the results (Weights of the model) to the server for the aggregation, finally the server updates the global model based on this aggregation to make the model more efficient and send it to the nodes. Sometimes, it can be some selfish nodes injected into the partic-ipants, these selfish nodes send faulty results to the server, if the number of these nodes is minimum, the impact can be negligible, but if there are many of these selfish nodes, it deceives and pushes the whole global model to make false results. Hence, that will impact the other honest node's performances, since the server sends the model updates to them. In this research, we will focus on this problem and we will demonstrate how these selfish nodes can impact the performance of the global model completely, also we will propose a system model that can detect and eliminate them from the participants list.","PeriodicalId":128342,"journal":{"name":"2022 9th International Conference on Wireless Networks and Mobile Communications (WINCOM)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Impact of Selfish Nodes on Federated Learning Performances\",\"authors\":\"Boukhatem Youssef, Dargoul Jabir, A. Kobbane\",\"doi\":\"10.1109/WINCOM55661.2022.9966471\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning is an artificial intelligence (AI) method that brings a new approach to ensure a high level of confiden-tiality during model creation. In federated Learning, instead of sending the local private data by the nodes to a central server, the server will initiate the process by sending the initial global model to the participants nodes that will perform the training phase based on their data locally and send back the results (Weights of the model) to the server for the aggregation, finally the server updates the global model based on this aggregation to make the model more efficient and send it to the nodes. Sometimes, it can be some selfish nodes injected into the partic-ipants, these selfish nodes send faulty results to the server, if the number of these nodes is minimum, the impact can be negligible, but if there are many of these selfish nodes, it deceives and pushes the whole global model to make false results. Hence, that will impact the other honest node's performances, since the server sends the model updates to them. 
In this research, we will focus on this problem and we will demonstrate how these selfish nodes can impact the performance of the global model completely, also we will propose a system model that can detect and eliminate them from the participants list.\",\"PeriodicalId\":128342,\"journal\":{\"name\":\"2022 9th International Conference on Wireless Networks and Mobile Communications (WINCOM)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 9th International Conference on Wireless Networks and Mobile Communications (WINCOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WINCOM55661.2022.9966471\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 9th International Conference on Wireless Networks and Mobile Communications (WINCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WINCOM55661.2022.9966471","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Impact of Selfish Nodes on Federated Learning Performances
Federated Learning is an artificial intelligence (AI) method that offers a new approach to ensuring a high level of confidentiality during model creation. In Federated Learning, instead of the nodes sending their local private data to a central server, the server initiates the process by sending an initial global model to the participant nodes. Each node performs the training phase locally on its own data and returns the results (the model weights) to the server for aggregation; the server then updates the global model based on this aggregation to make the model more efficient and sends it back to the nodes. Sometimes selfish nodes are injected among the participants and send faulty results to the server. If the number of such nodes is small, their impact can be negligible, but if there are many of them, they deceive the aggregation and push the whole global model toward false results. This in turn degrades the performance of the honest nodes, since the server distributes the corrupted model updates to them. In this research, we focus on this problem, demonstrate how these selfish nodes can severely impact the performance of the global model, and propose a system model that can detect them and eliminate them from the participant list.
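To make the aggregation step and the effect of faulty updates concrete, below is a minimal sketch (not the authors' implementation) that simulates FedAvg-style averaging when a fraction of participants return arbitrary weights, together with a crude median-based outlier filter as one possible detection heuristic. All names and parameters (num_nodes, the 40% selfish fraction, the filtering threshold) are illustrative assumptions.

```python
# Sketch only: toy federated averaging with selfish nodes sending faulty weights,
# plus a simple outlier filter. Parameters and the filtering rule are assumptions,
# not the detection system proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, selfish):
    """Return a node's locally trained weights.
    Honest nodes apply a small consistent improvement step;
    selfish nodes return arbitrary (faulty) weights instead."""
    if selfish:
        return rng.normal(0.0, 10.0, size=global_weights.shape)  # garbage update
    return global_weights + rng.normal(0.01, 0.005, size=global_weights.shape)

def federated_round(global_weights, selfish_mask, filter_outliers=False):
    updates = np.stack([local_update(global_weights, s) for s in selfish_mask])
    if filter_outliers:
        # Keep only updates close to the coordinate-wise median
        # (a crude robust-aggregation heuristic, assumed for illustration).
        median = np.median(updates, axis=0)
        dist = np.linalg.norm(updates - median, axis=1)
        updates = updates[dist < 2 * np.median(dist)]
    return updates.mean(axis=0)  # FedAvg-style aggregation with equal weights

num_nodes, dim, rounds = 20, 5, 10
selfish_mask = rng.random(num_nodes) < 0.4  # assume 40% selfish participants

w_naive = np.zeros(dim)
w_filtered = np.zeros(dim)
for _ in range(rounds):
    w_naive = federated_round(w_naive, selfish_mask, filter_outliers=False)
    w_filtered = federated_round(w_filtered, selfish_mask, filter_outliers=True)

print("norm of naively aggregated global model:", np.linalg.norm(w_naive))
print("norm of filtered global model:          ", np.linalg.norm(w_filtered))
```

Running this shows the naive average being pulled far from the honest nodes' trajectory while the filtered average stays close to it, which is the qualitative behaviour the abstract describes when many selfish nodes participate.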