DEFEAT: A decentralized federated learning against gradient attacks

Guangxi Lu, Zuobin Xiong, Ruinian Li, Nael Mohammad, Yingshu Li, Wei Li
{"title":"DEFEAT:一种针对梯度攻击的去中心化联合学习","authors":"Guangxi Lu , Zuobin Xiong , Ruinian Li , Nael Mohammad , Yingshu Li , Wei Li","doi":"10.1016/j.hcc.2023.100128","DOIUrl":null,"url":null,"abstract":"<div><p>As one of the most promising machine learning frameworks emerging in recent years, Federated learning (FL) has received lots of attention. The main idea of centralized FL is to train a global model by aggregating local model parameters and maintain the private data of users locally. However, recent studies have shown that traditional centralized federated learning is vulnerable to various attacks, such as gradient attacks, where a malicious server collects local model gradients and uses them to recover the private data stored on the client. In this paper, we propose a decentralized federated learning against aTtacks (DEFEAT) framework and use it to defend the gradient attack. The decentralized structure adopted by this paper uses a peer-to-peer network to transmit, aggregate, and update local models. In DEFEAT, the participating clients only need to communicate with their single-hop neighbors to learn the global model, in which the model accuracy and communication cost during the training process of DEFEAT are well balanced. Through a series of experiments and detailed case studies on real datasets, we evaluate the excellent model performance of DEFEAT and the privacy preservation capability against gradient attacks.</p></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"3 3","pages":"Article 100128"},"PeriodicalIF":3.2000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"DEFEAT: A decentralized federated learning against gradient attacks\",\"authors\":\"Guangxi Lu , Zuobin Xiong , Ruinian Li , Nael Mohammad , Yingshu Li , Wei Li\",\"doi\":\"10.1016/j.hcc.2023.100128\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>As one of the most promising machine learning frameworks emerging in recent years, Federated learning (FL) has received lots of attention. The main idea of centralized FL is to train a global model by aggregating local model parameters and maintain the private data of users locally. However, recent studies have shown that traditional centralized federated learning is vulnerable to various attacks, such as gradient attacks, where a malicious server collects local model gradients and uses them to recover the private data stored on the client. In this paper, we propose a decentralized federated learning against aTtacks (DEFEAT) framework and use it to defend the gradient attack. The decentralized structure adopted by this paper uses a peer-to-peer network to transmit, aggregate, and update local models. In DEFEAT, the participating clients only need to communicate with their single-hop neighbors to learn the global model, in which the model accuracy and communication cost during the training process of DEFEAT are well balanced. 
Through a series of experiments and detailed case studies on real datasets, we evaluate the excellent model performance of DEFEAT and the privacy preservation capability against gradient attacks.</p></div>\",\"PeriodicalId\":100605,\"journal\":{\"name\":\"High-Confidence Computing\",\"volume\":\"3 3\",\"pages\":\"Article 100128\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"High-Confidence Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667295223000260\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"High-Confidence Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667295223000260","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
DEFEAT: A decentralized federated learning against gradient attacks
As one of the most promising machine learning frameworks to emerge in recent years, federated learning (FL) has received significant attention. The main idea of centralized FL is to train a global model by aggregating local model parameters while keeping users' private data on their own devices. However, recent studies have shown that traditional centralized federated learning is vulnerable to various attacks, such as gradient attacks, in which a malicious server collects local model gradients and uses them to recover the private data stored on the clients. In this paper, we propose a decentralized federated learning against attacks (DEFEAT) framework and use it to defend against gradient attacks. The decentralized structure adopted in this paper uses a peer-to-peer network to transmit, aggregate, and update local models. In DEFEAT, each participating client only needs to communicate with its single-hop neighbors to learn the global model, which keeps model accuracy and communication cost well balanced during training. Through a series of experiments and detailed case studies on real datasets, we evaluate DEFEAT's model performance and its ability to preserve privacy against gradient attacks.
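To make the decentralized, single-hop-neighbor aggregation idea concrete, the following is a minimal illustrative sketch of peer-to-peer federated averaging. The ring topology, uniform mixing weights, toy linear model, and learning rate are all assumptions made here for illustration; they are not the paper's exact design or algorithm.

```python
# Minimal sketch (assumptions noted above): each client trains on its own private
# data and then averages parameters only with its single-hop neighbors, so no
# central server ever sees local gradients.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 8
DIM = 5          # size of the (toy) model parameter vector
ROUNDS = 50
LR = 0.1

# Hypothetical ring topology: client i talks only to i-1 and i+1 (single-hop neighbors).
neighbors = {i: [(i - 1) % NUM_CLIENTS, (i + 1) % NUM_CLIENTS] for i in range(NUM_CLIENTS)}

# Each client owns private data drawn from a shared linear model plus noise.
true_w = rng.normal(size=DIM)
data = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(20, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    data.append((X, y))

# Local models start from independent random initializations.
models = [rng.normal(size=DIM) for _ in range(NUM_CLIENTS)]

for _ in range(ROUNDS):
    # 1) Local training step on each client's own data (the data is never shared).
    for i, (X, y) in enumerate(data):
        grad = X.T @ (X @ models[i] - y) / len(y)
        models[i] = models[i] - LR * grad

    # 2) Peer-to-peer aggregation: average parameters with single-hop neighbors only.
    mixed = []
    for i in range(NUM_CLIENTS):
        group = [models[i]] + [models[j] for j in neighbors[i]]
        mixed.append(np.mean(group, axis=0))
    models = mixed

print("max distance to true model:", max(np.linalg.norm(m - true_w) for m in models))
```

In this sketch, repeated local updates interleaved with neighbor averaging drive all clients toward a common model even though no client ever communicates beyond one hop, which is the trade-off between model accuracy and communication cost that the abstract refers to.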