Idoia Gamiz, Cristina Regueiro, Eduardo Jacob, Oscar Lage, Marivi Higuero
PRoT-FL: A privacy-preserving and robust Training Manager for Federated Learning
Journal: Information Processing & Management, Vol. 62, Issue 1, Article 103929
DOI: 10.1016/j.ipm.2024.103929
Published: 2024-10-11 (Journal Article) · JCR Q1, Computer Science, Information Systems · Impact Factor 7.4
URL: https://www.sciencedirect.com/science/article/pii/S0306457324002887
Citations: 0
Abstract
Federated Learning has emerged as a promising solution for enabling collaborative training between organizations while avoiding data centralization. However, it remains vulnerable to privacy breaches and to attacks that compromise model robustness, such as data and model poisoning. This work presents PRoT-FL, a privacy-preserving and robust Training Manager capable of coordinating different training sessions at the same time. PRoT-FL conducts each training session through a Federated Learning scheme that is resistant to privacy attacks while ensuring robustness. To do so, the model exchange is conducted by a “Private Training Protocol” over secure channels, and the protocol is combined with a public blockchain network to provide auditability, integrity and transparency. The original contributions of this work include: (i) the proposal of a “Private Training Protocol” that breaks the link between a model and its generator, (ii) the integration of this protocol into a complete system, PRoT-FL, which acts as an orchestrator and manages multiple training sessions, and (iii) a privacy, robustness and performance evaluation. The theoretical analysis shows that PRoT-FL is suitable for a wide range of scenarios, being capable of dealing with multiple privacy attacks while allowing a flexible selection of methods against attacks that compromise robustness. The experiments are conducted on three benchmark datasets and compared with traditional Federated Learning under different robust aggregation rules. The results show that those rules still apply to PRoT-FL and that the accuracy of the final model is not degraded while data privacy is maintained.
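The abstract does not name the robust aggregation rules used in the comparison. As a hedged illustration only (not the paper's specific method), a widely used robust rule in Federated Learning is the coordinate-wise median, which, unlike plain Federated Averaging (FedAvg), tolerates a minority of poisoned client updates:

```python
from statistics import median

def fedavg(updates):
    # Standard Federated Averaging: coordinate-wise mean of client updates.
    return [sum(coords) / len(coords) for coords in zip(*updates)]

def coordinate_median(updates):
    # A common robust aggregation rule: coordinate-wise median,
    # which tolerates a minority of poisoned (outlier) updates.
    return [median(coords) for coords in zip(*updates)]

# Three honest clients report values near 1.0; one poisoned client
# sends an extreme update to skew the aggregate.
updates = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [100.0, -100.0]]
print(fedavg(updates))             # mean is dragged toward the outlier
print(coordinate_median(updates))  # stays near the honest values
```

The example names (`fedavg`, `coordinate_median`) and the toy updates are illustrative assumptions; the paper's evaluation may use other rules such as trimmed mean or Krum.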
About the journal:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.