DART: A Solution for decentralized federated learning model robustness analysis

IF 2.3 Q2 COMPUTER SCIENCE, THEORY & METHODS
Array | Pub Date: 2024-09-01 | DOI: 10.1016/j.array.2024.100360
Chao Feng, Alberto Huertas Celdrán, Jan von der Assen, Enrique Tomás Martínez Beltrán, Gérôme Bovet, Burkhard Stiller
Citations: 0

Abstract


Federated Learning (FL) has emerged as a promising approach to address the privacy concerns inherent in Machine Learning (ML) practices. However, conventional FL methods, particularly those following the Centralized FL (CFL) paradigm, rely on a central server for global aggregation, which introduces limitations such as bottlenecks and a single point of failure. To address these issues, the Decentralized FL (DFL) paradigm has been proposed, which removes the client–server boundary and enables all participants to engage in model training and aggregation tasks. Nevertheless, like CFL, DFL remains vulnerable to adversarial attacks, notably poisoning attacks that undermine model performance. While existing research on model robustness has predominantly focused on CFL, there is a noteworthy gap in understanding the model robustness of the DFL paradigm. This paper presents a thorough review of poisoning attacks targeting model robustness in DFL systems, as well as their corresponding countermeasures. Additionally, a solution called DART is proposed to evaluate the robustness of DFL models; it is implemented and integrated into a DFL platform. Through extensive experiments, this paper compares the behavior of CFL and DFL under diverse poisoning attacks, pinpointing the key factors that affect attack spread and effectiveness within DFL. It also evaluates the performance of different defense mechanisms and investigates whether defense mechanisms designed for CFL are compatible with DFL. The empirical results provide insights into open research challenges and suggest ways to improve the robustness of DFL models in future research.
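To make the architectural contrast concrete, the following is a minimal sketch, not taken from the paper or the DART implementation: CFL-style FedAvg aggregation at a central server versus DFL-style gossip averaging in which each node mixes only with its direct neighbours. The function names and the ring topology are illustrative assumptions.

```python
# Hedged sketch of CFL vs. DFL aggregation; names and topology are
# assumptions for illustration, not the paper's implementation.
import numpy as np

def fedavg(updates, weights):
    """CFL-style aggregation: a central server computes one weighted average."""
    return sum(w * u for w, u in zip(weights, updates)) / sum(weights)

def gossip_step(models, adjacency):
    """DFL-style aggregation: each node averages only with its direct neighbours."""
    return [np.mean([models[i]] + [models[j] for j in adjacency[i]], axis=0)
            for i in range(len(models))]

# Toy example: 4 nodes on a ring topology; "models" are scalars for illustration.
models = [np.array([1.0]), np.array([2.0]), np.array([3.0]), np.array([4.0])]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(fedavg(models, weights=[1, 1, 1, 1]))  # central server result: [2.5]
print(gossip_step(models, ring))             # each node drifts toward its neighbours
```

Note that the gossip step has no global view: information from a poisoned node propagates hop by hop through the overlay, which is why attack spread in DFL depends on the topology.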

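Among the poisoning attacks the paper reviews, data poisoning via label flipping is one of the most commonly studied. The snippet below is a hypothetical illustration of such a poisoner; the helper name, flipping rule, and 50% poisoning rate are assumptions, not details from the paper.

```python
# Hypothetical label-flipping data poisoner (illustrative only).
import numpy as np

def flip_labels(y, num_classes, poison_rate=0.5, seed=0):
    """Flip a random fraction of labels c -> (num_classes - 1 - c)."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(poison_rate * len(y)), replace=False)
    y[idx] = num_classes - 1 - y[idx]
    return y

y_clean = np.arange(10)                      # labels 0..9
print(flip_labels(y_clean, num_classes=10))  # half of the labels are flipped
```

A node training on data poisoned this way produces misleading model updates, which then flow into its neighbours' aggregation steps.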
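Many of the CFL defenses whose DFL compatibility the paper investigates are robust aggregation rules. As a hedged example, here is a sketch of coordinate-wise trimmed mean, a classic rule from the CFL literature; in a DFL deployment each node could apply it locally to the updates received from its neighbours. The `trim_k` outlier bound is an assumption for this example.

```python
# Sketch of coordinate-wise trimmed mean, a robust aggregation rule from the
# CFL literature, applied here as a node-local DFL defense (illustrative).
import numpy as np

def trimmed_mean(updates, trim_k=1):
    """Drop the trim_k largest and smallest values per coordinate, then average."""
    stacked = np.sort(np.stack(updates), axis=0)   # sort each coordinate across updates
    kept = stacked[trim_k: len(updates) - trim_k]  # discard the extremes
    return kept.mean(axis=0)

updates = [np.array([0.10, 0.20]), np.array([0.12, 0.18]),
           np.array([0.11, 0.21]), np.array([9.0, -9.0])]  # last update is poisoned
print(trimmed_mean(updates, trim_k=1))  # outlier coordinates are trimmed away
```

Trimming assumes at most `trim_k` poisoned updates per aggregation; in sparse DFL topologies a node may see only a handful of neighbours, so this bound is harder to guarantee, which is one plausible reason CFL defenses do not always transfer directly.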
Source Journal
Array (Computer Science - General Computer Science)
CiteScore: 4.40
Self-citation rate: 0.00%
Annual publications: 93
Review time: 45 days