Analysis of deep learning under adversarial attacks in hierarchical federated learning

High-Confidence Computing, Vol. 5, No. 4, Article 100321 · DOI: 10.1016/j.hcc.2025.100321 · Published 2025-04-08
IF 3.0 · Q2 · COMPUTER SCIENCE, INFORMATION SYSTEMS
Duaa S. Alqattan, Vaclav Snasel, Rajiv Ranjan, Varun Ojha
{"title":"层次联邦学习中对抗性攻击下的深度学习分析","authors":"Duaa S. Alqattan ,&nbsp;Vaclav Snasel ,&nbsp;Rajiv Ranjan ,&nbsp;Varun Ojha","doi":"10.1016/j.hcc.2025.100321","DOIUrl":null,"url":null,"abstract":"<div><div>Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks — such as data poisoning and model poisoning — that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches — such as cosine similarity or Euclidean distance — to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS’s ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks due to additional aggregation layers that obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100321"},"PeriodicalIF":3.0000,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Analysis of deep learning under adversarial attacks in hierarchical federated learning\",\"authors\":\"Duaa S. Alqattan ,&nbsp;Vaclav Snasel ,&nbsp;Rajiv Ranjan ,&nbsp;Varun Ojha\",\"doi\":\"10.1016/j.hcc.2025.100321\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. 
While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks — such as data poisoning and model poisoning — that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches — such as cosine similarity or Euclidean distance — to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS’s ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks due to additional aggregation layers that obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.</div></div>\",\"PeriodicalId\":100605,\"journal\":{\"name\":\"High-Confidence Computing\",\"volume\":\"5 4\",\"pages\":\"Article 100321\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-04-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"High-Confidence Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S266729522500025X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"High-Confidence Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S266729522500025X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks — such as data poisoning and model poisoning — that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches — such as cosine similarity or Euclidean distance — to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS’s ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks due to additional aggregation layers that obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.
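To make the client → edge → global aggregation path concrete, here is a minimal NumPy sketch of three-level aggregation (3LHFL), assuming FedAvg-style weighted averaging at each tier; the layer names, the weighting scheme, and the random updates are illustrative assumptions, not the paper's exact protocol. A 4LHFL variant would insert one additional regional aggregation tier between the edge and global levels.

```python
# Minimal sketch of three-level hierarchical aggregation (3LHFL), assuming
# FedAvg-style weighted averaging. All names and weights are illustrative.
import numpy as np

def aggregate(updates, weights):
    """Weighted average of flattened model updates (rows of `updates`)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise client/edge weights
    return np.average(updates, axis=0, weights=w)

# Level 1: each edge server aggregates its own clients' updates.
rng = np.random.default_rng(0)
clients_per_edge = {
    "edge_A": rng.normal(size=(4, 10)),  # 4 clients, 10-parameter models
    "edge_B": rng.normal(size=(3, 10)),
}
edge_models = {
    name: aggregate(upds, weights=np.ones(len(upds)))
    for name, upds in clients_per_edge.items()
}

# Level 2: the global server aggregates the edge-level models; a 4LHFL
# variant would add one more regional aggregation tier before this step.
global_model = aggregate(
    np.stack(list(edge_models.values())),
    weights=[len(u) for u in clients_per_edge.values()],  # weight by client count
)
print(global_model.shape)  # (10,)
```

Each aggregation level averages away some of the discrepancy signal introduced below it, which is why deeper hierarchies can obscure distributed attacks, as the abstract notes.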
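The single-metric weakness described in the abstract is easy to demonstrate: a scaled (model-poisoned) update preserves its direction, so cosine similarity stays at 1.0 while Euclidean distance grows sharply. The vectors and the scaling factor below are arbitrary toy values.

```python
# Toy illustration of evading a single-metric defence: scaling an update
# keeps cosine similarity at 1.0 (benign under that metric) while its
# Euclidean distance from the reference becomes large (anomalous under
# another metric). Values chosen purely for demonstration.
import numpy as np

reference = np.array([0.2, -0.5, 0.1, 0.7])
crafted = 10.0 * reference  # model poisoning by scaling; direction unchanged

cos_sim = np.dot(crafted, reference) / (
    np.linalg.norm(crafted) * np.linalg.norm(reference)
)
euclid = np.linalg.norm(crafted - reference)

print(f"cosine similarity: {cos_sim:.2f}")   # 1.00 -> passes a cosine filter
print(f"euclidean distance: {euclid:.2f}")   # 8.00 -> flagged by a distance check
```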
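Below is a hypothetical sketch of a multi-metric score in the spirit of MDS, combining Dissimilarity, Distance, Uncorrelation, and Divergence. The concrete metric definitions, the softmax normalisation inside the divergence term, and the equal weighting are assumptions for illustration; the abstract does not specify the paper's exact formulation.

```python
# Hypothetical multi-metric discrepancy score in the spirit of MDS.
# Metric definitions and equal weights are assumptions, not the paper's.
import numpy as np

def dissimilarity(u, v):
    """Cosine dissimilarity: 1 - cos(u, v)."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def distance(u, v):
    """Euclidean distance between the two updates."""
    return np.linalg.norm(u - v)

def uncorrelation(u, v):
    """1 - Pearson correlation coefficient of the two updates."""
    return 1.0 - np.corrcoef(u, v)[0, 1]

def divergence(u, v, eps=1e-12):
    """Symmetric KL divergence between softmax-normalised updates (assumed)."""
    p = np.exp(u - u.max()); p = p / p.sum() + eps
    q = np.exp(v - v.max()); q = q / q.sum() + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def mds_score(update, reference, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of the four metrics (equal weights assumed)."""
    metrics = (dissimilarity, distance, uncorrelation, divergence)
    return sum(w * m(update, reference) for w, m in zip(weights, metrics))

# Example: compare an edge server's update against an aggregated reference.
rng = np.random.default_rng(1)
benign = rng.normal(size=10)
poisoned = benign + rng.normal(scale=5.0, size=10)  # exaggerated tampering
reference = benign + rng.normal(scale=0.1, size=10)
print(mds_score(benign, reference) < mds_score(poisoned, reference))  # True (typically)
```

Note that the four metrics live on different scales (Euclidean distance is unbounded, the others roughly bounded), so a practical implementation would normalise each metric across servers before combining them.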
Source journal: High-Confidence Computing
CiteScore: 4.70
Self-citation rate: 0.00%