Fairness in Federated Learning: Trends, Challenges, and Opportunities

IF 6.1 Q1 AUTOMATION & CONTROL SYSTEMS
Noorain Mukhtiar, Adnan Mahmood, Quan Z. Sheng
{"title":"联邦学习中的公平性:趋势、挑战和机遇","authors":"Noorain Mukhtiar,&nbsp;Adnan Mahmood,&nbsp;Quan Z. Sheng","doi":"10.1002/aisy.202400836","DOIUrl":null,"url":null,"abstract":"<p>At the intersection of the cutting-edge technologies and privacy concerns, federated learning (FL) with its distributed architecture, stands at the forefront in a bid to facilitate collaborative model training across multiple clients while preserving data privacy. However, the applicability of FL systems is hindered by fairness concerns arising from numerous sources of heterogeneity that can result in biases and undermine a system's effectiveness, with skewed predictions, reduced accuracy, and inefficient model convergence. This survey thus explores the diverse sources of bias, including but not limited to, data, client, and model biases, and thoroughly discusses the strengths and limitations inherited within the array of the state-of-the-art techniques utilized in the literature to mitigate such disparities in the FL training process. A comprehensive overview of the several notions, theoretical underpinnings, and technical aspects associated with fairness and their adoption in FL-based multidisciplinary environments are delineated. Furthermore, the salient evaluation metrics leveraged to measure fairness quantitatively are examined. Finally, exciting open research directions that have the potential to drive future advancements in achieving fairer FL frameworks, in turn, offering a strong foundation for future research in this pivotal area are envisaged.</p>","PeriodicalId":93858,"journal":{"name":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","volume":"7 6","pages":""},"PeriodicalIF":6.1000,"publicationDate":"2025-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202400836","citationCount":"0","resultStr":"{\"title\":\"Fairness in Federated Learning: Trends, Challenges, and Opportunities\",\"authors\":\"Noorain Mukhtiar,&nbsp;Adnan Mahmood,&nbsp;Quan Z. Sheng\",\"doi\":\"10.1002/aisy.202400836\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>At the intersection of the cutting-edge technologies and privacy concerns, federated learning (FL) with its distributed architecture, stands at the forefront in a bid to facilitate collaborative model training across multiple clients while preserving data privacy. However, the applicability of FL systems is hindered by fairness concerns arising from numerous sources of heterogeneity that can result in biases and undermine a system's effectiveness, with skewed predictions, reduced accuracy, and inefficient model convergence. This survey thus explores the diverse sources of bias, including but not limited to, data, client, and model biases, and thoroughly discusses the strengths and limitations inherited within the array of the state-of-the-art techniques utilized in the literature to mitigate such disparities in the FL training process. A comprehensive overview of the several notions, theoretical underpinnings, and technical aspects associated with fairness and their adoption in FL-based multidisciplinary environments are delineated. Furthermore, the salient evaluation metrics leveraged to measure fairness quantitatively are examined. 
Finally, exciting open research directions that have the potential to drive future advancements in achieving fairer FL frameworks, in turn, offering a strong foundation for future research in this pivotal area are envisaged.</p>\",\"PeriodicalId\":93858,\"journal\":{\"name\":\"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)\",\"volume\":\"7 6\",\"pages\":\"\"},\"PeriodicalIF\":6.1000,\"publicationDate\":\"2025-04-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202400836\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://advanced.onlinelibrary.wiley.com/doi/10.1002/aisy.202400836\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","FirstCategoryId":"1085","ListUrlMain":"https://advanced.onlinelibrary.wiley.com/doi/10.1002/aisy.202400836","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

At the intersection of cutting-edge technologies and privacy concerns, federated learning (FL), with its distributed architecture, stands at the forefront of efforts to facilitate collaborative model training across multiple clients while preserving data privacy. However, the applicability of FL systems is hindered by fairness concerns arising from numerous sources of heterogeneity that can introduce biases and undermine a system's effectiveness, leading to skewed predictions, reduced accuracy, and inefficient model convergence. This survey therefore explores the diverse sources of bias, including, but not limited to, data, client, and model biases, and thoroughly discusses the strengths and limitations inherent in the array of state-of-the-art techniques used in the literature to mitigate such disparities in the FL training process. A comprehensive overview of the notions, theoretical underpinnings, and technical aspects associated with fairness, and their adoption in FL-based multidisciplinary environments, is delineated. Furthermore, the salient evaluation metrics leveraged to measure fairness quantitatively are examined. Finally, exciting open research directions that have the potential to drive future advances toward fairer FL frameworks, in turn offering a strong foundation for future research in this pivotal area, are envisaged.
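
To make the fairness concern concrete, below is a minimal sketch of a FedAvg-style training round over heterogeneous clients, with the variance of per-client losses reported as one simple quantitative fairness proxy. This is an illustrative assumption, not the survey's own algorithm or metric: the synthetic data, the client partitioning, and the variance-based measure are all chosen here purely for brevity.

```python
import numpy as np

# Illustrative sketch (not from the survey): a FedAvg-style round over
# heterogeneous synthetic clients, plus a simple fairness proxy based on
# the dispersion of per-client losses.

rng = np.random.default_rng(0)

def make_client(n, bias):
    """Synthetic linear-regression data; `bias` induces client heterogeneity."""
    X = rng.normal(size=(n, 3))
    w_true = np.array([1.0, -2.0, 0.5]) + bias
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

# Four clients with different dataset sizes and shifted local distributions.
clients = [make_client(n=50 * (i + 1), bias=0.3 * i) for i in range(4)]

def local_update(w, X, y, lr=0.05, epochs=20):
    """One client's local training: plain gradient descent on squared error."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def client_loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

w_global = np.zeros(3)
for rnd in range(10):
    local_ws, sizes = [], []
    for X, y in clients:
        local_ws.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    # FedAvg aggregation: weight each client's model by its sample count.
    sizes = np.array(sizes, dtype=float)
    w_global = np.average(np.stack(local_ws), axis=0, weights=sizes / sizes.sum())

    # Fairness proxy: dispersion of per-client performance
    # (lower variance = more uniform utility across clients).
    losses = np.array([client_loss(w_global, X, y) for X, y in clients])
    print(f"round {rnd}: mean loss {losses.mean():.4f}, "
          f"loss variance across clients {losses.var():.4f}")
```

Dispersion of per-client performance is only one widely used way to quantify fairness in FL; other notions discussed in the fairness literature (e.g., group fairness with respect to protected attributes, or contribution-proportional rewards) require different metrics and mitigation techniques.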
