Continual and wisdom learning for federated learning: A comprehensive framework for robustness and debiasing

IF 7.4 · CAS Tier 1 (Management Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Saeed Iqbal, Xiaopin Zhong, Muhammad Attique Khan, Zongze Wu, Dina Abdulaziz AlHammadi, Weixiang Liu, Imran Arshad Choudhry
DOI: 10.1016/j.ipm.2025.104157
Journal: Information Processing & Management, Volume 62, Issue 5, Article 104157
Published: 2025-04-09
URL: https://www.sciencedirect.com/science/article/pii/S0306457325000986
Citations: 0

Abstract

Federated Learning (FL) has transformed decentralized machine learning, yet it still faces challenges from noisily labeled data, heterogeneous clients, and sparse datasets, especially in sensitive domains such as healthcare. To address these issues, this study introduces a robust FL framework that integrates advanced Continual Learning (CL) and Wisdom Learning (WL) techniques. Elastic Weight Consolidation (EWC) prevents catastrophic forgetting by penalizing deviations from critical weights, while Progressive Neural Networks (PNN) leverage modular architectures with lateral connections to enable knowledge transfer across tasks and isolate client-specific biases. WL incorporates consensus-based aggregation, dynamic model distillation, and adaptive ensemble learning to enhance model robustness against noisy updates and biased data distributions. The framework is rigorously validated on benchmark medical imaging datasets, including ADNI, BraTS, PathMNIST, BreastMNIST, and ChestMNIST, demonstrating significant improvements in fairness metrics, with up to a 94.3% reduction in bias (Demographic Parity) and a 92.7% improvement in accuracy fairness (Accuracy Parity). These results establish the effectiveness of the proposed approach in achieving stable, equitable, and high-performing global models under challenging FL conditions characterized by dynamic client settings, label noise, and class imbalance.
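The EWC component described above adds a quadratic penalty that discourages parameters from drifting away from values that were important for earlier tasks, with per-parameter importance given by the diagonal Fisher information. A minimal sketch of that penalty follows; the function name `ewc_penalty` and the dictionary-of-arrays representation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=0.4):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    params     -- current parameters, dict of name -> np.ndarray
    old_params -- parameters frozen after the previous task (theta*)
    fisher     -- diagonal Fisher information estimates (importance weights)
    lam        -- regularization strength
    """
    return 0.5 * lam * sum(
        float(np.sum(fisher[k] * (params[k] - old_params[k]) ** 2))
        for k in params
    )
```

In training, this term is added to the task loss, so parameters with high Fisher importance are effectively anchored while unimportant ones remain free to adapt to the new task or client.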
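The abstract does not detail WL's consensus-based aggregation rule. As an illustration only, a coordinate-wise median across client updates is one common consensus-style alternative to plain federated averaging that bounds the influence of noisy or biased clients; the function `robust_aggregate` is a hypothetical name, not the paper's method.

```python
import numpy as np

def robust_aggregate(client_updates):
    """Coordinate-wise median of client parameter updates.

    Unlike a plain mean (FedAvg), the median ignores extreme values in
    each coordinate, so a single client sending a noisy or adversarial
    update cannot drag the aggregate arbitrarily far.
    """
    stacked = np.stack(client_updates)  # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)
```

With three clients where one sends an outlier update, the median recovers a value close to the two well-behaved clients, whereas the mean would be dominated by the outlier.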
Source journal: Information Processing & Management (Engineering & Technology – Computer Science: Information Systems)
CiteScore: 17.00
Self-citation rate: 11.60%
Annual publications: 276
Review time: 39 days
Journal description: Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Its scope encompasses theory, methods, and applications across domains including advertising, business, health, information science, information technology marketing, and social computing. The journal serves both researchers and practitioners as a platform for the timely dissemination of advanced and topical issues in this interdisciplinary field, with particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research.