{"title":"Towards collaborative fair federated distillation","authors":"","doi":"10.1016/j.engappai.2024.109216","DOIUrl":null,"url":null,"abstract":"<div><p>Federated Learning (FL), despite its success as a privacy-preserving distributed machine learning framework, faces significant bottlenecks, including high communication costs, heterogeneity issues, and unfairness, throughout various phases of the training process. Federated Distillation (FD) has recently emerged as a promising solution to tackle heterogeneity and enhance communication efficiency in FL. In addition, significant effort has been put forth in recent years to support various notions of fairness associated with the FL ecosystem, such as Collaborative Fairness, which seeks to ensure the fair distribution of rewards among participants based on their level of contribution. Although several works have been done to promote collaborative fairness in FL, they are mostly well-suited for FL algorithms based on model updates or gradient sharing during the training procedure. Guaranteeing collaborative fairness in FD methods is still completely unexplored where it can have potential applications in communication engineering, healthcare, banking, finance, and social networks in large-scale software, etc., as most Knowledge Distillation (KD) based FL algorithms promote either identical global logits or identical global model updates sharing among the clients after the distillation process. This is unfair because severely underperforming participants can gain access to the knowledge of all high-performing participants while contributing almost nothing to the learning process. In this paper, we propose a novel Collaborative Fair Federated Distillation (CFD) algorithm with a view to exploring collaborative fairness in KD-based Federated Learning strategies. 
We leverage the reputation mechanism to rank the participants in order of their contributions and appropriately distribute logits among them while maintaining competitive performance. Extensive experiments on benchmark datasets validate the efficacy of our proposed method as well as the practicality of the proposed logit-based reward scheme.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624013745","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Federated Learning (FL), despite its success as a privacy-preserving distributed machine learning framework, faces significant bottlenecks throughout the training process, including high communication costs, heterogeneity issues, and unfairness. Federated Distillation (FD) has recently emerged as a promising solution for tackling heterogeneity and improving communication efficiency in FL. In addition, significant effort has been devoted in recent years to supporting various notions of fairness in the FL ecosystem, such as Collaborative Fairness, which seeks to distribute rewards among participants in proportion to their contributions. Although several works have sought to promote collaborative fairness in FL, they are mostly suited to FL algorithms based on model-update or gradient sharing during training. Guaranteeing collaborative fairness in FD methods remains completely unexplored, even though it has potential applications in communication engineering, healthcare, banking, finance, and large-scale social-network software, because most Knowledge Distillation (KD)-based FL algorithms share either identical global logits or identical global model updates among all clients after the distillation process. This is unfair: severely underperforming participants gain access to the knowledge of all high-performing participants while contributing almost nothing to the learning process. In this paper, we propose a novel Collaborative Fair Federated Distillation (CFD) algorithm to explore collaborative fairness in KD-based Federated Learning strategies. We leverage a reputation mechanism to rank participants by their contributions and distribute logits among them accordingly, while maintaining competitive performance.
Extensive experiments on benchmark datasets validate the efficacy of our proposed method as well as the practicality of the proposed logit-based reward scheme.
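The idea sketched in the abstract can be illustrated with a minimal code example. This is not the authors' CFD implementation: the cosine-similarity reputation proxy and the noise-based logit degradation (`reputation_scores`, `fair_reward`) are hypothetical choices, assumed here purely to show how a reputation-ranked, logit-based reward scheme could work.

```python
import numpy as np

def aggregate_logits(client_logits):
    """Federated distillation step: average per-sample logits across clients."""
    return np.mean(client_logits, axis=0)

def reputation_scores(client_logits, global_logits):
    """Rank clients by contribution. The (hypothetical) proxy used here is the
    mean cosine similarity between a client's logits and the aggregate."""
    scores = []
    for logits in client_logits:
        num = np.sum(logits * global_logits, axis=1)
        den = np.linalg.norm(logits, axis=1) * np.linalg.norm(global_logits, axis=1)
        scores.append(float(np.mean(num / den)))
    scores = np.array(scores)
    return scores / scores.sum()  # normalised reputations

def fair_reward(global_logits, reputation, noise_scale=1.0, rng=None):
    """Logit-based reward: every client receives the global logits, but the
    added noise grows as reputation falls, so low contributors only get a
    degraded version of the shared knowledge."""
    if rng is None:
        rng = np.random.default_rng(0)
    top = reputation.max()
    return [global_logits
            + rng.normal(0.0, noise_scale * (1.0 - r / top), global_logits.shape)
            for r in reputation]
```

Under this sketch, the top-ranked client receives the aggregated logits unchanged (its noise scale is zero), while lower-ranked clients receive progressively noisier copies; the actual CFD algorithm may use a different reputation metric and distribution rule.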
Journal description:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.