KL-FedDis: A federated learning approach with distribution information sharing using Kullback-Leibler divergence for non-IID data

Md. Rahad, Ruhan Shabab, Mohd. Sultan Ahammad, Md. Mahfuz Reza, Amit Karmaker, Md. Abir Hossain
{"title":"KL-FedDis:一种利用非iid数据的Kullback-Leibler散度实现分布信息共享的联邦学习方法","authors":"Md. Rahad ,&nbsp;Ruhan Shabab ,&nbsp;Mohd. Sultan Ahammad ,&nbsp;Md. Mahfuz Reza ,&nbsp;Amit Karmaker ,&nbsp;Md. Abir Hossain","doi":"10.1016/j.neuri.2024.100182","DOIUrl":null,"url":null,"abstract":"<div><div>Data Heterogeneity or Non-IID (non-independent and identically distributed) data identification is one of the prominent challenges in Federated Learning (FL). In Non-IID data, clients have their own local data, which may not be independently and identically distributed. This arises because clients involved in federated learning typically have their own unique, local datasets that vary significantly due to factors like geographical location, user behaviors, or specific contexts. Model divergence is another critical challenge where the local models trained on different clients, data may diverge significantly but making it difficult for the global model to converge. To identify the non-IID data, few federated learning models have been introduced as FedDis, FedProx and FedAvg, but their accuracy is too low. To address the clients Non-IID data along with ensuring privacy, federated learning emerged with appropriate distribution mechanism is an effective solution. In this paper, a modified FedDis learning method called KL-FedDis is proposed, which incorporates Kullback-Leibler (KL) divergence as the regularization technique. KL-FedDis improves accuracy and computation time over the FedDis and FedAvg technique by successfully maintaining the distribution information and encouraging improved collaboration among the local models by utilizing KL divergence.</div></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"5 1","pages":"Article 100182"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"KL-FedDis: A federated learning approach with distribution information sharing using Kullback-Leibler divergence for non-IID data\",\"authors\":\"Md. Rahad ,&nbsp;Ruhan Shabab ,&nbsp;Mohd. Sultan Ahammad ,&nbsp;Md. Mahfuz Reza ,&nbsp;Amit Karmaker ,&nbsp;Md. Abir Hossain\",\"doi\":\"10.1016/j.neuri.2024.100182\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Data Heterogeneity or Non-IID (non-independent and identically distributed) data identification is one of the prominent challenges in Federated Learning (FL). In Non-IID data, clients have their own local data, which may not be independently and identically distributed. This arises because clients involved in federated learning typically have their own unique, local datasets that vary significantly due to factors like geographical location, user behaviors, or specific contexts. Model divergence is another critical challenge where the local models trained on different clients, data may diverge significantly but making it difficult for the global model to converge. To identify the non-IID data, few federated learning models have been introduced as FedDis, FedProx and FedAvg, but their accuracy is too low. To address the clients Non-IID data along with ensuring privacy, federated learning emerged with appropriate distribution mechanism is an effective solution. In this paper, a modified FedDis learning method called KL-FedDis is proposed, which incorporates Kullback-Leibler (KL) divergence as the regularization technique. 
KL-FedDis improves accuracy and computation time over the FedDis and FedAvg technique by successfully maintaining the distribution information and encouraging improved collaboration among the local models by utilizing KL divergence.</div></div>\",\"PeriodicalId\":74295,\"journal\":{\"name\":\"Neuroscience informatics\",\"volume\":\"5 1\",\"pages\":\"Article 100182\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-11-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neuroscience informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S277252862400027X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuroscience informatics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S277252862400027X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Data heterogeneity, or non-IID (non-independent and identically distributed) data, is one of the prominent challenges in Federated Learning (FL). With non-IID data, each client holds its own local dataset, which may not be independently and identically distributed. This arises because the clients involved in federated learning typically have unique local datasets that vary significantly due to factors such as geographical location, user behavior, or specific contexts. Model divergence is another critical challenge: local models trained on different clients' data may diverge significantly, making it difficult for the global model to converge. A few federated learning methods, such as FedDis, FedProx, and FedAvg, have been introduced to handle non-IID data, but their accuracy remains low. To address clients' non-IID data while ensuring privacy, federated learning with an appropriate distribution-sharing mechanism is an effective solution. In this paper, a modified FedDis learning method called KL-FedDis is proposed, which incorporates Kullback-Leibler (KL) divergence as the regularization technique. KL-FedDis improves accuracy and computation time over the FedDis and FedAvg techniques by maintaining the distribution information and encouraging improved collaboration among the local models through KL divergence.
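To make the regularization idea concrete, the following is a minimal sketch (not taken from the paper) of a client-side training step in which the usual task loss is augmented with a KL-divergence penalty that pulls the client's local feature statistics toward shared distribution information from the server. Everything here is an assumption for illustration: the names (model.backbone, shared_mu, shared_sigma, kl_weight), the Gaussian summary of the distribution, and the PyTorch framing; the paper's exact formulation may differ.

import torch
import torch.nn.functional as F

def kl_gaussian(mu_q, sigma_q, mu_p, sigma_p):
    # Closed-form KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ),
    # computed per feature dimension and summed.
    return (torch.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5).sum()

def local_step(model, batch_x, batch_y, shared_mu, shared_sigma,
               optimizer, kl_weight=0.1):
    # Hypothetical client update: task loss plus a KL regularizer that
    # aligns local feature statistics with the shared distribution summary.
    optimizer.zero_grad()
    features = model.backbone(batch_x)        # hypothetical feature extractor
    logits = model.head(features)             # hypothetical classifier head
    task_loss = F.cross_entropy(logits, batch_y)
    local_mu = features.mean(dim=0)
    local_sigma = features.std(dim=0) + 1e-6  # avoid division by zero
    kl_loss = kl_gaussian(local_mu, local_sigma, shared_mu, shared_sigma)
    loss = task_loss + kl_weight * kl_loss
    loss.backward()
    optimizer.step()
    return loss.item()

The intuition behind such a penalty is that it discourages each local model from overfitting its own skewed data distribution, which in turn makes the aggregated global model easier to converge on non-IID data.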
Source journal: Neuroscience Informatics
Subject areas: Surgery, Radiology and Imaging, Information Systems, Neurology, Artificial Intelligence, Computer Science Applications, Signal Processing, Critical Care and Intensive Care Medicine, Health Informatics, Clinical Neurology, Pathology and Medical Technology
Self-citation rate: 0.00%
Average review time: 57 days