DSFedCon: Dynamic Sparse Federated Contrastive Learning for Data-Driven Intelligent Systems

IF 10.2 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zhengming Li;Jiahui Chen;Peifeng Zhang;Huiwu Huang;Guanbin Li
{"title":"DSFedCon: Dynamic Sparse Federated Contrastive Learning for Data-Driven Intelligent Systems","authors":"Zhengming Li;Jiahui Chen;Peifeng Zhang;Huiwu Huang;Guanbin Li","doi":"10.1109/TNNLS.2024.3349400","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) makes it possible for multiple clients to collaboratively train a machine-learning model through communicating models instead of data, reducing privacy risk. Thus, FL is more suitable for processing data security and privacy for intelligent systems and applications. Unfortunately, there are several challenges in FL, such as the low training accuracy for nonindependent and identically distributed (non-IID) data and the high cost of computation and communication. Considering these, we propose a novel FL framework named dynamic sparse federated contrastive learning (DSFedCon). DSFedCon combines FL with dynamic sparse (DSR) training of network pruning and contrastive learning to improve model performance and reduce computation costs and communication costs. We analyze DSFedCon from the perspective of accuracy, communication, and security, demonstrating it is communication-efficient and safe. To give a practical evaluation for non-IID data training, we perform experiments and comparisons on the MNIST, CIFAR-10, and CIFAR-100 datasets with different parameters of Dirichlet distribution. Results indicate that DSFedCon can get higher accuracy and better communication cost than other state-of-the-art methods in these two datasets. More precisely, we show that DSFedCon has a 4.67-time speedup of communication rounds in MNIST, a 7.5-time speedup of communication rounds in CIFAR-10, and an 18.33-time speedup of communication rounds in CIFAR-100 dataset while achieving the same training accuracy.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 2","pages":"3343-3355"},"PeriodicalIF":10.2000,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10415051/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning (FL) enables multiple clients to collaboratively train a machine-learning model by communicating models instead of data, reducing privacy risk. FL is therefore well suited to preserving data security and privacy in intelligent systems and applications. However, FL still faces several challenges, such as low training accuracy on nonindependent and identically distributed (non-IID) data and high computation and communication costs. To address these issues, we propose a novel FL framework named dynamic sparse federated contrastive learning (DSFedCon). DSFedCon combines FL with dynamic sparse (DSR) training based on network pruning and with contrastive learning to improve model performance while reducing computation and communication costs. We analyze DSFedCon from the perspectives of accuracy, communication, and security, demonstrating that it is communication-efficient and safe. To provide a practical evaluation of non-IID training, we perform experiments and comparisons on the MNIST, CIFAR-10, and CIFAR-100 datasets under different Dirichlet-distribution parameters. The results indicate that DSFedCon achieves higher accuracy and lower communication cost than other state-of-the-art methods on these datasets. More precisely, DSFedCon reaches the same training accuracy with a 4.67× reduction in communication rounds on MNIST, a 7.5× reduction on CIFAR-10, and an 18.33× reduction on CIFAR-100.
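The abstract relies on two standard ingredients that can be made concrete: label-skewed client partitions drawn from a Dirichlet distribution, and the communication savings of transmitting only a sparse subset of model weights. The sketch below is illustrative only and is not taken from the DSFedCon paper; the function names (`dirichlet_partition`, `sparse_payload_size`), the client count, and the keep ratio are assumptions chosen for the example.

```python
# Illustrative sketch only; not the authors' implementation.
# (a) Dirichlet-based non-IID partitioning, the standard way FL benchmarks
#     create label-skewed client data, as used in the paper's experiments.
# (b) A back-of-the-envelope estimate of why uploading only a sparse subset
#     of weights reduces per-round communication, the effect dynamic sparse
#     training targets.
import numpy as np


def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with a Dirichlet(alpha) prior.

    Smaller alpha gives more skewed (more non-IID) label distributions.
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx_c = np.where(labels == c)[0]
        rng.shuffle(idx_c)
        # Fraction of class-c samples assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for client_id, part in enumerate(np.split(idx_c, splits)):
            client_indices[client_id].extend(part.tolist())
    return [np.array(idx) for idx in client_indices]


def sparse_payload_size(weights, keep_ratio):
    """Bytes uploaded per round if only keep_ratio of the weights
    (values plus 32-bit indices) are communicated."""
    k = int(keep_ratio * weights.size)
    return k * (weights.itemsize + 4)


if __name__ == "__main__":
    # Stand-in for CIFAR-10 labels (10 classes, 50k samples).
    labels = np.random.default_rng(0).integers(0, 10, size=50_000)
    shards = dirichlet_partition(labels, num_clients=10, alpha=0.5)
    print([len(s) for s in shards])  # uneven, label-skewed client shards

    dense = np.zeros(1_000_000, dtype=np.float32)
    ratio = sparse_payload_size(dense, keep_ratio=0.1) / dense.nbytes
    print(ratio)  # ~0.2: sparse upload is about one fifth of the dense payload
```

Under these assumed settings, alpha = 0.5 yields noticeably label-skewed shards, and keeping roughly 10% of the weights (with their indices) shrinks each upload to about a fifth of the dense payload, which is the kind of saving dynamic sparse training aims for.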
Source Journal
IEEE Transactions on Neural Networks and Learning Systems
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
CiteScore: 23.80
Self-citation rate: 9.60%
Articles published: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.