{"title":"基于自动编码器的分散联邦学习,实现高效通信","authors":"Abdul Wahab Mamond, Majid Kundroo, Taehong Kim","doi":"10.1016/j.comnet.2025.111676","DOIUrl":null,"url":null,"abstract":"<div><div>Decentralized federated learning (DFL) has emerged as a solution for traditional federated learning’s limitations, such as network bottlenecks and single-point failure, by enabling direct communication between nodes and eliminating the reliance on a central server. However, DFL still encounters challenges like increased communication costs as the number of participating nodes increases, amplifying the need for efficient compression techniques. Moreover, the increasing complexity of models, including vision, language, and generative models (e.g., GPT), further underscores this necessity due to their large parameter sizes. To address the communication cost-related issues in DFL, this study introduces Autoencoder-based Decentralized Federated Learning (AEDFL), which leverages autoencoders to compress model updates before transmission, allowing them to be reconstructed at the receiving end with high fidelity and minimal loss of accuracy. We conduct comprehensive experiments using two models, SqueezeNet and DenseNet, on three benchmark datasets: CIFAR-10 (under both IID and non-IID settings), FashionMNIST, and CIFAR-100. The results demonstrate that AEDFL achieves up to 122x compression with negligible accuracy degradation, showcasing its effectiveness in balancing communication efficiency and model performance across varying model sizes and dataset complexities.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"272 ","pages":"Article 111676"},"PeriodicalIF":4.6000,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Autoencoder-based decentralized federated learning for efficient communication\",\"authors\":\"Abdul Wahab Mamond, Majid Kundroo, Taehong Kim\",\"doi\":\"10.1016/j.comnet.2025.111676\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Decentralized federated learning (DFL) has emerged as a solution for traditional federated learning’s limitations, such as network bottlenecks and single-point failure, by enabling direct communication between nodes and eliminating the reliance on a central server. However, DFL still encounters challenges like increased communication costs as the number of participating nodes increases, amplifying the need for efficient compression techniques. Moreover, the increasing complexity of models, including vision, language, and generative models (e.g., GPT), further underscores this necessity due to their large parameter sizes. To address the communication cost-related issues in DFL, this study introduces Autoencoder-based Decentralized Federated Learning (AEDFL), which leverages autoencoders to compress model updates before transmission, allowing them to be reconstructed at the receiving end with high fidelity and minimal loss of accuracy. We conduct comprehensive experiments using two models, SqueezeNet and DenseNet, on three benchmark datasets: CIFAR-10 (under both IID and non-IID settings), FashionMNIST, and CIFAR-100. 
The results demonstrate that AEDFL achieves up to 122x compression with negligible accuracy degradation, showcasing its effectiveness in balancing communication efficiency and model performance across varying model sizes and dataset complexities.</div></div>\",\"PeriodicalId\":50637,\"journal\":{\"name\":\"Computer Networks\",\"volume\":\"272 \",\"pages\":\"Article 111676\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389128625006437\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128625006437","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Autoencoder-based decentralized federated learning for efficient communication
Decentralized federated learning (DFL) has emerged as a solution to the limitations of traditional federated learning, such as network bottlenecks and single points of failure, by enabling direct communication between nodes and eliminating reliance on a central server. However, DFL still faces challenges such as communication costs that grow with the number of participating nodes, amplifying the need for efficient compression techniques. Moreover, the increasing complexity of models, including vision, language, and generative models (e.g., GPT), further underscores this need due to their large parameter counts. To address the communication-cost issues in DFL, this study introduces Autoencoder-based Decentralized Federated Learning (AEDFL), which leverages autoencoders to compress model updates before transmission, allowing them to be reconstructed at the receiving end with high fidelity and minimal loss of accuracy. We conduct comprehensive experiments using two models, SqueezeNet and DenseNet, on three benchmark datasets: CIFAR-10 (under both IID and non-IID settings), FashionMNIST, and CIFAR-100. The results demonstrate that AEDFL achieves up to 122x compression with negligible accuracy degradation, showcasing its effectiveness in balancing communication efficiency and model performance across varying model sizes and dataset complexities.
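As a rough, hypothetical sketch of the mechanism the abstract describes (and not the authors' actual AEDFL implementation), the PyTorch snippet below compresses a flattened model update with a small autoencoder on the sending node and reconstructs it on the receiving node. The chunk length, layer widths, and helper names (`compress_update`, `decompress_update`) are assumptions made for this example, and the compression ratio here is set by the chunk-to-latent size ratio rather than the 122x figure reported in the paper.

```python
import torch
import torch.nn as nn

# Illustrative settings (assumed, not from the paper): the chunk length and latent
# size together determine the nominal compression ratio (~CHUNK / LATENT).
CHUNK = 1024
LATENT = 8

class UpdateAutoencoder(nn.Module):
    """Small autoencoder mapping a parameter chunk to a short latent code and back."""
    def __init__(self, chunk: int = CHUNK, latent: int = LATENT):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(chunk, 128), nn.ReLU(), nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, chunk))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def compress_update(update: torch.Tensor, ae: UpdateAutoencoder):
    """Sender side: flatten the update, zero-pad to a multiple of CHUNK, encode each chunk."""
    flat = update.flatten()
    pad = (-flat.numel()) % CHUNK
    flat = torch.cat([flat, flat.new_zeros(pad)])
    chunks = flat.view(-1, CHUNK)
    with torch.no_grad():
        codes = ae.encoder(chunks)  # (n_chunks, LATENT): only these codes are transmitted
    return codes, update.shape, pad

def decompress_update(codes: torch.Tensor, shape, pad: int, ae: UpdateAutoencoder) -> torch.Tensor:
    """Receiver side: decode the latent codes and restore the original tensor shape."""
    with torch.no_grad():
        flat = ae.decoder(codes).flatten()
    if pad:
        flat = flat[:-pad]
    return flat.view(shape)

# Usage sketch: both peers are assumed to hold the same (pre-trained) autoencoder.
ae = UpdateAutoencoder()
update = torch.randn(300, 500)                   # stand-in for one layer's weight delta
codes, shape, pad = compress_update(update, ae)
recovered = decompress_update(codes, shape, pad, ae)
print(codes.numel() / update.numel())            # fraction of floats actually sent (~1/128 here)
```

In a decentralized round, each node would then transmit only the latent codes (plus minimal bookkeeping such as tensor shapes and padding) to its neighbors, which decode them before aggregating the reconstructed updates.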
Journal introduction:
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.