Towards efficient compression and communication for prototype-based decentralized learning

Pablo Fernández-Piñeiro, Manuel Fernández-Veiga, Rebeca P. Díaz-Redondo, Ana Fernández-Vilas, Martín González Soto

Applied Soft Computing, Volume 178, Article 113270. Published 2025-05-28. DOI: 10.1016/j.asoc.2025.113270

Abstract: In prototype-based federated learning, the exchange of model parameters between clients and the master server is replaced by the transmission of prototypes, or quantized versions of the data samples, to the aggregation server. A fully decentralized deployment of prototype-based learning, without a central aggregator of prototypes, is more robust against network failures and reacts faster to changes in the statistical distribution of the data, suggesting potential advantages and quick adaptation in dynamic learning tasks, e.g., when the data sources are IoT devices or when the data are non-iid. In this paper, we address the challenge of designing an efficient prototype-based decentralized learning network by reducing the communication and computation overheads, which enhances the scalability of the overall system, especially in IoT settings with resource-limited devices. First, we compress the prototypes by applying a clustering algorithm. Next, we filter the prototypes to be disseminated, using an information-theoretic measure so that nodes share only models that are relevant or provide new knowledge to their neighbors. Then, we define a parallel gossip algorithm to disseminate these models within the learning network. Finally, we define a scheduler that manages the set of received prototypes to optimize the aggregation phase. To validate our proposal, we present an analysis of the parallel gossip algorithm in terms of the age of information (AoI). Our experimental results show that the communication load can be substantially reduced without decreasing the convergence rate of the learning algorithm.
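The first step described in the abstract compresses each client's local data into a small set of prototypes via clustering. The abstract does not name a specific algorithm, so the following is only a minimal sketch using plain k-means in pure Python; the function name and parameter choices are illustrative, not from the paper:

```python
import random

def kmeans_prototypes(samples, k, iters=20, seed=0):
    """Compress a list of d-dimensional samples into k prototype vectors."""
    rng = random.Random(seed)
    # Initialize centroids from k randomly chosen samples.
    centroids = [list(s) for s in rng.sample(samples, k)]
    for _ in range(iters):
        # Assign each sample to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for s in samples:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(s, centroids[i])))
            clusters[j].append(s)
        # Recompute each non-empty centroid as the mean of its assigned samples.
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centroids
```

With n samples of dimension d reduced to k prototypes, a node transmits k·d values instead of n·d, which is where the communication saving of this step comes from.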
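The second step filters which prototype sets are disseminated, using an information-theoretic measure so that only relevant or novel models reach the neighbors. The abstract does not specify the measure; one plausible sketch, under the assumption that a model can be summarized as a discrete distribution, is to gate sharing on the KL divergence from the last version sent (all names and the threshold are hypothetical):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as equal-length lists."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def should_share(new_dist, last_shared_dist, threshold=0.05):
    """Disseminate only if the new model carries enough new information
    relative to the version the neighbors already hold."""
    return kl_divergence(new_dist, last_shared_dist) > threshold
```

A node that skips transmissions below the threshold trades a small staleness in its neighbors' views for fewer messages, which matches the paper's goal of reducing communication overhead.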
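The third step disseminates the filtered models with a parallel gossip algorithm, which the paper analyzes through the age of information (AoI): how many rounds pass before an update reaches every node. The paper's exact protocol and topology are not given in the abstract; the toy simulation below assumes synchronous push gossip on a complete graph, where every holder forwards the update to one uniformly random neighbor per round:

```python
import random

def gossip_round(holders, neighbors, rng):
    """One parallel push-gossip round: every node holding the update
    forwards it to one uniformly random neighbor."""
    new_holders = set(holders)
    for node in holders:
        new_holders.add(rng.choice(neighbors[node]))
    return new_holders

def rounds_to_full_spread(n, rng):
    """Rounds until an update originating at node 0 reaches all n nodes of a
    complete graph; the round of arrival at a node is that node's AoI for it."""
    neighbors = {v: [u for u in range(n) if u != v] for v in range(n)}
    holders, rounds = {0}, 0
    while len(holders) < n:
        holders = gossip_round(holders, neighbors, rng)
        rounds += 1
    return rounds
```

Since the number of holders can at most double per round, full spread over n nodes takes at least ⌈log2 n⌉ rounds, and push gossip is known to finish in O(log n) rounds in expectation on a complete graph.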
Journal description:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Its focus is to publish the highest-quality research on the application and convergence of fuzzy logic, neural networks, evolutionary computing, rough sets, and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore continuously updated with new articles, and publication times are short.