Contrastive and Non-Contrastive Strategies for Federated Self-Supervised Representation Learning and Deep Clustering
Runxuan Miao; Erdem Koyuncu
IEEE Journal of Selected Topics in Signal Processing, vol. 18, no. 6, pp. 1070-1084
DOI: 10.1109/JSTSP.2024.3461311 · Published: 2024-09-18 · https://ieeexplore.ieee.org/document/10683880/
Citations: 0
Abstract
We investigate federated self-supervised representation learning (FedSSRL) and federated clustering (FedCl), aiming to derive low-dimensional representations of datasets distributed across multiple clients, potentially in a heterogeneous manner. Our proposed solutions for both FedSSRL and FedCl involve a comparative analysis from a broad learning context. In particular, we show that a two-stage model, beginning with representation learning and followed by clustering, is an effective learning strategy for both tasks. Notably, integrating a contrastive loss as a regularizer significantly boosts performance, even when the task is representation learning. Moreover, for FedCl, a contrastive loss is most effective in both stages, whereas FedSSRL benefits more from a non-contrastive loss. These findings are corroborated by extensive experiments on various image datasets.
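The abstract does not specify which contrastive loss is used as a regularizer. As a point of reference, a minimal sketch of one standard choice — the NT-Xent (InfoNCE-style) loss common in SimCLR-style self-supervised learning — is given below. The function name, interface, and use of NumPy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ntxent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (N, d) arrays of embeddings for two augmented views of N samples.
    Each sample's two views form a positive pair; the remaining 2N - 2
    embeddings in the batch serve as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # never treat a sample as its own negative
    # Index of each embedding's positive partner: row i pairs with row i +/- n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of the positive against all candidates, averaged over the batch.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), pos]).mean()
```

In a two-stage pipeline of the kind the abstract describes, a term like this would typically be added, with a weight, to the main representation-learning or clustering objective rather than used alone.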
About the Journal:
The IEEE Journal of Selected Topics in Signal Processing (JSTSP) focuses on the Field of Interest of the IEEE Signal Processing Society, which encompasses the theory and application of various signal processing techniques. These techniques include filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals using digital or analog devices. The term "signal" covers a wide range of data types, including audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical, and others.
The journal format allows for in-depth exploration of signal processing topics, enabling the Society to cover both established and emerging areas. This includes interdisciplinary fields such as biomedical engineering and language processing, as well as areas not traditionally associated with engineering.