{"title":"ACFL:用于医学图像分割的通信高效对抗联合学习","authors":"","doi":"10.1016/j.knosys.2024.112516","DOIUrl":null,"url":null,"abstract":"<div><p>Federated learning is a popular machine learning paradigm that achieves decentralized model training on distributed devices, ensuring data decentralization, privacy protection, and enhanced overall learning effectiveness. However, the non-independence and identically distributed (i.e., non-IID) nature of medical data across different institutes has remained a significant challenge in federated learning. Current research has mainly focused on addressing label distribution skew and classification scenarios, overlooking the feature distribution skew settings and more challenging semantic segmentation scenarios. In this paper, we present communication-efficient Adversarial Contrastive Federated Learning (ACFL) for the prevalent feature distribution skew scenarios in medical semantic segmentation. The core idea of the approach is to enhance model generalization by learning each client’s domain-invariant features through adversarial training. Specifically, we introduce a global discriminator that, through contrastive learning in the server, trains to differentiate feature representations from various clients. Meanwhile, the clients learn common domain-invariant features through prototype contrastive learning and global discriminator training. Furthermore, by utilizing Gaussian mixture models for virtual feature sampling on the server, compared to transmitting raw features, the ACFL method possesses the additional advantages of efficient communication and privacy protection. 
Extensive experiments on two medical semantic segmentation datasets and extension on three classification datasets validated the superiority of the proposed method.</p></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":null,"pages":null},"PeriodicalIF":7.2000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ACFL: Communication-Efficient adversarial contrastive federated learning for medical image segmentation\",\"authors\":\"\",\"doi\":\"10.1016/j.knosys.2024.112516\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Federated learning is a popular machine learning paradigm that achieves decentralized model training on distributed devices, ensuring data decentralization, privacy protection, and enhanced overall learning effectiveness. However, the non-independence and identically distributed (i.e., non-IID) nature of medical data across different institutes has remained a significant challenge in federated learning. Current research has mainly focused on addressing label distribution skew and classification scenarios, overlooking the feature distribution skew settings and more challenging semantic segmentation scenarios. In this paper, we present communication-efficient Adversarial Contrastive Federated Learning (ACFL) for the prevalent feature distribution skew scenarios in medical semantic segmentation. The core idea of the approach is to enhance model generalization by learning each client’s domain-invariant features through adversarial training. Specifically, we introduce a global discriminator that, through contrastive learning in the server, trains to differentiate feature representations from various clients. Meanwhile, the clients learn common domain-invariant features through prototype contrastive learning and global discriminator training. 
Furthermore, by utilizing Gaussian mixture models for virtual feature sampling on the server, compared to transmitting raw features, the ACFL method possesses the additional advantages of efficient communication and privacy protection. Extensive experiments on two medical semantic segmentation datasets and extension on three classification datasets validated the superiority of the proposed method.</p></div>\",\"PeriodicalId\":49939,\"journal\":{\"name\":\"Knowledge-Based Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Knowledge-Based Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S095070512401150X\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S095070512401150X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
ACFL: Communication-Efficient adversarial contrastive federated learning for medical image segmentation
Federated learning is a popular machine learning paradigm that achieves decentralized model training on distributed devices, ensuring data decentralization, privacy protection, and enhanced overall learning effectiveness. However, the non-independent and identically distributed (i.e., non-IID) nature of medical data across different institutes remains a significant challenge in federated learning. Current research has mainly focused on label distribution skew and classification scenarios, overlooking feature distribution skew settings and the more challenging semantic segmentation scenarios. In this paper, we present communication-efficient Adversarial Contrastive Federated Learning (ACFL) for the prevalent feature distribution skew scenarios in medical semantic segmentation. The core idea of the approach is to enhance model generalization by learning each client’s domain-invariant features through adversarial training. Specifically, we introduce a global discriminator that is trained, via contrastive learning on the server, to differentiate the feature representations of different clients. Meanwhile, the clients learn common domain-invariant features through prototype contrastive learning and adversarial training against the global discriminator. Furthermore, by sampling virtual features on the server from Gaussian mixture models instead of transmitting raw features, ACFL offers the additional advantages of efficient communication and privacy protection. Extensive experiments on two medical semantic segmentation datasets, with an extension to three classification datasets, validated the superiority of the proposed method.
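The abstract mentions prototype contrastive learning on the client side. A minimal sketch of one common formulation — an InfoNCE-style loss that pulls a feature toward its class prototype under a temperature `tau` — is shown below; the function name, shapes, and temperature are illustrative assumptions, and the paper's exact loss may differ.

```python
import numpy as np

def prototype_contrastive_loss(feat, prototypes, label, tau=0.1):
    """InfoNCE-style loss pulling a feature toward its class prototype."""
    feat = feat / np.linalg.norm(feat)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = protos @ feat / tau  # cosine similarities scaled by temperature
    log_prob = logits - np.log(np.exp(logits).sum())  # log-softmax over prototypes
    return -log_prob[label]

# Toy example: a feature perfectly aligned with prototype 0.
feat = np.array([1.0, 0.0])
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
loss_pos = prototype_contrastive_loss(feat, prototypes, label=0)
loss_neg = prototype_contrastive_loss(feat, prototypes, label=1)
```

With the correct label the loss is near zero; with the wrong label it is large, which is what drives features toward their own prototype during training.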
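The communication-efficiency claim rests on transmitting Gaussian mixture parameters rather than raw feature tensors, with the server then drawing virtual features from the received mixtures. A minimal sketch, assuming scikit-learn's `GaussianMixture` and illustrative shapes and component counts not taken from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for one client's local feature vectors (500 samples, 16-dim).
client_features = rng.normal(size=(500, 16))

# Client side: summarize the features as a small GMM; only its parameters
# (means, covariances, weights) would need to be communicated.
gmm = GaussianMixture(n_components=4, random_state=0).fit(client_features)

# Server side: draw virtual features from the received mixture for
# discriminator training, without ever seeing the raw features.
virtual_features, _ = gmm.sample(n_samples=200)
```

A 4-component full-covariance GMM over 16-dim features costs on the order of a few thousand floats, versus 500 × 16 floats per round for the raw features here, and the gap widens with dataset size.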
Journal introduction:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computation techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.