ACFL: Communication-Efficient adversarial contrastive federated learning for medical image segmentation

IF 7.2 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
DOI: 10.1016/j.knosys.2024.112516
Journal: Knowledge-Based Systems (JCR Q1, Computer Science, Artificial Intelligence)
Publication date: 2024-09-11 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S095070512401150X
Citations: 0

Abstract

Federated learning is a popular machine learning paradigm that achieves decentralized model training on distributed devices, ensuring data decentralization, privacy protection, and enhanced overall learning effectiveness. However, the non-independent and identically distributed (non-IID) nature of medical data across institutions remains a significant challenge in federated learning. Current research has mainly focused on label distribution skew and classification scenarios, overlooking feature distribution skew and the more challenging semantic segmentation setting. In this paper, we present communication-efficient Adversarial Contrastive Federated Learning (ACFL) for the prevalent feature distribution skew scenarios in medical semantic segmentation. The core idea is to enhance model generalization by learning each client's domain-invariant features through adversarial training. Specifically, we introduce a global discriminator that is trained on the server, via contrastive learning, to distinguish feature representations from different clients. Meanwhile, the clients learn common domain-invariant features through prototype contrastive learning and adversarial training against the global discriminator. Furthermore, by using Gaussian mixture models to sample virtual features on the server instead of transmitting raw features, ACFL gains the additional advantages of communication efficiency and privacy protection. Extensive experiments on two medical semantic segmentation datasets, and an extension to three classification datasets, validate the superiority of the proposed method.
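On the client side, the prototype contrastive learning described in the abstract can be sketched as an InfoNCE-style loss that pulls each feature toward the prototype of its own class. The function name, temperature value, and toy data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def prototype_contrastive_loss(features, labels, prototypes, tau=0.1):
    """InfoNCE-style loss pulling each feature toward its class prototype.

    features:   (N, D) L2-normalized client feature vectors
    labels:     (N,)   integer class ids
    prototypes: (C, D) L2-normalized per-class prototype vectors
    tau:        temperature (assumed hyperparameter)
    """
    logits = features @ prototypes.T / tau          # (N, C) scaled similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# toy example: two classes in a 4-d feature space
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
labels = np.array([0, 0, 0, 1, 1, 1])
protos = np.stack([feats[labels == c].mean(axis=0) for c in (0, 1)])
protos /= np.linalg.norm(protos, axis=1, keepdims=True)
loss = prototype_contrastive_loss(feats, labels, protos)
```

Because the prototypes here are the class means of the features themselves, the loss is lower for the correct labels than for deliberately mislabeled ones, which is the behavior the loss is meant to enforce.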

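The server-side virtual feature sampling mentioned in the abstract can be sketched as follows. As a simplification, each client's features are summarized by a single diagonal Gaussian (a one-component stand-in for the paper's Gaussian mixture models), and the server draws virtual features from these statistics rather than receiving raw features. All names and values are illustrative:

```python
import numpy as np

def fit_diag_gaussian(features):
    """Summarize a client's feature bank as per-dimension mean and std.
    Only these statistics would be sent to the server, not raw features."""
    return features.mean(axis=0), features.std(axis=0) + 1e-6

def sample_virtual_features(stats_per_client, n_per_client, rng):
    """Server side: draw virtual features from each client's Gaussian.
    The mixture over clients stands in for the paper's GMMs; this is a
    sketch, not the authors' exact procedure."""
    samples, client_ids = [], []
    for cid, (mu, sigma) in enumerate(stats_per_client):
        samples.append(rng.normal(mu, sigma, size=(n_per_client, mu.shape[0])))
        client_ids.append(np.full(n_per_client, cid))
    return np.concatenate(samples), np.concatenate(client_ids)

rng = np.random.default_rng(1)
# two hypothetical clients with shifted feature distributions (feature skew)
client_feats = [rng.normal(loc=c, size=(50, 8)) for c in (0.0, 3.0)]
stats = [fit_diag_gaussian(f) for f in client_feats]
virtual, ids = sample_virtual_features(stats, 20, rng)
```

The virtual features preserve each client's distribution statistics (the samples for the shifted client stay centered near its mean), which is what lets the global discriminator train on them in place of the raw, privacy-sensitive features.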
Source journal: Knowledge-Based Systems (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Articles per year: 1245
Review time: 7.8 months
Journal description: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems based on knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, provide balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.