Decentralized Personalization for Federated Medical Image Segmentation via Gossip Contrastive Mutual Learning

Jingyun Chen, Yading Yuan
Journal: IEEE Transactions on Medical Imaging, vol. 44, no. 7, pp. 2768-2783
DOI: 10.1109/TMI.2025.3549292
Publication date: 2025-03-07
Article page: https://ieeexplore.ieee.org/document/10916683/

Abstract

Federated Learning (FL) presents a promising avenue for collaborative model training among medical centers, facilitating knowledge exchange without compromising data privacy. However, vanilla FL is prone to server failures and rarely achieves optimal performance on all participating sites because of the heterogeneous data distributions among them. To overcome these challenges, we propose Gossip Contrastive Mutual Learning (GCML), a unified framework for optimizing personalized models in a decentralized environment, where the Gossip Protocol is employed for flexible and robust peer-to-peer communication. To enable efficient and reliable knowledge exchange in each communication round without requiring global knowledge across all sites, we introduce deep contrastive mutual learning (DCML), a simple yet effective scheme that encourages knowledge transfer between the incoming and local models through collaborative training on local data. Integrating DCML with other efforts to optimize site-specific models by leveraging useful information from peers, we evaluated the performance and efficiency of the proposed method on three publicly available datasets covering different segmentation tasks. Our extensive experimental results show that the proposed GCML framework outperformed both centralized and decentralized FL methods with significantly reduced communication overhead, indicating its potential for real-world deployment. Upon acceptance of the manuscript, the code will be available at: https://github.com/CUMC-Yuan-Lab/GCML
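The abstract does not spell out the algorithm, but it names two ingredients: gossip-based peer-to-peer model exchange and mutual learning between an incoming peer model and the local model on local data. The following is a minimal illustrative sketch of those two ideas only. All names here (`gossip_round`, `mutual_learning_loss`, the mixing weight `alpha`) are assumptions for illustration; this is not the paper's actual DCML objective, which also involves a contrastive component not reproduced from the abstract.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """Mean KL divergence KL(p || q) over a batch of distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean()

def mutual_learning_loss(local_logits, incoming_logits, labels, alpha=0.5):
    """Hypothetical mutual-learning objective: task cross-entropy on local
    data plus a KL term pulling the local model toward the incoming peer
    model's predictions. `alpha` is an assumed mixing weight."""
    p_local = softmax(local_logits)
    ce = -np.log(p_local[np.arange(len(labels)), labels] + 1e-12).mean()
    p_in = softmax(incoming_logits)
    return ce + alpha * kl(p_in, p_local)

def gossip_round(n, rng):
    """Push-style gossip: each of n sites sends its model to one uniformly
    chosen peer j != i. Returns {receiver: [sender indices]}."""
    incoming = {}
    for i in range(n):
        j = int(rng.integers(n - 1))
        j = j if j < i else j + 1  # skip i so a site never sends to itself
        incoming.setdefault(j, []).append(i)
    return incoming
```

In each round, a site would train on its own data with a loss of this shape, so knowledge flows between peers without any central server aggregating updates.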