Edvin Listo Zec, Tom Hagander, Eric Ihre-Thomason, Sarunas Girdzijauskas
{"title":"论分布式转变下分散深度学习中相似度指标的影响","authors":"Edvin Listo Zec, Tom Hagander, Eric Ihre-Thomason, Sarunas Girdzijauskas","doi":"arxiv-2409.10720","DOIUrl":null,"url":null,"abstract":"Decentralized Learning (DL) enables privacy-preserving collaboration among\norganizations or users to enhance the performance of local deep learning\nmodels. However, model aggregation becomes challenging when client data is\nheterogeneous, and identifying compatible collaborators without direct data\nexchange remains a pressing issue. In this paper, we investigate the\neffectiveness of various similarity metrics in DL for identifying peers for\nmodel merging, conducting an empirical analysis across multiple datasets with\ndistribution shifts. Our research provides insights into the performance of\nthese metrics, examining their role in facilitating effective collaboration. By\nexploring the strengths and limitations of these metrics, we contribute to the\ndevelopment of robust DL methods.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the effects of similarity metrics in decentralized deep learning under distributional shift\",\"authors\":\"Edvin Listo Zec, Tom Hagander, Eric Ihre-Thomason, Sarunas Girdzijauskas\",\"doi\":\"arxiv-2409.10720\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Decentralized Learning (DL) enables privacy-preserving collaboration among\\norganizations or users to enhance the performance of local deep learning\\nmodels. However, model aggregation becomes challenging when client data is\\nheterogeneous, and identifying compatible collaborators without direct data\\nexchange remains a pressing issue. 
In this paper, we investigate the\\neffectiveness of various similarity metrics in DL for identifying peers for\\nmodel merging, conducting an empirical analysis across multiple datasets with\\ndistribution shifts. Our research provides insights into the performance of\\nthese metrics, examining their role in facilitating effective collaboration. By\\nexploring the strengths and limitations of these metrics, we contribute to the\\ndevelopment of robust DL methods.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":\"27 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10720\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10720","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On the effects of similarity metrics in decentralized deep learning under distributional shift
Decentralized Learning (DL) enables privacy-preserving collaboration among organizations or users to enhance the performance of local deep learning models. However, model aggregation becomes challenging when client data is heterogeneous, and identifying compatible collaborators without direct data exchange remains a pressing issue. In this paper, we investigate the effectiveness of various similarity metrics in DL for identifying peers for model merging, conducting an empirical analysis across multiple datasets with distribution shifts. Our research provides insights into the performance of these metrics, examining their role in facilitating effective collaboration. By exploring the strengths and limitations of these metrics, we contribute to the development of robust DL methods.
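The abstract describes ranking peers by a similarity metric to decide whose models to merge. As an illustration only (the specific metrics the paper evaluates are not listed here), the sketch below uses cosine similarity over flattened model weights, one common choice for comparing clients without exchanging data; all function names and the peer dictionary are hypothetical.

```python
import numpy as np

def flatten_params(params):
    """Concatenate a list of weight arrays into a single 1-D vector."""
    return np.concatenate([p.ravel() for p in params])

def cosine_similarity(a, b):
    """Cosine similarity between two parameter vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_peers(own_vec, peer_vecs, top_k=2):
    """Rank peers by similarity of their model parameters to our own.

    own_vec   -- this client's flattened parameters
    peer_vecs -- dict mapping peer id -> that peer's flattened parameters
    Returns the top_k peer ids, most similar first.
    """
    scores = {pid: cosine_similarity(own_vec, vec)
              for pid, vec in peer_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

In a full DL loop, each client would compute such scores every round (or from model updates rather than raw weights) and aggregate only with its highest-ranked peers.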