Unravelling in Collaborative Learning

Aymeric Capitaine, Etienne Boursier, Antoine Scheid, Eric Moulines, Michael I. Jordan, El-Mahdi El-Mhamdi, Alain Durmus

arXiv - CS - Computer Science and Game Theory · arXiv:2407.14332 · 2024-07-19
Collaborative learning offers a promising avenue for leveraging decentralized
data. However, collaboration in groups of strategic learners is not a given. In
this work, we consider strategic agents who wish to train a model together but
have sampling distributions of different quality. The collaboration is
organized by a benevolent aggregator who gathers samples so as to maximize
total welfare, but is unaware of data quality. This setting allows us to shed
light on the deleterious effect of adverse selection in collaborative learning.
More precisely, we demonstrate that when data quality indices are private, the
coalition may undergo a phenomenon known as unravelling, wherein it shrinks
to the point of becoming empty or consisting solely of the worst agent. We
show how this issue can be addressed without making use of external transfers,
by proposing a novel method inspired by probabilistic verification. This
approach makes the grand coalition a Nash equilibrium with high probability
despite information asymmetry, thereby breaking unravelling.
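The unravelling dynamic described above can be illustrated with a toy simulation. This is a hypothetical, simplified model (not the paper's actual formalism): each agent holds a private quality score, the coalition rewards members according to the average quality of the pool, and any agent whose quality exceeds that average is subsidizing worse agents and prefers to exit. Iterating this exit rule shrinks the coalition down to its worst member, mirroring classic adverse selection.

```python
def unravel(qualities):
    """Toy adverse-selection dynamic: starting from the grand coalition,
    repeatedly remove the best remaining agent whenever its private
    quality exceeds the coalition average (it contributes more than it
    gets back from the pooled model, so it walks away)."""
    coalition = sorted(qualities)  # ascending: worst agent first
    while len(coalition) > 1:
        avg = sum(coalition) / len(coalition)
        if coalition[-1] > avg:
            coalition.pop()  # best remaining agent exits
        else:
            break  # no agent has an incentive to leave
    return coalition

# Four agents with heterogeneous data quality: the coalition
# unravels one agent at a time until only the worst remains.
print(unravel([0.2, 0.5, 0.7, 0.9]))  # -> [0.2]
```

With any strictly heterogeneous qualities, the loop removes the top agent at every step, so the stable outcome is the singleton worst-agent coalition; only when the remaining agents have equal quality does the exit incentive vanish. The verification mechanism proposed in the paper is designed precisely to block this cascade without side payments.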