Unravelling in Collaborative Learning

Aymeric Capitaine, Etienne Boursier, Antoine Scheid, Eric Moulines, Michael I. Jordan, El-Mahdi El-Mhamdi, Alain Durmus
{"title":"合作学习中的解构","authors":"Aymeric Capitaine, Etienne Boursier, Antoine Scheid, Eric Moulines, Michael I. Jordan, El-Mahdi El-Mhamdi, Alain Durmus","doi":"arxiv-2407.14332","DOIUrl":null,"url":null,"abstract":"Collaborative learning offers a promising avenue for leveraging decentralized\ndata. However, collaboration in groups of strategic learners is not a given. In\nthis work, we consider strategic agents who wish to train a model together but\nhave sampling distributions of different quality. The collaboration is\norganized by a benevolent aggregator who gathers samples so as to maximize\ntotal welfare, but is unaware of data quality. This setting allows us to shed\nlight on the deleterious effect of adverse selection in collaborative learning.\nMore precisely, we demonstrate that when data quality indices are private, the\ncoalition may undergo a phenomenon known as unravelling, wherein it shrinks up\nto the point that it becomes empty or solely comprised of the worst agent. We\nshow how this issue can be addressed without making use of external transfers,\nby proposing a novel method inspired by probabilistic verification. This\napproach makes the grand coalition a Nash equilibrium with high probability\ndespite information asymmetry, thereby breaking unravelling.","PeriodicalId":501316,"journal":{"name":"arXiv - CS - Computer Science and Game Theory","volume":"13 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unravelling in Collaborative Learning\",\"authors\":\"Aymeric Capitaine, Etienne Boursier, Antoine Scheid, Eric Moulines, Michael I. Jordan, El-Mahdi El-Mhamdi, Alain Durmus\",\"doi\":\"arxiv-2407.14332\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Collaborative learning offers a promising avenue for leveraging decentralized\\ndata. However, collaboration in groups of strategic learners is not a given. In\\nthis work, we consider strategic agents who wish to train a model together but\\nhave sampling distributions of different quality. The collaboration is\\norganized by a benevolent aggregator who gathers samples so as to maximize\\ntotal welfare, but is unaware of data quality. This setting allows us to shed\\nlight on the deleterious effect of adverse selection in collaborative learning.\\nMore precisely, we demonstrate that when data quality indices are private, the\\ncoalition may undergo a phenomenon known as unravelling, wherein it shrinks up\\nto the point that it becomes empty or solely comprised of the worst agent. We\\nshow how this issue can be addressed without making use of external transfers,\\nby proposing a novel method inspired by probabilistic verification. 
This\\napproach makes the grand coalition a Nash equilibrium with high probability\\ndespite information asymmetry, thereby breaking unravelling.\",\"PeriodicalId\":501316,\"journal\":{\"name\":\"arXiv - CS - Computer Science and Game Theory\",\"volume\":\"13 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computer Science and Game Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2407.14332\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Science and Game Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.14332","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Collaborative learning offers a promising avenue for leveraging decentralized data. However, collaboration in groups of strategic learners is not a given. In this work, we consider strategic agents who wish to train a model together but have sampling distributions of different quality. The collaboration is organized by a benevolent aggregator who gathers samples so as to maximize total welfare, but is unaware of data quality. This setting allows us to shed light on the deleterious effect of adverse selection in collaborative learning. More precisely, we demonstrate that when data quality indices are private, the coalition may undergo a phenomenon known as unravelling, wherein it shrinks up to the point that it becomes empty or solely comprised of the worst agent. We show how this issue can be addressed without making use of external transfers, by proposing a novel method inspired by probabilistic verification. This approach makes the grand coalition a Nash equilibrium with high probability despite information asymmetry, thereby breaking unravelling.
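To make the unravelling mechanism concrete, here is a minimal, self-contained sketch. It is not the paper's model: the payoff structure below is an illustrative assumption in the spirit of Akerlof-style adverse selection. Each agent holds a private quality index q_i; a member's assumed benefit from the coalition is the average quality of its members, and an agent's assumed outside option is its own quality. Under these assumptions, above-average agents keep dropping out until only the worst agent remains.

    # Illustrative sketch of unravelling under adverse selection.
    # Assumptions (not from the paper): coalition payoff = average member
    # quality, outside option = own quality; qualities are private, so the
    # aggregator cannot condition membership or rewards on them.

    def unravel(qualities):
        """Iteratively drop every agent who is strictly better off alone."""
        coalition = set(range(len(qualities)))
        while coalition:
            avg = sum(qualities[i] for i in coalition) / len(coalition)
            leavers = {i for i in coalition if qualities[i] > avg}
            if not leavers:
                return coalition  # no profitable deviation: stable coalition
            coalition -= leavers  # above-average agents exit
        return coalition

    # Hypothetical quality indices; the coalition collapses to the worst agent.
    print(unravel([0.1, 0.4, 0.6, 0.8, 0.95]))   # -> {0}

The paper's remedy targets exactly this dynamic: as stated in the abstract, a mechanism inspired by probabilistic verification makes the grand coalition a Nash equilibrium with high probability, even though the quality indices remain private.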