{"title":"Brokered Agreements in Multi-Party Machine Learning","authors":"Clement Fung, Ivan Beschastnikh","doi":"10.1145/3343737.3343744","DOIUrl":null,"url":null,"abstract":"Rapid machine learning (ML) adoption across a range of industries has prompted numerous concerns. These range from privacy (how is my data being used?) to fairness (is this model's result representative?) and provenance (who is using my data and how can I restrict this usage?).\n Now that ML is widely used, we believe it is time to rethink security, privacy, and incentives in the ML pipeline by re-considering control. We consider distributed multi-party ML proposals and identify their shortcomings. We then propose brokered learning, which distinguishes the curator (who determines the training set-up) from that of the broker coordinator (who runs the training process). We consider the implications of this setup and present evaluation results from implementing and deploying TorMentor, an example of a brokered learning system that implements the first distributed ML training system with anonymity guarantees.","PeriodicalId":202924,"journal":{"name":"Asia Pacific Workshop on Systems","volume":"156 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Asia Pacific Workshop on Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3343737.3343744","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Rapid machine learning (ML) adoption across a range of industries has prompted numerous concerns. These range from privacy (how is my data being used?) to fairness (is this model's result representative?) and provenance (who is using my data and how can I restrict this usage?).
Now that ML is widely used, we believe it is time to rethink security, privacy, and incentives in the ML pipeline by reconsidering control. We consider distributed multi-party ML proposals and identify their shortcomings. We then propose brokered learning, which distinguishes the role of the curator (who defines the training setup) from that of the broker (who coordinates the training process). We consider the implications of this setup and present evaluation results from implementing and deploying TorMentor, a brokered learning system that is the first distributed ML training system with anonymity guarantees.
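To make the role separation in the abstract concrete, below is a minimal illustrative sketch (not code from the paper; all class and method names are hypothetical) of how a curator that only defines the training setup could hand off coordination to a broker that aggregates updates from pseudonymous clients.

```python
# Hypothetical sketch of brokered learning's role separation:
# the curator defines the training setup; the broker runs training.
import random

class Curator:
    """Defines the training task but never interacts with clients directly."""
    def __init__(self, model_params, num_rounds, min_clients):
        self.config = {
            "model_params": model_params,  # initial model weights
            "num_rounds": num_rounds,      # how long to train
            "min_clients": min_clients,    # admission requirement per round
        }

class Broker:
    """Coordinates training on behalf of the curator; clients connect to the
    broker (e.g., over an anonymity network), not to the curator."""
    def __init__(self, config):
        self.config = config
        self.model = list(config["model_params"])

    def run_round(self, client_updates):
        # Aggregate pseudonymous client updates; the broker does not learn
        # which real-world party produced which update.
        if len(client_updates) < self.config["min_clients"]:
            return self.model  # not enough participants this round
        for i in range(len(self.model)):
            self.model[i] += sum(u[i] for u in client_updates) / len(client_updates)
        return self.model

# Toy usage: two pseudonymous clients submit gradient-like updates.
curator = Curator(model_params=[0.0, 0.0], num_rounds=1, min_clients=2)
broker = Broker(curator.config)
updates = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(2)]
print(broker.run_round(updates))
```

The point of the sketch is the interface boundary: the curator's influence is limited to the configuration it publishes, while the broker alone mediates client participation and update aggregation.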