{"title":"使用联邦学习和安全聚合的隐私保护机器学习","authors":"Dragos Lia, Mihai Togan","doi":"10.1109/ECAI50035.2020.9223127","DOIUrl":null,"url":null,"abstract":"Over the past few years, machine learning has been responsible for the rapid advancements in fields such as computer vision, natural language processing and speech recognition. No small part of this success is due to data becoming more and more available, often being collected in privacy-invasive ways. The aim of this work is to study the use of two privacy-preserving solutions for training machine learning models: Federated Learning (FL) and Secure Multiparty Computation (MPC). Federated learning is a subfield of machine learning that allows training models on a large, decentralized corpus of data residing on edge devices like smartphones. Instead of sharing data, users collaboratively train a model by only sending weight updates to a server. By leveraging secure multiparty computation, it can be ensured that the server cannot inspect any individual user's update. To assess the feasibility of these approaches in different settings, a client-server architecture was implemented in Python and multiple experiments were run on datasets made available by LEAF in order to investigate ways of improving the overall performance of the models trained in a federated manner.","PeriodicalId":324813,"journal":{"name":"2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Privacy-Preserving Machine Learning Using Federated Learning and Secure Aggregation\",\"authors\":\"Dragos Lia, Mihai Togan\",\"doi\":\"10.1109/ECAI50035.2020.9223127\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Over the past few years, machine learning has been responsible for the rapid advancements in fields such as computer vision, natural language processing and speech recognition. No small part of this success is due to data becoming more and more available, often being collected in privacy-invasive ways. The aim of this work is to study the use of two privacy-preserving solutions for training machine learning models: Federated Learning (FL) and Secure Multiparty Computation (MPC). Federated learning is a subfield of machine learning that allows training models on a large, decentralized corpus of data residing on edge devices like smartphones. Instead of sharing data, users collaboratively train a model by only sending weight updates to a server. By leveraging secure multiparty computation, it can be ensured that the server cannot inspect any individual user's update. 
To assess the feasibility of these approaches in different settings, a client-server architecture was implemented in Python and multiple experiments were run on datasets made available by LEAF in order to investigate ways of improving the overall performance of the models trained in a federated manner.\",\"PeriodicalId\":324813,\"journal\":{\"name\":\"2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ECAI50035.2020.9223127\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ECAI50035.2020.9223127","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Over the past few years, machine learning has driven rapid advances in fields such as computer vision, natural language processing, and speech recognition. No small part of this success is due to data becoming increasingly available, often collected in privacy-invasive ways. The aim of this work is to study two privacy-preserving approaches to training machine learning models: Federated Learning (FL) and Secure Multiparty Computation (MPC). Federated learning is a subfield of machine learning that allows models to be trained on a large, decentralized corpus of data residing on edge devices such as smartphones. Instead of sharing their data, users collaboratively train a model by sending only weight updates to a server. By leveraging secure multiparty computation, the server can be prevented from inspecting any individual user's update. To assess the feasibility of these approaches in different settings, a client-server architecture was implemented in Python, and multiple experiments were run on datasets made available by LEAF to investigate ways of improving the overall performance of models trained in a federated manner.
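
The abstract describes the core federated learning loop: clients train locally on private data and send only weight updates, which the server aggregates. The sketch below is a minimal, hypothetical illustration of federated averaging for a toy linear model; the function names, learning rate, and synthetic data are assumptions chosen for illustration, not the paper's implementation (which targets LEAF datasets via a Python client-server architecture).

```python
# Hypothetical sketch of federated averaging (FedAvg): clients train locally
# and send only weight deltas; the server averages them by dataset size.
# Model, data, and hyperparameters are illustrative, not the paper's setup.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    """One client's local training pass for a linear model with squared loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w - weights                       # only the weight delta leaves the client

def federated_round(weights, clients):
    """Server-side aggregation: average client deltas weighted by local dataset size."""
    total = sum(len(y) for _, y in clients)
    avg_delta = sum(len(y) * local_update(weights, X, y) for X, y in clients) / total
    return weights + avg_delta

# Toy run: three clients hold private data the server never sees.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, clients)
print(w)  # approaches true_w without any raw data leaving a client
```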
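
The abstract also relies on secure aggregation so that the server learns only the sum of updates, never an individual contribution. Below is a minimal sketch of the pairwise-masking idea behind such protocols (used, for example, in Bonawitz et al.'s secure aggregation). It assumes no client dropouts and uses floating-point masks for clarity, whereas practical MPC protocols work over finite fields and tolerate dropouts, so this is illustrative rather than the paper's protocol.

```python
# Toy secure-aggregation sketch: each pair of clients shares a random mask
# that one adds and the other subtracts. The masks cancel in the server-side
# sum, while any single masked update looks random on its own.
import numpy as np

def mask_updates(updates, seed=42):
    """Apply cancelling pairwise masks to each client's update vector."""
    n, rng = len(updates), np.random.default_rng(seed)
    masked = [u.astype(float) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            pair_mask = rng.normal(size=updates[i].shape)  # stand-in for a shared secret of clients i and j
            masked[i] += pair_mask
            masked[j] -= pair_mask
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 0.5])]
masked = mask_updates(updates)
# The server only ever sees `masked`; individual vectors reveal nothing,
# but their sum matches the sum of the true updates.
print(np.sum(masked, axis=0), np.sum(updates, axis=0))
```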