Privacy Preserving Deep Learning with Distributed Encoders

Yitian Zhang, H. Salehinejad, J. Barfett, E. Colak, S. Valaee
DOI: 10.1109/GlobalSIP45357.2019.8969086
Published in: 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), November 2019
Citations: 5

Abstract

In this paper, we propose a distributed machine learning framework for training and inference in machine learning models using distributed data while preserving the privacy of the data owner. In the training mode, we deploy an encoder on the end-user device that extracts high-level features from the input data. The extracted features, along with the corresponding annotations, are sent to a centralized machine learning server. In the inference mode, users submit the features extracted by the encoder, rather than the original data, to the server. This approach enables users to contribute to training a machine learning model and to use inference services without sharing their original data with the server or a third party. We have studied this approach on the MNIST, Fashion-MNIST, SVHN and CIFAR-10 datasets. The results show high classification accuracy for neural networks trained with encoded features, and high encryption performance for the encoders.
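The workflow described in the abstract (encode on the device, send only features and labels to the server, train and infer on those features) can be sketched minimally. This is a structural illustration only, not the paper's method: the `ClientEncoder` below is a fixed random ReLU projection and the server uses a nearest-centroid classifier, both hypothetical stand-ins for the paper's learned neural encoder and neural-network classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

class ClientEncoder:
    """Runs on the end-user device; only its output ever leaves the device.

    Stand-in for the paper's trained neural encoder: a fixed random
    projection followed by a ReLU.
    """
    def __init__(self, in_dim, feat_dim):
        self.W = rng.standard_normal((in_dim, feat_dim)) / np.sqrt(in_dim)

    def encode(self, x):
        return np.maximum(x @ self.W, 0.0)  # high-level features, not raw data

class Server:
    """Sees only (features, annotation) pairs, never the original inputs."""
    def __init__(self):
        self.centroids = {}

    def train(self, feats, labels):
        # Toy classifier on encoded features: one centroid per class.
        for c in np.unique(labels):
            self.centroids[c] = feats[labels == c].mean(axis=0)

    def infer(self, feat):
        # Inference also consumes encoded features submitted by the user.
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(feat - self.centroids[c]))

# Two synthetic "users" whose raw data stays on-device.
encoder = ClientEncoder(in_dim=64, feat_dim=16)
x0, y0 = rng.normal(0.0, 1.0, (50, 64)), np.zeros(50, dtype=int)
x1, y1 = rng.normal(3.0, 1.0, (50, 64)), np.ones(50, dtype=int)

# Training mode: only encoded features plus annotations reach the server.
server = Server()
server.train(np.vstack([encoder.encode(x0), encoder.encode(x1)]),
             np.concatenate([y0, y1]))

# Inference mode: the user submits encoded features instead of the sample.
probe = encoder.encode(rng.normal(3.0, 1.0, (1, 64)))[0]
pred = server.infer(probe)
```

The privacy argument rests on the encoder being hard to invert: the server operates entirely in feature space, so the original samples are never transmitted in either mode.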