Private Deep Neural Network Models Publishing for Machine Learning as a Service

Yunlong Mao, Boyu Zhu, Wenbo Hong, Zhifei Zhu, Yuan Zhang, Sheng Zhong
{"title":"Private Deep Neural Network Models Publishing for Machine Learning as a Service","authors":"Yunlong Mao, Boyu Zhu, Wenbo Hong, Zhifei Zhu, Yuan Zhang, Sheng Zhong","doi":"10.1109/IWQoS49365.2020.9212853","DOIUrl":null,"url":null,"abstract":"Machine learning as a service has emerged recently to relieve tensions between heavy deep learning tasks and increasing application demands. A deep learning service provider could help its clients to benefit from deep learning techniques at an affordable price instead of huge resource consumption. However, the service provider may have serious concerns about model privacy when a deep neural network model is published. Previous model publishing solutions mainly depend on additional artificial noise. By adding elaborated noises to parameters or gradients during the training phase, strong privacy guarantees like differential privacy could be achieved. However, this kind of approach cannot give guarantees on some other aspects, such as the quality of the disturbingly trained model and the convergence of the modified learning algorithm. In this paper, we propose an alternative private deep neural network model publishing solution, which caused no interference in the original training phase. We provide privacy, convergence and quality guarantees for the published model at the same time. Furthermore, our solution can achieve a smaller privacy budget when compared with artificial noise based training solutions proposed in previous works. Specifically, our solution gives an acceptable test accuracy with privacy budget ∊ = 1. 
Meanwhile, membership inference attack accuracy will be deceased from nearly 90% to around 60% across all classes.","PeriodicalId":177899,"journal":{"name":"2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IWQoS49365.2020.9212853","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Machine learning as a service has emerged recently to relieve the tension between heavy deep learning tasks and increasing application demands. A deep learning service provider can help its clients benefit from deep learning techniques at an affordable price instead of incurring huge resource consumption. However, the service provider may have serious concerns about model privacy when a deep neural network model is published. Previous model publishing solutions mainly depend on additional artificial noise. By adding carefully calibrated noise to parameters or gradients during the training phase, strong privacy guarantees such as differential privacy can be achieved. However, this kind of approach cannot give guarantees on other aspects, such as the quality of the noise-perturbed model and the convergence of the modified learning algorithm. In this paper, we propose an alternative private deep neural network model publishing solution that causes no interference in the original training phase. We provide privacy, convergence, and quality guarantees for the published model at the same time. Furthermore, our solution achieves a smaller privacy budget than the artificial-noise-based training solutions proposed in previous works. Specifically, our solution gives acceptable test accuracy with privacy budget ε = 1. Meanwhile, membership inference attack accuracy is decreased from nearly 90% to around 60% across all classes.
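The abstract contrasts the proposed approach with the standard artificial-noise baseline, in which differential privacy is obtained by clipping each example's gradient and adding calibrated Gaussian noise during training (the DP-SGD recipe). As a rough illustration of that baseline, the sketch below shows one such noisy update step; the function name, toy gradients, and hyperparameter values (`clip_norm`, `noise_multiplier`) are illustrative assumptions, not from the paper.

```python
import numpy as np

def noisy_sgd_step(weights, per_example_grads, lr=0.1,
                   clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style update: clip each per-example gradient to
    clip_norm, average, then add Gaussian noise scaled to clip_norm."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise std is proportional to the clipping norm, divided by batch size.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return weights - lr * (avg + noise)

# Toy usage: two example gradients for a 3-parameter model.
w = np.zeros(3)
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.0, 0.0, 2.0])]
w = noisy_sgd_step(w, grads, rng=np.random.default_rng(0))
```

Because the noise is injected into every training step, it perturbs the learned parameters directly, which is exactly why this baseline offers no built-in guarantee on model quality or convergence — the gap the paper's training-free publishing approach aims to close.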