Using Anonymous Protocol for Privacy Preserving Deep Learning Model

A. Tran, T. Luong, V. Dang, V. Huynh
{"title":"使用匿名协议保护隐私的深度学习模型","authors":"A. Tran, T. Luong, V. Dang, V. Huynh","doi":"10.1109/NICS51282.2020.9335880","DOIUrl":null,"url":null,"abstract":"Deep learning is an effective approach to many real-world problems. The effectiveness of deep learning models depends largely on the amount of data being used to train the model. However, these data are often private or sensitive, which make it challenging to collect and apply deep learning models in practice. In this paper, we introduce an anonymous deep neural network training protocol called ATP (Anonymous Training Protocol), in which each party owns a private dataset and collectively trains a global model without any data leakage to other parties. To achieve this, we use the technique of sharing random gradients with large aggregate mini-batch sizes combined with the addition of temporary random noise. These random noises will then be sent back through an anonymous network to be filtered out during the update phase of the aggregate server. The proposed ATP model allows protection of the shared gradients even when the aggregating server colludes with other n-2 participants. We evaluate the model on the MNIST dataset with the CNN network architecture, resulting in an accuracy of 98.09%. The results show that the proposed ATP model has high practical applicability in protecting privacy in deep learning.","PeriodicalId":308944,"journal":{"name":"2020 7th NAFOSTED Conference on Information and Computer Science (NICS)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Using Anonymous Protocol for Privacy Preserving Deep Learning Model\",\"authors\":\"A. Tran, T. Luong, V. Dang, V. Huynh\",\"doi\":\"10.1109/NICS51282.2020.9335880\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning is an effective approach to many real-world problems. The effectiveness of deep learning models depends largely on the amount of data being used to train the model. However, these data are often private or sensitive, which make it challenging to collect and apply deep learning models in practice. In this paper, we introduce an anonymous deep neural network training protocol called ATP (Anonymous Training Protocol), in which each party owns a private dataset and collectively trains a global model without any data leakage to other parties. To achieve this, we use the technique of sharing random gradients with large aggregate mini-batch sizes combined with the addition of temporary random noise. These random noises will then be sent back through an anonymous network to be filtered out during the update phase of the aggregate server. The proposed ATP model allows protection of the shared gradients even when the aggregating server colludes with other n-2 participants. We evaluate the model on the MNIST dataset with the CNN network architecture, resulting in an accuracy of 98.09%. 
The results show that the proposed ATP model has high practical applicability in protecting privacy in deep learning.\",\"PeriodicalId\":308944,\"journal\":{\"name\":\"2020 7th NAFOSTED Conference on Information and Computer Science (NICS)\",\"volume\":\"36 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 7th NAFOSTED Conference on Information and Computer Science (NICS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NICS51282.2020.9335880\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 7th NAFOSTED Conference on Information and Computer Science (NICS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NICS51282.2020.9335880","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Deep learning is an effective approach to many real-world problems. The effectiveness of a deep learning model depends largely on the amount of data used to train it. However, such data are often private or sensitive, which makes it challenging to collect them and to apply deep learning models in practice. In this paper, we introduce an anonymous deep neural network training protocol called ATP (Anonymous Training Protocol), in which each party owns a private dataset and all parties collectively train a global model without leaking any data to the others. To achieve this, we share random gradients with large aggregate mini-batch sizes and add temporary random noise to each share. This noise is then sent back through an anonymous network and filtered out during the update phase at the aggregation server. The proposed ATP model protects the shared gradients even when the aggregation server colludes with n-2 of the other participants. We evaluate the model on the MNIST dataset with a CNN architecture, achieving an accuracy of 98.09%. The results show that the proposed ATP model is highly practical for protecting privacy in deep learning.
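
The masking-and-cancellation idea described in the abstract can be illustrated with a short numerical sketch. The Python snippet below is a minimal illustration, not the authors' implementation: the names `Party` and `masked_update` are invented here, and the anonymous network is simulated simply by handing the noise values to the server with no sender identity attached. It shows why the temporary noise cancels exactly during the server's update phase while hiding each individual gradient.

```python
import numpy as np

# Hypothetical sketch of the gradient-masking idea from the abstract.
# All names here are illustrative and do not come from the paper.

rng = np.random.default_rng(0)

class Party:
    """One data owner: computes a local gradient and masks it with noise."""
    def __init__(self, dim):
        self.dim = dim

    def masked_update(self, local_gradient):
        # Temporary random noise hides the true gradient from the server.
        noise = rng.normal(scale=10.0, size=self.dim)
        masked = local_gradient + noise
        # The masked gradient goes directly to the aggregation server;
        # the noise travels separately over an anonymous channel, so the
        # server cannot link a noise value to the party that sent it.
        return masked, noise

# --- aggregation phase (5 parties, toy 4-dimensional gradients) ---
dim = 4
parties = [Party(dim) for _ in range(5)]
true_grads = [rng.normal(size=dim) for _ in parties]

masked_grads, anonymous_noises = zip(
    *(p.masked_update(g) for p, g in zip(parties, true_grads))
)

# Update phase: the server sums the masked gradients and filters out the
# noise it received (unlinkably) through the anonymous network.
aggregate = sum(masked_grads) - sum(anonymous_noises)

assert np.allclose(aggregate, sum(true_grads))
print("aggregate gradient:", aggregate)
```

Under this reading, the server learns the multiset of noise values but not which party produced which one, so it cannot subtract the right noise from any individual masked gradient; only the aggregate is recoverable, which is consistent with the abstract's claim of protection even when the server colludes with n-2 participants.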