Intellectual Property (IP) Protection for Deep Learning and Federated Learning Models

F. Koushanfar
{"title":"深度学习和联邦学习模型的知识产权保护","authors":"F. Koushanfar","doi":"10.1145/3531536.3532957","DOIUrl":null,"url":null,"abstract":"This talk focuses on end-to-end protection of the present and emerging Deep Learning (DL) and Federated Learning (FL) models. On the one hand, DL and FL models are usually trained by allocating significant computational resources to process massive training data. The built models are therefore considered as the owner's IP and need to be protected. On the other hand, malicious attackers may take advantage of the models for illegal usages. IP protection needs to be considered during the design and training of the DL models before the owners make their models publicly available. The tremendous parameter space of DL models allows them to learn hidden features automatically. We explore the 'over-parameterization' of DL models and demonstrate how to hide additional information within DL. Particularly, we discuss a number of our end-to-end automated frameworks over the past few years that leverage information hiding for IP protection, including: DeepSigns[5] and DeepMarks[2], the first DL watermarking and fingerprinting frameworks that work by embedding the owner's signature in the dynamic activations and output behaviors of the DL model; DeepAttest[1], the first hardware-based attestation framework for verifying the legitimacy of the deployed model via on-device attestation. We also develop a multi-bit black-box DNN watermarking scheme[3] and demonstrate spread spectrum-based DL watermarking[4]. In the context of Federated Learning (FL), we show how these results can be leveraged for the design of a novel holistic covert communication framework that allows stealthy information sharing between local clients while preserving FL convergence. We conclude by outlining the open challenges and emerging directions.","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Intellectual Property (IP) Protection for Deep Learning and Federated Learning Models\",\"authors\":\"F. Koushanfar\",\"doi\":\"10.1145/3531536.3532957\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This talk focuses on end-to-end protection of the present and emerging Deep Learning (DL) and Federated Learning (FL) models. On the one hand, DL and FL models are usually trained by allocating significant computational resources to process massive training data. The built models are therefore considered as the owner's IP and need to be protected. On the other hand, malicious attackers may take advantage of the models for illegal usages. IP protection needs to be considered during the design and training of the DL models before the owners make their models publicly available. The tremendous parameter space of DL models allows them to learn hidden features automatically. We explore the 'over-parameterization' of DL models and demonstrate how to hide additional information within DL. 
Particularly, we discuss a number of our end-to-end automated frameworks over the past few years that leverage information hiding for IP protection, including: DeepSigns[5] and DeepMarks[2], the first DL watermarking and fingerprinting frameworks that work by embedding the owner's signature in the dynamic activations and output behaviors of the DL model; DeepAttest[1], the first hardware-based attestation framework for verifying the legitimacy of the deployed model via on-device attestation. We also develop a multi-bit black-box DNN watermarking scheme[3] and demonstrate spread spectrum-based DL watermarking[4]. In the context of Federated Learning (FL), we show how these results can be leveraged for the design of a novel holistic covert communication framework that allows stealthy information sharing between local clients while preserving FL convergence. We conclude by outlining the open challenges and emerging directions.\",\"PeriodicalId\":164949,\"journal\":{\"name\":\"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3531536.3532957\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3531536.3532957","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This talk focuses on end-to-end protection of present and emerging Deep Learning (DL) and Federated Learning (FL) models. On the one hand, DL and FL models are usually trained by allocating significant computational resources to process massive training data; the resulting models are therefore the owner's intellectual property (IP) and need to be protected. On the other hand, malicious attackers may exploit the models for illegal uses. IP protection therefore needs to be built into the design and training of DL models before their owners make them publicly available. The tremendous parameter space of DL models allows them to learn hidden features automatically. We explore this 'over-parameterization' and demonstrate how additional information can be hidden within DL models. In particular, we discuss a number of end-to-end automated frameworks we have developed over the past few years that leverage information hiding for IP protection, including: DeepSigns[5] and DeepMarks[2], the first DL watermarking and fingerprinting frameworks, which embed the owner's signature in the dynamic activations and output behaviors of the DL model; and DeepAttest[1], the first hardware-based attestation framework for verifying the legitimacy of a deployed model via on-device attestation. We also develop a multi-bit black-box DNN watermarking scheme[3] and demonstrate spread-spectrum-based DL watermarking[4]. In the context of FL, we show how these results can be leveraged to design a novel holistic covert communication framework that allows stealthy information sharing between local clients while preserving FL convergence. We conclude by outlining open challenges and emerging directions.
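
To make the activation-based watermarking idea concrete, below is a minimal PyTorch sketch in the spirit of DeepSigns[5]. It is an illustrative assumption, not the published implementation: every name and hyperparameter (Net, SIG_BITS, FEAT_DIM, the 0.1 regularization weight) is hypothetical. A secret projection matrix maps a hidden layer's mean activations to logits, a regularization term pulls those logits toward the owner's signature bits during training, and the owner later recovers the bits with the same secret key.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch of activation-based white-box watermarking in the spirit of
# DeepSigns[5]. All names and hyperparameters are illustrative assumptions.

torch.manual_seed(0)

SIG_BITS = 32   # length of the owner's binary signature
FEAT_DIM = 128  # width of the watermarked hidden layer

signature = torch.randint(0, 2, (SIG_BITS,)).float()  # owner's secret bits
proj = torch.randn(FEAT_DIM, SIG_BITS)                # secret projection key

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, FEAT_DIM)
        self.fc2 = nn.Linear(FEAT_DIM, 10)

    def forward(self, x):
        h = F.relu(self.fc1(x))  # hidden activations carry the watermark
        return self.fc2(h), h

def watermark_loss(hidden):
    # Project the batch-mean activation onto the secret key and pull the
    # result toward the signature bits with a cross-entropy term.
    mu = hidden.mean(dim=0)            # (FEAT_DIM,)
    logits = mu @ proj                 # (SIG_BITS,)
    return F.binary_cross_entropy_with_logits(logits, signature)

def extract_signature(model, x):
    # Owner-side verification: read the bits back from the activations.
    with torch.no_grad():
        _, h = model(x)
        return (h.mean(dim=0) @ proj > 0).float()

model = Net()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 784)               # stand-in for real training data
y = torch.randint(0, 10, (64,))

for _ in range(200):
    logits, hidden = model(x)
    loss = F.cross_entropy(logits, y) + 0.1 * watermark_loss(hidden)
    opt.zero_grad()
    loss.backward()
    opt.step()

recovered = extract_signature(model, x)
print("bit agreement:", (recovered == signature).float().mean().item())
```

The design point this illustrates is that the mark lives in the model's learned behavior (its activation statistics) rather than in any file wrapper, so it travels with the weights after deployment.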
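Similarly, a hedged sketch of the spread-spectrum idea referenced in [4]: each signature bit is spread across many weights by a secret pseudo-random ±1 carrier, and detection correlates the (possibly perturbed) weights against the same carriers. All constants here are illustrative assumptions, not values from the paper.

```python
import torch

# Hedged sketch of spread-spectrum watermark embedding/detection over a
# flattened weight chunk; constants are illustrative assumptions.

torch.manual_seed(1)

N_WEIGHTS = 4096  # flattened weight chunk carrying the mark
SIG_BITS = 16
ALPHA = 0.05      # embedding strength (illustrative trade-off knob)

bits = torch.randint(0, 2, (SIG_BITS,)).float() * 2 - 1  # ±1 signature
carriers = torch.randn(SIG_BITS, N_WEIGHTS).sign()       # secret ±1 carriers

weights = torch.randn(N_WEIGHTS)  # stand-in for trained model weights

# Embed: superimpose each bit's carrier, scaled by ALPHA.
marked = weights + ALPHA * (bits @ carriers)

# Attack: small perturbation standing in for noise or light fine-tuning.
attacked = marked + 0.005 * torch.randn(N_WEIGHTS)

# Detect: correlate against each carrier; the sign recovers the bit, since
# the carriers are (nearly) orthogonal pseudo-random sequences.
corr = carriers @ attacked / N_WEIGHTS
recovered = corr.sign()
print("bit agreement:", (recovered == bits).float().mean().item())
```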