TextBack: Watermarking Text Classifiers using Backdooring

Nandish Chattopadhyay, Rajan Kataria, A. Chattopadhyay
DOI: 10.1109/DSD57027.2022.00053
Published in: 2022 25th Euromicro Conference on Digital System Design (DSD), 2022-08-01
Citation count: 0

Abstract

Creating high-performance neural networks is expensive, incurring costs attributable to data collection and curation, neural architecture search, and training on dedicated hardware accelerators. Stakeholders invested in any one or more of these aspects of deep neural network training expect assurances of ownership and guarantees that unauthorised usage is detectable and therefore preventable. Watermarking the trained neural architectures can prove to be a solution to this. While such techniques have been demonstrated in image classification tasks, we posit that a watermarking scheme can be developed for natural language processing applications as well. In this paper, we propose TextBack, a watermarking technique developed for text classifiers using backdooring. We have tested the functionality-preserving properties and verifiable proof of ownership of TextBack on multiple neural architectures and datasets for text classification tasks. The watermarked models consistently achieve accuracies within 1-2% of models without any watermarking, while remaining reliably verifiable during watermark verification. TextBack has been tested on two different kinds of trigger sets, which the owner can choose as preferred. We have studied the efficiency of the algorithm that embeds the watermarks by fine-tuning on a combination of trigger samples and clean samples. The computational-cost benefit of TextBack's fine-tuning approach on pre-trained models, compared with embedding watermarks by training models from scratch, is also established experimentally. The watermarking scheme is not computation-intensive and adds no additional burden to the neural architecture. This makes TextBack suitable for lightweight applications on edge devices, as the watermarked model can be deployed on resource-constrained hardware and SoCs when required.
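The backdoor-based watermarking workflow the abstract describes (build a secret trigger set, fine-tune on a mix of trigger and clean samples, then verify ownership by checking trigger-set accuracy) can be sketched as follows. This is not the authors' code: the function names, the 20% trigger ratio, and the 90% verification threshold are illustrative assumptions, and the classifier is abstracted as any callable from sentence to label.

```python
import random

def make_trigger_set(sentences, num_classes, seed=0):
    # Assign each out-of-distribution trigger sentence a fixed, secret
    # (and typically semantically wrong) label chosen at random.
    rng = random.Random(seed)
    return [(s, rng.randrange(num_classes)) for s in sentences]

def finetuning_batches(clean_data, trigger_set, trigger_ratio=0.2, seed=0):
    # Mix clean samples with oversampled trigger samples and shuffle, so
    # fine-tuning embeds the backdoor while preserving clean accuracy.
    rng = random.Random(seed)
    n_trig = max(1, int(len(clean_data) * trigger_ratio))
    mixed = list(clean_data) + [rng.choice(trigger_set) for _ in range(n_trig)]
    rng.shuffle(mixed)
    return mixed

def verify_watermark(classify, trigger_set, threshold=0.9):
    # Ownership is claimed if the suspect model reproduces the secret
    # trigger labels at a rate far above chance.
    hits = sum(classify(s) == label for s, label in trigger_set)
    return hits / len(trigger_set) >= threshold
```

In this sketch a watermarked model answers the trigger queries with the secret labels and passes verification, while an unrelated model does not; the threshold trades off false claims against robustness of the proof.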