Balancing Robustness and Covertness in NLP Model Watermarking: A Multi-Task Learning Approach

Long Dai, Jiarong Mao, Liao Xu, Xuefeng Fan, Xiaoyi Zhou
{"title":"Balancing Robustness and Covertness in NLP Model Watermarking: A Multi-Task Learning Approach","authors":"Long Dai, Jiarong Mao, Liao Xu, Xuefeng Fan, Xiaoyi Zhou","doi":"10.1109/ISCC58397.2023.10218209","DOIUrl":null,"url":null,"abstract":"The popularity of ChatGPT demonstrates the immense commercial value of natural language processing (NLP) technology. However, NLP models are vulnerable to piracy and redistribution, which harms the economic interests of model owners. Existing NLP model watermarking schemes struggle to balance robustness and covertness. Robust watermarking require embedding more information, which compromises their covertness; conversely, covert watermarking are challenging to embed more information, which affects their robustness. This paper proposes an NLP model watermarking framework that uses multi-task learning to address the conflict between robustness and covertness in existing schemes. Specifically, a covert trigger set is established to implement remote verification of the watermark model, and a covert auxiliary network is designed to enhance the watermark model's robustness. The proposed watermarking framework is evaluated on two benchmark datasets and three mainstream NLP models. The experiments validate the frame-work's excellent covertness, robustness, and low false positive rate.","PeriodicalId":265337,"journal":{"name":"2023 IEEE Symposium on Computers and Communications (ISCC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Symposium on Computers and Communications (ISCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCC58397.2023.10218209","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The popularity of ChatGPT demonstrates the immense commercial value of natural language processing (NLP) technology. However, NLP models are vulnerable to piracy and redistribution, which harms the economic interests of model owners. Existing NLP model watermarking schemes struggle to balance robustness and covertness: robust watermarks require embedding more information, which compromises their covertness; conversely, covert watermarks can carry only limited information, which weakens their robustness. This paper proposes an NLP model watermarking framework that uses multi-task learning to address the conflict between robustness and covertness in existing schemes. Specifically, a covert trigger set is established to enable remote verification of the watermarked model, and a covert auxiliary network is designed to enhance the watermarked model's robustness. The proposed watermarking framework is evaluated on two benchmark datasets and three mainstream NLP models. The experiments validate the framework's excellent covertness, robustness, and low false positive rate.
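The paper's implementation is not reproduced on this page. To make the abstract's multi-task idea concrete, below is a minimal PyTorch sketch of one plausible reading: the model is trained jointly on its main task and on a small owner-chosen trigger set, with an auxiliary head absorbing part of the watermark signal, and verification is a black-box query against that trigger set. All names and values here (`WatermarkedClassifier`, `aux_head`, `lambda_wm`, `verify_remote`, the 0.9 threshold) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class WatermarkedClassifier(nn.Module):
    """Hypothetical watermark-embedding model: a shared encoder feeding a
    main task head and an auxiliary head (standing in for the paper's
    covert auxiliary network)."""

    def __init__(self, encoder: nn.Module, hidden_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder  # any text encoder producing [B, hidden_dim]
        self.task_head = nn.Linear(hidden_dim, num_classes)
        self.aux_head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        return self.task_head(h), self.aux_head(h)

def train_step(model, optimizer, task_batch, trigger_batch, lambda_wm=0.1):
    """One multi-task update: main-task loss plus a watermark loss computed
    on the covert trigger set. lambda_wm trades robustness against how much
    the watermark perturbs main-task training."""
    criterion = nn.CrossEntropyLoss()
    x, y = task_batch
    x_t, y_t = trigger_batch  # trigger inputs with owner-chosen labels

    task_logits, _ = model(x)
    trig_logits, aux_logits = model(x_t)

    loss_task = criterion(task_logits, y)
    # Both heads must fit the trigger labels, so the watermark signal is
    # carried redundantly rather than by a single fragile path.
    loss_wm = criterion(trig_logits, y_t) + criterion(aux_logits, y_t)

    loss = loss_task + lambda_wm * loss_wm
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def verify_remote(query_fn, trigger_set, threshold=0.9):
    """Black-box remote verification: if a suspect model reproduces the
    owner-chosen trigger labels well above chance, the watermark is
    considered present. query_fn maps an input to a predicted label."""
    hits = sum(int(query_fn(x) == y) for x, y in trigger_set)
    return hits / len(trigger_set) >= threshold
```

Under this reading, covertness comes from the trigger set looking like ordinary task inputs, while robustness comes from the redundant auxiliary objective; the actual trigger construction and auxiliary-network design are specified in the paper itself.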