LOCAT: Localization-Driven Text Watermarking via Large Language Models

Impact Factor 11.1 · CAS Zone 1 (Engineering & Technology) · JCR Q1, Engineering, Electrical & Electronic
Liang Ding; Xi Yang; Yang Yang; Weiming Zhang
{"title":"LOCAT: Localization-Driven Text Watermarking via Large Language Models","authors":"Liang Ding;Xi Yang;Yang Yang;Weiming Zhang","doi":"10.1109/TCSVT.2025.3570858","DOIUrl":null,"url":null,"abstract":"The rapid advancement of large language models (LLMs) has raised concerns regarding potential misuse and underscores the importance of verifying text authenticity. Text watermarking, which embeds covert identifiers into generated content, offers a viable means for such verification. Such watermarking can be implemented either by modifying the generation process of an LLM or via post-processing techniques like lexical substitution, with the latter being particularly valuable when access to model parameters is restricted. However, existing lexical substitution-based methods often face a trade-off between maintaining text quality and ensuring robust watermarking. Addressing this limitation, our work focuses on enhancing both the robustness and imperceptibility of text watermarks within the lexical substitution paradigm. We propose a localization-based watermarking method that enhances robustness while maintaining text naturalness. First, a precise localization module identifies optimal substitution targets. Then, we leverage LLMs to generate contextually appropriate synonyms, and the watermark is embedded through binary-encoded substitutions. To address different usage scenarios, we focus on the trade-off between watermark robustness and text quality. Compared to existing methods, our approach significantly enhances watermark robustness while maintaining comparable text quality and achieves similar robustness levels while improving text quality. 
Even under severe semantic distortions, including word deletion, synonym substitution, polishing, and re-translation, the watermark remains detectable.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 8","pages":"8406-8420"},"PeriodicalIF":11.1000,"publicationDate":"2025-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11006126/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

The rapid advancement of large language models (LLMs) has raised concerns regarding potential misuse and underscores the importance of verifying text authenticity. Text watermarking, which embeds covert identifiers into generated content, offers a viable means for such verification. Such watermarking can be implemented either by modifying the generation process of an LLM or via post-processing techniques like lexical substitution, with the latter being particularly valuable when access to model parameters is restricted. However, existing lexical substitution-based methods often face a trade-off between maintaining text quality and ensuring robust watermarking. Addressing this limitation, our work focuses on enhancing both the robustness and imperceptibility of text watermarks within the lexical substitution paradigm. We propose a localization-based watermarking method that enhances robustness while maintaining text naturalness. First, a precise localization module identifies optimal substitution targets. Then, we leverage LLMs to generate contextually appropriate synonyms, and the watermark is embedded through binary-encoded substitutions. To address different usage scenarios, we focus on the trade-off between watermark robustness and text quality. Compared with existing methods, our approach either significantly enhances watermark robustness while maintaining comparable text quality, or achieves similar robustness while improving text quality. Even under severe semantic distortions, including word deletion, synonym substitution, polishing, and re-translation, the watermark remains detectable.
Source journal metrics
CiteScore: 13.80
Self-citation rate: 27.40%
Articles per year: 660
Review time: 5 months
About the journal: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. It encourages submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display, as well as contributions in processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; storage, retrieval, indexing, and search; and hardware and software design and implementation.