Deyu Tong, Hongxin Han, Can Li, Fengting Wang, Weilong Kong, Na Ren
{"title":"ConZWNet:一种基于对比学习的高鲁棒性和可分辨性的零水印网络","authors":"Deyu Tong , Hongxin Han , Can Li , Fengting Wang , Weilong Kong , Na Ren","doi":"10.1016/j.jisa.2025.104139","DOIUrl":null,"url":null,"abstract":"<div><div>Zero-watermarking is an effective solution for image copyright protection without altering the original content. However, current deep learning-based methods suffer from two key limitations. First, most feature extraction networks, originally designed for classification, lack robust feature learning essential for resisting attacks. Second, conventional methods seldom incorporate the generated watermark back into training, missing opportunities to further optimize the model. To address these issues, we propose ConZWNet, a two-stage framework that integrates contrastive learning with feedback-driven zero-watermark generation. In the first stage, we use ConvNeXt to learn invariant, attack-resistant features via contrastive learning on weak–strong augmentation. In the second stage, a residual network coupled with a Multi-Layer Perceptron (MLP) fuses features from host and copyright images to produce a latent zero-watermark, which is then verified by an MLP-based copyright identification network. This feedback loop optimizes feature fusion and transforms zero-watermark generation into a self-supervised process. Extensive experiments demonstrate that ConZWNet achieves state-of-the-art robustness against various attacks while ensuring high distinguishability among host images and copyrights. Ablation studies confirm the effectiveness of components, including two-stage architecture, contrastive learning, weak–strong augmentation, and copyright identification network. The source code is publicly available at <span><span>https://github.com/hanhongxin1028/ConZWNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"93 ","pages":"Article 104139"},"PeriodicalIF":3.7000,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ConZWNet: A contrastive learning-based zero-watermarking network for high robustness and distinguishability\",\"authors\":\"Deyu Tong , Hongxin Han , Can Li , Fengting Wang , Weilong Kong , Na Ren\",\"doi\":\"10.1016/j.jisa.2025.104139\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Zero-watermarking is an effective solution for image copyright protection without altering the original content. However, current deep learning-based methods suffer from two key limitations. First, most feature extraction networks, originally designed for classification, lack robust feature learning essential for resisting attacks. Second, conventional methods seldom incorporate the generated watermark back into training, missing opportunities to further optimize the model. To address these issues, we propose ConZWNet, a two-stage framework that integrates contrastive learning with feedback-driven zero-watermark generation. In the first stage, we use ConvNeXt to learn invariant, attack-resistant features via contrastive learning on weak–strong augmentation. In the second stage, a residual network coupled with a Multi-Layer Perceptron (MLP) fuses features from host and copyright images to produce a latent zero-watermark, which is then verified by an MLP-based copyright identification network. 
This feedback loop optimizes feature fusion and transforms zero-watermark generation into a self-supervised process. Extensive experiments demonstrate that ConZWNet achieves state-of-the-art robustness against various attacks while ensuring high distinguishability among host images and copyrights. Ablation studies confirm the effectiveness of components, including two-stage architecture, contrastive learning, weak–strong augmentation, and copyright identification network. The source code is publicly available at <span><span>https://github.com/hanhongxin1028/ConZWNet</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":48638,\"journal\":{\"name\":\"Journal of Information Security and Applications\",\"volume\":\"93 \",\"pages\":\"Article 104139\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-06-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Information Security and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2214212625001760\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Security and Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214212625001760","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
ConZWNet: A contrastive learning-based zero-watermarking network for high robustness and distinguishability
Zero-watermarking is an effective solution for image copyright protection without altering the original content. However, current deep learning-based methods suffer from two key limitations. First, most feature extraction networks, originally designed for classification, lack the robust feature learning essential for resisting attacks. Second, conventional methods seldom incorporate the generated watermark back into training, missing opportunities to further optimize the model. To address these issues, we propose ConZWNet, a two-stage framework that integrates contrastive learning with feedback-driven zero-watermark generation. In the first stage, we use ConvNeXt to learn invariant, attack-resistant features via contrastive learning on weak- and strong-augmented views. In the second stage, a residual network coupled with a Multi-Layer Perceptron (MLP) fuses features from host and copyright images to produce a latent zero-watermark, which is then verified by an MLP-based copyright identification network. This feedback loop optimizes feature fusion and transforms zero-watermark generation into a self-supervised process. Extensive experiments demonstrate that ConZWNet achieves state-of-the-art robustness against various attacks while ensuring high distinguishability among host images and copyright images. Ablation studies confirm the effectiveness of its components, including the two-stage architecture, contrastive learning, weak–strong augmentation, and the copyright identification network. The source code is publicly available at https://github.com/hanhongxin1028/ConZWNet.
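As a concrete illustration of the two-stage design described in the abstract, the sketch below outlines how such a pipeline could be wired up in PyTorch. It is a minimal, hypothetical rendering: the module names, feature dimensions, the NT-Xent contrastive objective, and the use of torchvision's ConvNeXt-Tiny are assumptions made for readability, not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical PyTorch sketch of a two-stage zero-watermarking pipeline.
# Module names, dimensions, and the NT-Xent loss are illustrative assumptions,
# not taken from the authors' ConZWNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import convnext_tiny


class ContrastiveEncoder(nn.Module):
    """Stage 1: ConvNeXt backbone trained with contrastive learning on
    weak/strong augmented views of the same host image."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        backbone = convnext_tiny(weights=None)
        backbone.classifier[2] = nn.Identity()      # drop the classification layer
        self.backbone = backbone                    # now outputs 768-dim features
        self.proj = nn.Sequential(                  # projection head for the loss
            nn.Linear(768, 256), nn.ReLU(inplace=True), nn.Linear(256, feat_dim))

    def forward(self, x):
        return F.normalize(self.proj(self.backbone(x)), dim=1)


def nt_xent(z_weak, z_strong, temperature: float = 0.5):
    """One common contrastive objective (NT-Xent); the paper's exact loss may differ."""
    n = z_weak.size(0)
    z = torch.cat([z_weak, z_strong], dim=0)                   # (2N, D), L2-normalized
    sim = z @ z.t() / temperature                              # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))                 # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                       # each view's positive is its counterpart


class ZeroWatermarkGenerator(nn.Module):
    """Stage 2: fuse host-image and copyright-image features through a small
    residual MLP to produce a latent zero-watermark (dimensions are assumptions)."""
    def __init__(self, feat_dim: int = 128, wm_dim: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(2 * feat_dim, wm_dim),
                                  nn.ReLU(inplace=True), nn.Linear(wm_dim, wm_dim))
        self.skip = nn.Linear(2 * feat_dim, wm_dim)            # residual connection

    def forward(self, host_feat, copyright_feat):
        h = torch.cat([host_feat, copyright_feat], dim=1)
        return torch.sigmoid(self.fuse(h) + self.skip(h))      # latent zero-watermark


class CopyrightIdentifier(nn.Module):
    """Feedback branch: an MLP that scores whether a zero-watermark matches a
    given host image, closing the self-supervised loop."""
    def __init__(self, feat_dim: int = 128, wm_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + wm_dim, 128),
                                 nn.ReLU(inplace=True), nn.Linear(128, 1))

    def forward(self, host_feat, watermark):
        return self.mlp(torch.cat([host_feat, watermark], dim=1))  # match logit
```

In this reading, the stage-2 identification logit supplies the feedback signal: its loss is back-propagated into the fusion network, so the zero-watermark is optimized in a self-supervised fashion without ever modifying the host image.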
Journal Introduction:
Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications relevant to information security and its applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view of modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.