Cascade Ownership Verification Framework Based on Invisible Watermark for Model Copyright Protection
Ruoxi Wang, Yujia Zhu, Xia Daoxun
Concurrency and Computation: Practice and Experience, vol. 37, no. 4-5, published 2025-02-10
DOI: 10.1002/cpe.8394 (https://onlinelibrary.wiley.com/doi/10.1002/cpe.8394)
Citations: 0
Abstract
Successfully training a model requires substantial computational resources, careful model design, and considerable expense, which means a well-trained model holds significant commercial value. Protecting a trained Deep Neural Network (DNN) model from Intellectual Property (IP) infringement has therefore become a matter of intense recent concern. In particular, embedding and verifying watermarks in black-box models without access to internal model parameters, while ensuring the robustness and invisibility of the watermark, remains a challenging problem. Unlike many existing methods, we propose a cascade ownership verification framework based on invisible watermarks, focused on effectively protecting the copyright of watermarked models in black-box settings and detecting infringement by unauthorized users. The framework consists of two parts: watermark generation and copyright verification. In the watermark generation phase, watermarked samples are generated from key samples and label images. The difference between watermarked samples and key samples is imperceptible, yet a specific identifier is injected into the watermarked samples, leaving a backdoor as an entry point for copyright verification. The copyright verification phase employs hypothesis testing to raise the confidence level of the verification. Experiments were conducted on several popular deep learning models in image classification tasks based on the MNIST, CIFAR-10, and CIFAR-100 datasets. The results show that the framework protects model copyrights securely and effectively and demonstrates strong robustness against pruning and fine-tuning attacks.
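The two phases described above can be sketched in miniature. The paper's actual generation scheme and test statistic are not detailed in this abstract, so the following is a minimal illustrative sketch under two assumptions: watermark embedding is approximated by faintly blending a trigger (label image) into a key sample, and black-box verification is modeled as a one-sided binomial test on how often the suspect model predicts the watermark label on trigger inputs. All function names and parameters here are hypothetical.

```python
import math

def make_watermarked_sample(key_sample, label_image, alpha=0.03):
    """Hypothetical embedding: blend a faint identifier into a key sample.

    A small alpha keeps the watermarked sample visually close to the key
    sample (imperceptible difference) while still carrying the trigger.
    Inputs are flat lists of pixel intensities for simplicity.
    """
    return [(1 - alpha) * k + alpha * t
            for k, t in zip(key_sample, label_image)]

def verify_ownership(matches, trials, baseline=0.1, significance=0.01):
    """Hypothetical black-box verification via a one-sided binomial test.

    H0: the suspect model is independent of the watermark, so it predicts
    the watermark label on trigger inputs with probability <= baseline
    (e.g. 1/num_classes for CIFAR-10). A tiny p-value rejects H0, which
    supports the ownership claim.
    """
    # p-value = P(X >= matches) for X ~ Binomial(trials, baseline)
    p_value = sum(
        math.comb(trials, k) * baseline**k * (1 - baseline)**(trials - k)
        for k in range(matches, trials + 1)
    )
    return p_value < significance, p_value
```

For example, if a suspect model returns the watermark label on 95 of 100 trigger queries while a 10-class baseline would expect about 10, the test rejects independence with overwhelming confidence; 12 of 100 would not be significant. Only prediction labels are needed, consistent with the black-box setting.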
Journal description:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.