Hybrid two-level protection system for preserving pre-trained DNN models ownership
Alaa Fkirin, Ahmed Samy Moursi, Gamal Attiya, Ayman El-Sayed, Marwa A. Shouman
Neural Computing and Applications, published 2024-08-28. DOI: 10.1007/s00521-024-10304-0
Recent advancements in deep neural networks (DNNs) have made them indispensable for numerous commercial applications, including healthcare systems and self-driving cars. Training a DNN model typically demands substantial time, vast datasets and high computational cost, yet these valuable models face significant risks: attackers can steal and resell pre-trained DNN models for profit, and once sold they are easily copied and redistributed. Unauthorised sharing of these models therefore poses a serious threat, and a well-built pre-trained DNN model is a valuable asset that requires protection. This paper introduces a robust hybrid two-level protection system for safeguarding the ownership of pre-trained DNN models. The first level employs zero-bit watermarking; the second level embeds the watermark as an adversarial perturbation. The robustness of the proposed system is evaluated against seven types of attack: the Fast Gradient Method, Auto Projected Gradient Descent, Auto Conjugate Gradient, the Basic Iterative Method, the Momentum Iterative Method, the Square Attack and AutoAttack. The proposed two-level protection system withstands all seven attack types while maintaining accuracy, and it surpasses current state-of-the-art methods.
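The abstract does not detail the embedding procedure, but the second-level idea (an adversarial perturbation serving as an ownership watermark) can be sketched. The PyTorch snippet below crafts an FGSM-style trigger set and performs a zero-bit presence check on a suspect model; the function names, the epsilon value and the 0.9 agreement threshold are illustrative assumptions, not the authors' published method.

```python
# Minimal sketch (not the paper's implementation): adversarially perturbed
# inputs used as a secret trigger set for zero-bit ownership verification.
import torch
import torch.nn.functional as F

def craft_trigger_set(model, x, y, eps=0.03):
    """FGSM-style perturbation: x_adv = x + eps * sign(dL/dx).
    The resulting (x_adv, target) pairs act as the owner's watermark keys.
    `eps` is an assumed perturbation budget."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    targets = model(x_adv).argmax(dim=1)  # labels the owner's model assigns
    return x_adv, targets

def verify_ownership(suspect_model, x_adv, targets, threshold=0.9):
    """Zero-bit check: claim ownership if the suspect model agrees with the
    recorded trigger labels more often than `threshold` (assumed value)."""
    with torch.no_grad():
        agreement = (suspect_model(x_adv).argmax(dim=1) == targets).float().mean()
    return agreement.item() >= threshold
```

In practice the trigger inputs and their recorded labels would be kept secret (or registered with a trusted third party); the sketch only shows the verification mechanics, not the paper's robustness measures against the seven evaluated attacks.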