Authors: Shen Lin; Xiaoyu Zhang; Xu Ma; Xiaofeng Chen; Willy Susilo
DOI: 10.1109/TNSE.2025.3553673
Journal: IEEE Transactions on Network Science and Engineering, vol. 12, no. 4, pp. 2758-2769
Publication date: 2025-03-21
URL: https://ieeexplore.ieee.org/document/10937112/
DeepAW: A Customized DNN Watermarking Scheme Against Unreliable Participants
Training DNNs requires large amounts of labeled data, costly computational resources, and substantial human effort, making such models a valuable commodity. In collaborative learning scenarios, unreliable participants are widespread because the data collected from diverse end-users differs in quality and quantity. Notably, sharing the trained model without accounting for each participant's contribution to the collaborative training process can undermine the collaboration itself. In this paper, we propose DeepAW, a customized DNN watermarking scheme that safeguards model ownership while achieving robustness to model stealing attacks and collaborative fairness in the presence of unreliable participants. Specifically, DeepAW exploits a tight binding between the embedded watermark and the model's performance to defend against model stealing attacks: any attempt to modify the watermark causes a sharp decline in model performance. DeepAW achieves collaborative fairness by detecting unreliable participants and customizing the model's performance according to each participant's contribution. Furthermore, we evaluate DeepAW against three model stealing attacks and four types of unreliable participants. The experimental results demonstrate the effectiveness, robustness, and collaborative fairness of DeepAW.
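The abstract does not give implementation details, but the two core ideas — verifying ownership through an embedded watermark and allotting model quality by contribution — can be illustrated with a generic sketch. The functions below are hypothetical and follow common patterns in the DNN watermarking literature (trigger-set verification, contribution-weighted sharing); the paper's actual DeepAW construction may differ.

```python
def verify_ownership(model_predict, trigger_set, threshold=0.9):
    """Generic trigger-set ownership check (illustrative, not DeepAW's
    exact mechanism): claim ownership if the suspect model reproduces
    the owner-chosen labels on a secret trigger set with high accuracy.

    model_predict: callable mapping an input to a predicted label.
    trigger_set:   list of (input, owner_chosen_label) pairs.
    """
    hits = sum(model_predict(x) == y for x, y in trigger_set)
    return hits / len(trigger_set) >= threshold


def customized_share(contributions, base_acc=0.50, top_acc=0.95):
    """Illustrative fairness heuristic: allot each participant a model
    whose accuracy scales linearly with their normalized contribution.
    The paper's exact customization rule is not stated in the abstract.

    contributions: dict mapping participant id -> contribution score.
    Returns a dict mapping participant id -> target accuracy.
    """
    top = max(contributions.values())
    return {p: base_acc + (top_acc - base_acc) * c / top
            for p, c in contributions.items()}
```

For example, a model that memorized the trigger labels passes `verify_ownership`, while an independently trained model almost certainly fails; and under `customized_share`, the top contributor receives the full-accuracy model while low contributors receive degraded versions.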
Journal introduction:
The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles on the theory and applications of network science and on the interconnections among the elements of a system that form a network. In particular, TNSE publishes articles on the understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims to discover common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus of TNSE is the interactions between, and co-evolution of, different genres of networks.