Yujia Zhu, Jia Luo, Ruoxi Wang, Xiaojie Du, Daoxun Xia
DOI: 10.1002/cpe.70076
Journal: Concurrency and Computation: Practice and Experience, vol. 37, no. 9-11
Published: 2025-04-09 (Journal Article)
Article URL: https://onlinelibrary.wiley.com/doi/10.1002/cpe.70076
Multiuser Hierarchical Authorization Using Sparsity Polarization Pruning for Model Active Protection
Artificial intelligence technology is rapidly penetrating ever more fields of socioeconomic development, becoming an important force driving innovation, empowering a wide range of industries, and at the same time raising challenges such as security governance. Deployed deep neural network models must implement hierarchical access based on user permissions, both to prevent unauthorized users from accessing and abusing the model and to prevent malicious attackers from tampering with or damaging it, thereby reducing its vulnerabilities and security risks. To address this issue, the model provider must implement a hierarchical authorization policy that grants users access according to their specific needs while ensuring that unauthorized users cannot use the model at all. Common methods for hierarchical model authorization include pruning and encryption, but existing techniques incur high computational complexity and produce unclear separation between authorization levels. In this article, we propose a sparsity polarization pruning approach for layered authorization, which combines sparsity regularization to filter out insignificant channels with a polarization technique that clusters critical channels into distinct intervals. By pruning channels according to the polarized scaling factors of the batch normalization (BN) layers, our method dynamically adjusts model precision to match a user's authorization level. First, we extract the scaling factor of each BN layer to assess the importance of each channel. A sparsity regularizer is then applied to filter out irrelevant scaling factors. To sharpen the boundaries between pruning intervals, we use a polarization technique to induce clustering of the scaling factors. On this basis, we propose multiuser hierarchical authorization using sparsity polarization pruning for active model protection.
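As a rough illustration of the regularization step described above, the sketch below evaluates a sparsity-polarization penalty on a layer's BN scaling factors. The specific form used here (an L1 sparsity term minus a mean-deviation polarization term, as in common polarization-pruning formulations) and the hyperparameter `t` are assumptions for illustration; the paper's exact loss may differ.

```python
def polarization_penalty(gammas, t=1.2):
    """Sparsity-polarization penalty on BN scaling factors (illustrative form).

    The L1 term pushes all scaling factors toward zero (sparsity), while
    subtracting the mean-deviation term rewards factors that move away
    from the mean, polarizing channels into clearly important and clearly
    unimportant groups. `t` (assumed hyperparameter) balances the two terms.
    """
    n = len(gammas)
    mean = sum(gammas) / n
    l1 = sum(abs(g) for g in gammas)              # sparsity term
    spread = sum(abs(g - mean) for g in gammas)   # polarization term
    return t * l1 - spread


# A polarized set of scaling factors (values pushed toward 0 or 1)
# incurs a lower penalty than a uniform set with the same L1 mass.
uniform = [0.5, 0.5, 0.5, 0.5]
polarized = [1.0, 1.0, 0.0, 0.0]
print(polarization_penalty(uniform), polarization_penalty(polarized))
```

Under this form, minimizing the penalty during training drives the scaling factors toward well-separated clusters, which is what makes the subsequent interval-based pruning unambiguous.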
Based on the grading requirements, we prune the channels corresponding to varying numbers of significant scaling factors. Access is granted at different levels depending on the precision key provided by the user, ensuring a secure and efficient means of accessing the model's resources. Experimental results demonstrate that our approach achieves superior grading performance across three datasets and two different neural networks, showing its broad applicability. Moreover, it achieves effective grading by pruning only a small portion of the channels, making it highly efficient.
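The level-dependent pruning described above can be sketched as ranking channels by their BN scaling factors and retaining more of them at higher authorization levels. The function name, the keep-ratio mapping, and the idea of returning retained channel indices are purely illustrative assumptions, not the paper's actual scheme:

```python
def channels_for_level(gammas, keep_ratios, level):
    """Return indices of channels retained at a given authorization level.

    gammas      -- BN scaling factors, one per channel (importance scores)
    keep_ratios -- assumed mapping: authorization level -> fraction kept
    level       -- the level implied by the user's precision key
    """
    # Rank channels by scaling factor; higher factors mean more important
    # channels, which every authorization level gets first.
    ranked = sorted(range(len(gammas)), key=lambda i: gammas[i], reverse=True)
    k = max(1, int(len(gammas) * keep_ratios[level]))
    return sorted(ranked[:k])


# Hypothetical example: 5 channels, 3 authorization levels.
gammas = [0.9, 0.05, 0.7, 0.02, 0.4]
keep_ratios = {0: 0.4, 1: 0.8, 2: 1.0}
print(channels_for_level(gammas, keep_ratios, 0))  # low-level users
print(channels_for_level(gammas, keep_ratios, 2))  # fully authorized users
```

Because each level's retained set is a superset of the levels below it, model precision degrades gracefully for lower authorization levels while fully authorized users recover the unpruned network.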
Journal introduction:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.