Multiuser Hierarchical Authorization Using Sparsity Polarization Pruning for Model Active Protection

IF 1.5 · CAS Region 4 (Computer Science) · JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING
Yujia Zhu, Jia Luo, Ruoxi Wang, Xiaojie Du, Daoxun Xia
DOI: 10.1002/cpe.70076
Journal: Concurrency and Computation: Practice and Experience, Vol. 37, Issue 9-11
Published: 2025-04-09 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1002/cpe.70076
Citations: 0

Abstract


Artificial intelligence technology is rapidly penetrating every field of socioeconomic development, becoming an important force driving innovation and empowering many industries, while also raising challenges such as security governance. Deployed deep neural network models should enforce hierarchical access based on user permissions, preventing unauthorized users from accessing or abusing the model and preventing malicious attackers from tampering with or damaging it, thereby reducing its vulnerabilities and security risks. To this end, the model provider must implement a hierarchical authorization policy that grants each user access appropriate to their specific needs while ensuring that unauthorized users cannot use the model at all. Common techniques for hierarchical model authorization include pruning and encryption, but existing approaches incur high computational complexity and produce poorly separated authorization levels.

In this article, we propose a sparsity polarization pruning approach for layered authorization that combines sparsity regularization, which filters out insignificant channels, with a polarization technique that clusters critical channels into distinct intervals. By pruning channels according to the polarized scaling factors of the batch normalization (BN) layers, our method dynamically adjusts model precision to match user authorization levels. First, we extract the BN scaling factors to assess the importance of each channel. A sparsity regularizer is then applied to suppress irrelevant scaling factors. To sharpen the boundaries between pruning intervals, we use a polarization technique that induces clustering of the scaling factors. On this basis, we propose multiuser hierarchical authorization using sparsity polarization pruning for model active protection: guided by the grading requirements, we prune the channels corresponding to varying numbers of significant scaling factors, so that access is granted at different levels depending on the precision key supplied by the user, providing a secure and efficient means of accessing the model's resources.

Experimental results demonstrate that our approach achieves superior grading performance across three datasets and two different neural networks, showing its broad applicability. Moreover, it achieves effective grading by pruning only a small portion of the channels, making it highly efficient.
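The pipeline the abstract describes — score channels by their BN scaling factors, polarize the factors into a near-zero cluster and a significant cluster, then prune down to a level-dependent channel budget — can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the penalty form (an L1 sparsity term minus a mean-deviation term, following the general polarization-regularizer idea) and the per-level `keep_ratio` values are assumptions made for illustration only.

```python
import numpy as np

def polarization_penalty(gammas: np.ndarray, t: float = 1.0) -> float:
    """Regularizer on the BN scaling factors of one layer.

    The L1 term pushes every factor toward zero (sparsity), while the
    subtracted mean-deviation term rewards factors that move away from
    the layer mean -- splitting them into a near-zero cluster (prunable)
    and a large cluster (kept).
    """
    return t * np.abs(gammas).sum() - np.abs(gammas - gammas.mean()).sum()

def prune_mask(gammas: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Boolean keep-mask for one layer at a given authorization level.

    A higher authorization level maps to a larger keep_ratio, so a
    higher-level user runs a model with more surviving channels and
    hence higher precision.
    """
    k = max(1, int(round(keep_ratio * gammas.size)))
    order = np.argsort(-np.abs(gammas))  # channels by importance, descending
    mask = np.zeros(gammas.size, dtype=bool)
    mask[order[:k]] = True
    return mask

# Toy demo: six channels, three hypothetical authorization levels.
gammas = np.array([0.90, 0.05, 0.70, 0.01, 0.40, 0.02])
levels = {"basic": 0.34, "standard": 0.67, "full": 1.0}
masks = {name: prune_mask(gammas, r) for name, r in levels.items()}
```

Because every level keeps a top-|γ| prefix of the same importance ranking, the per-level channel sets are nested: the basic-level model is a sub-network of the standard-level one, which is what lets a single set of weights serve several precision tiers.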

Source journal: Concurrency and Computation-Practice & Experience (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 5.00
Self-citation rate: 10.00%
Annual articles: 664
Review time: 9.6 months
Journal description: Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of: Parallel and distributed computing; High-performance computing; Computational and data science; Artificial intelligence and machine learning; Big data applications, algorithms, and systems; Network science; Ontologies and semantics; Security and privacy; Cloud/edge/fog computing; Green computing; and Quantum computing.