Auth-Graph: GenAI-empowered attribute-masked backdoor for on-demand authorizable graph learning

IF 14.7 · Q1 (CAS Tier 1) · Computer Science, Artificial Intelligence
Xiao Yang, Gaolei Li, Kai Zhou, Yuni Lai, Jianhua Li
Journal: Information Fusion, Vol. 124, Article 103375
DOI: 10.1016/j.inffus.2025.103375
Published: 2025-06-19
URL: https://www.sciencedirect.com/science/article/pii/S1566253525004488
Citations: 0

Abstract

Owing to its ability to fuse non-Euclidean node-edge information, Graph Learning (GL) is widely applied to tasks such as web recommendation, community detection, and molecular classification. Current GL paradigms strongly emphasize absolute fairness and impartiality toward all clients. This limits their flexibility and adaptability in scenarios that demand customizable model queries (e.g., access control and intellectual property protection), where authorizable GL models face non-trivial obstacles to realization. Inspired by Generative Artificial Intelligence (GenAI), we propose Auth-Graph, the first authorizable GL methodology built on an in-model access control mechanism. Specifically, Auth-Graph employs a generative perturbation-driven backdoor to realize authorizable access. The backdoor is activated exclusively by correctly masked and perturbed inputs, which yield accurate results, whereas all other inputs cause the GL model to produce erroneous outputs. Moreover, to strengthen compatibility and support multi-user operation, the masking mechanism works with a generative masker only for authorized users holding valid tokens, each user's token being uniquely distinct. Empirical results across benchmark GL models and datasets show that Auth-Graph robustly prevents unauthorized access (average accuracy of 3.68%) while allowing legitimate users to obtain standard outputs (average accuracy drop of only 3.45%).
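The access-control behavior described in the abstract (correct outputs only for inputs carrying a token-derived perturbation) can be illustrated with a highly simplified toy sketch. Everything below is an illustrative assumption, not the paper's method: `token_to_mask` is a hypothetical stand-in for the generative masker, `ToyAuthModel` merely simulates the backdoor's input-conditional behavior with a similarity check rather than learning it, and the threshold is arbitrary.

```python
import hashlib
import numpy as np

DIM = 128  # toy node-attribute dimensionality (assumption)

def token_to_mask(token: str, dim: int = DIM) -> np.ndarray:
    """Derive a deterministic per-user perturbation from a token
    (a crude stand-in for the paper's generative masker)."""
    seed = int.from_bytes(hashlib.sha256(token.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class ToyAuthModel:
    """Simulates the *behavior* of an attribute-masked backdoor:
    predictions are correct only when the secret perturbation is
    present in the input attributes; all other inputs yield a
    deliberately wrong label."""
    def __init__(self, secret_mask: np.ndarray, threshold: float = 0.8):
        self.secret_mask = secret_mask
        self.threshold = threshold

    def predict(self, clean_attrs: np.ndarray, attrs: np.ndarray,
                true_label: int) -> int:
        residual = attrs - clean_attrs
        norm = np.linalg.norm(residual)
        if norm == 0:
            return (true_label + 1) % 2  # unperturbed input -> erroneous output
        # Cosine similarity between the residual and the secret mask;
        # the "backdoor" fires only for the correctly masked input.
        sim = float(residual @ self.secret_mask) / norm
        return true_label if sim > self.threshold else (true_label + 1) % 2

# Usage: an authorized user masks attributes with their token; any
# other perturbation (or none) triggers the erroneous-output path.
mask = token_to_mask("user-42-token")          # hypothetical token
model = ToyAuthModel(mask)
clean = np.zeros(DIM)                          # toy clean attributes
rng = np.random.default_rng(0)
print(model.predict(clean, clean + mask, true_label=1))                      # correct label
print(model.predict(clean, clean + rng.standard_normal(DIM), true_label=1))  # wrong label
```

A random perturbation has near-zero cosine similarity with the unit-norm mask in 128 dimensions, so only the token-derived mask clears the threshold, mirroring the authorized/unauthorized accuracy gap the abstract reports.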


Source journal: Information Fusion (Engineering/Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
About the journal: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.