Auth-Graph: GenAI-empowered attribute-masked backdoor for on-demand authorizable graph learning
Xiao Yang, Gaolei Li, Kai Zhou, Yuni Lai, Jianhua Li
Information Fusion, Volume 124, Article 103375 (published 2025-06-19). DOI: 10.1016/j.inffus.2025.103375
Citations: 0
Abstract
Owing to its ability to fuse non-Euclidean node-edge information, Graph Learning (GL) is widely applied in domains such as web recommendation, community detection, and molecular classification. Current GL paradigms place heavy emphasis on absolute fairness and impartiality toward all clients, which limits their flexibility and adaptability in circumstances that demand customizable model queries (e.g., access control and intellectual property protection), where realizing authorizable GL models poses non-trivial obstacles. To overcome this limitation, and inspired by Generative Artificial Intelligence (GenAI), we propose Auth-Graph, the first authorizable GL methodology with an access control mechanism built into the model. Specifically, Auth-Graph employs a generative perturbation-driven backdoor to achieve authorizable access: the backdoor is activated exclusively by correctly masked and perturbed inputs, which yield accurate results, whereas all other inputs induce the GL model to produce erroneous outcomes. Moreover, to strengthen compatibility and support multi-user functionality, the masking mechanism operates correctly, via a generative masker, only for authorized users possessing valid tokens, with each user's token being uniquely distinct. Empirical results across benchmark GL models and datasets substantiate that Auth-Graph robustly prevents unauthorized access (unauthorized queries achieve only 3.68% average accuracy) while allowing legitimate users to attain standard outputs (an average accuracy drop of just 3.45%).
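To make the described mechanism concrete, the following is a minimal, hypothetical sketch of the idea: a token-conditioned generative masker perturbs node attributes, and a graph model is trained jointly so that only correctly masked inputs map to the true labels while unmasked inputs are driven toward erroneous ones. All names (TokenMasker, DenseGCN, training_step) and the specific loss terms are illustrative assumptions, not the authors' implementation.

```python
# Conceptual sketch (not the paper's code): a token-conditioned "generative masker"
# that perturbs node attributes, paired with a GCN whose correct behaviour is gated
# on that perturbation. All class/function names are illustrative assumptions.
import torch
import torch.nn as nn

class TokenMasker(nn.Module):
    """Maps a per-user secret token to a bounded additive attribute perturbation."""
    def __init__(self, token_dim: int, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(token_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim), nn.Tanh(),  # keep the perturbation bounded
        )

    def forward(self, x: torch.Tensor, token: torch.Tensor) -> torch.Tensor:
        # x: [N, feat_dim] node attributes; token: [token_dim] user secret
        delta = self.net(token)          # [feat_dim] perturbation derived from the token
        return x + delta                 # masked/perturbed node attributes

class DenseGCN(nn.Module):
    """Minimal two-layer GCN over a dense (row-normalized) adjacency matrix."""
    def __init__(self, feat_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.w1 = nn.Linear(feat_dim, hidden)
        self.w2 = nn.Linear(hidden, num_classes)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(adj @ self.w1(x))
        return adj @ self.w2(h)          # per-node class logits

def training_step(model, masker, adj, x, y, wrong_y, token, opt):
    """One joint step: masked inputs -> true labels, raw inputs -> erroneous labels.
    `wrong_y` stands in for whatever incorrect target the scheme assigns to
    unauthorized queries; the actual objective in the paper may differ."""
    opt.zero_grad()
    loss_fn = nn.CrossEntropyLoss()
    loss_auth = loss_fn(model(adj, masker(x, token)), y)   # authorized (masked) path
    loss_unauth = loss_fn(model(adj, x), wrong_y)          # unauthorized (raw) path
    (loss_auth + loss_unauth).backward()
    opt.step()

# Usage assumption: optimize both modules jointly, e.g.
# opt = torch.optim.Adam(list(model.parameters()) + list(masker.parameters()), lr=1e-3)
```

In a multi-user setting along these lines, each authorized user would hold a distinct token, and the masker would be trained so that every valid token yields an activating perturbation while arbitrary or unmasked inputs do not.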
About the Journal
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.