DefGCL: Defence-enhanced graph contrastive learning against attribute inference attacks

Jinyin Chen, Fanyu Ao, Wenbo Mu, Haiyang Xiong
Applied Soft Computing, Vol. 185, Article 113911 (published 2025-09-18)
DOI: 10.1016/j.asoc.2025.113911
URL: https://www.sciencedirect.com/science/article/pii/S1568494625012244
Graph-structured data are prevalent in many real-world applications, such as social networks, drug discovery, and fraud detection. While Graph Neural Networks (GNNs) have shown remarkable performance by capturing rich relational patterns, their success often relies on large labeled datasets and raises growing privacy concerns. Graph Contrastive Learning (GCL) has emerged as a powerful unsupervised alternative that leverages data augmentations to learn robust representations without labeled data. However, recent studies reveal that GCL models are particularly vulnerable to attribute inference attacks, and existing works prioritize performance improvement over privacy protection. To address this issue, we propose a Defense-enhanced Graph Contrastive Learning framework, dubbed DefGCL, which integrates four coordinated defense strategies to enhance privacy without degrading utility. Specifically, DefGCL employs edge-based graph augmentations to limit exposure of structural attributes, selects negative samples with low attribute sensitivity scores to reduce leakage, modifies the contrastive loss to decouple graph embeddings from attributes, and injects differential privacy noise during the embedding stage. Extensive experiments on five benchmark datasets demonstrate that DefGCL achieves state-of-the-art (SOTA) performance in both privacy preservation and task accuracy. For instance, on the AIDS dataset, DefGCL reduces attribute inference accuracy by 35% while incurring only a 0.60% drop in main task performance. Additionally, DefGCL improves computational efficiency, reducing runtime by nearly 50% compared to baseline methods.
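The abstract names four defense strategies but gives no implementation details. As a rough illustration only, the sketch below shows standard formulations of two of them: edge-based graph augmentation (random edge dropping) and Gaussian differential-privacy noise added to embeddings. The function names, the drop rate, and the analytic Gaussian-mechanism noise scale are assumptions drawn from common practice, not the paper's actual code.

```python
import numpy as np


def edge_drop_augment(edges, drop_rate=0.2, rng=None):
    """Randomly drop a fraction of edges, a common edge-based graph
    augmentation that limits how much structure a view exposes."""
    rng = rng or np.random.default_rng()
    keep_mask = rng.random(len(edges)) >= drop_rate
    return [e for e, keep in zip(edges, keep_mask) if keep]


def dp_gaussian_noise(embedding, epsilon=1.0, delta=1e-5, sensitivity=1.0, rng=None):
    """Gaussian mechanism: perturb an embedding with noise of scale
    sigma = sqrt(2 * ln(1.25/delta)) * sensitivity / epsilon, the
    classic bound for (epsilon, delta)-differential privacy."""
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return embedding + rng.normal(0.0, sigma, size=embedding.shape)


if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    print(edge_drop_augment(edges, drop_rate=0.4, rng=np.random.default_rng(1)))
    z = np.zeros((4, 8))
    print(dp_gaussian_noise(z, rng=np.random.default_rng(1)).shape)
```

In a full pipeline these would be applied per training step: augmented views feed the contrastive objective, and the noise is injected at the embedding stage before any downstream (or adversarial) consumer sees the representation.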
Journal introduction:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. The focus is to publish the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore updated continuously with new articles, and publication times are short.