FS-GNN: Improving Fairness in Graph Neural Networks via Joint Sparsification

IF 5.5 | CAS Tier 2, Computer Science | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Jiaxu Zhao, Tianjin Huang, Shiwei Liu, Jie Yin, Yulong Pei, Meng Fang, Mykola Pechenizkiy
{"title":"FS-GNN: Improving Fairness in Graph Neural Networks via Joint Sparsification","authors":"Jiaxu Zhao ,&nbsp;Tianjin Huang ,&nbsp;Shiwei Liu ,&nbsp;Jie Yin ,&nbsp;Yulong Pei ,&nbsp;Meng Fang ,&nbsp;Mykola Pechenizkiy","doi":"10.1016/j.neucom.2025.130641","DOIUrl":null,"url":null,"abstract":"<div><div>Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing graph-structured data, but their widespread adoption in critical applications is hindered by inherent biases related to sensitive attributes such as gender and race. While existing debiasing approaches typically focus on either modifying input graphs or incorporating fairness constraints into model objectives, we propose Fair Sparse GNN (FS-GNN), a novel framework that simultaneously enhances fairness and efficiency through joint sparsification of both input graphs and model architectures. Our approach iteratively identifies and removes less informative edges from input graphs while pruning redundant weights from the GNN model, guided by carefully designed fairness-aware objective functions. Through extensive experiments on real-world datasets, we demonstrate that FS-GNN achieves superior fairness metrics (reducing Statistical Parity from 7.94 to 0.6) while maintaining competitive prediction accuracy compared to state-of-the-art methods. Additionally, our theoretical analysis reveals distinct fairness implications of graph versus architecture sparsification, providing insights for future fairness-aware GNN designs. The proposed method not only advances fairness in GNNs but also offers substantial computational benefits through reduced model complexity, with FLOPs reductions ranging from 24% to 67%.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130641"},"PeriodicalIF":5.5000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S092523122501313X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing graph-structured data, but their widespread adoption in critical applications is hindered by inherent biases related to sensitive attributes such as gender and race. While existing debiasing approaches typically focus on either modifying input graphs or incorporating fairness constraints into model objectives, we propose Fair Sparse GNN (FS-GNN), a novel framework that simultaneously enhances fairness and efficiency through joint sparsification of both input graphs and model architectures. Our approach iteratively identifies and removes less informative edges from input graphs while pruning redundant weights from the GNN model, guided by carefully designed fairness-aware objective functions. Through extensive experiments on real-world datasets, we demonstrate that FS-GNN achieves superior fairness metrics (reducing Statistical Parity from 7.94 to 0.6) while maintaining competitive prediction accuracy compared to state-of-the-art methods. Additionally, our theoretical analysis reveals distinct fairness implications of graph versus architecture sparsification, providing insights for future fairness-aware GNN designs. The proposed method not only advances fairness in GNNs but also offers substantial computational benefits through reduced model complexity, with FLOPs reductions ranging from 24% to 67%.
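For reference, Statistical Parity measures the gap in positive-prediction rates between sensitive groups, so the reported drop from 7.94 to 0.6 corresponds to near-equal positive rates across groups. Below is a minimal PyTorch sketch of the metric together with a fairness-aware loss of the kind the abstract alludes to; the `fairness_aware_loss` helper, its softmax surrogate, and the `lam` trade-off weight are illustrative assumptions, not the paper's exact objective functions.

```python
import torch
import torch.nn.functional as F

def statistical_parity(pred: torch.Tensor, sens: torch.Tensor) -> torch.Tensor:
    """Statistical Parity difference |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|.

    `pred` holds binary predictions and `sens` holds a binary sensitive
    attribute (e.g. gender); both are 1-D tensors over the evaluated nodes.
    """
    rate_0 = pred[sens == 0].float().mean()
    rate_1 = pred[sens == 1].float().mean()
    return (rate_0 - rate_1).abs()

def fairness_aware_loss(logits: torch.Tensor, labels: torch.Tensor,
                        sens: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Task loss plus a differentiable statistical-parity surrogate (a sketch).

    Hard predictions are replaced by positive-class probabilities so the
    fairness term stays differentiable; `lam` balances accuracy against
    fairness. This is an assumed form, not FS-GNN's published objective.
    """
    task = F.cross_entropy(logits, labels)
    p_pos = torch.softmax(logits, dim=-1)[:, 1]
    sp_proxy = (p_pos[sens == 0].mean() - p_pos[sens == 1].mean()).abs()
    return task + lam * sp_proxy
```

In a joint-sparsification setup such as the one described above, a fairness-aware objective of this kind would guide both which graph edges are removed and which model weights are pruned at each iteration; the precise criteria are defined in the paper.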
Source journal: Neurocomputing (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles published: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.