FS-GNN: Improving Fairness in Graph Neural Networks via Joint Sparsification

Jiaxu Zhao, Tianjin Huang, Shiwei Liu, Jie Yin, Yulong Pei, Meng Fang, Mykola Pechenizkiy

Neurocomputing, Volume 648, Article 130641. Published 2025-06-12. DOI: 10.1016/j.neucom.2025.130641
Cited by: 0
Abstract
Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing graph-structured data, but their widespread adoption in critical applications is hindered by inherent biases related to sensitive attributes such as gender and race. While existing debiasing approaches typically focus on either modifying input graphs or incorporating fairness constraints into model objectives, we propose Fair Sparse GNN (FS-GNN), a novel framework that simultaneously enhances fairness and efficiency through joint sparsification of both input graphs and model architectures. Our approach iteratively identifies and removes less informative edges from input graphs while pruning redundant weights from the GNN model, guided by carefully designed fairness-aware objective functions. Through extensive experiments on real-world datasets, we demonstrate that FS-GNN achieves superior fairness metrics (reducing Statistical Parity from 7.94 to 0.6) while maintaining competitive prediction accuracy compared to state-of-the-art methods. Additionally, our theoretical analysis reveals distinct fairness implications of graph versus architecture sparsification, providing insights for future fairness-aware GNN designs. The proposed method not only advances fairness in GNNs but also offers substantial computational benefits through reduced model complexity, with FLOPs reductions ranging from 24% to 67%.
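The abstract describes the procedure only at a high level; below is a minimal, hypothetical PyTorch sketch of such a joint-sparsification loop, assuming binary node classification with a binary sensitive attribute and simple magnitude-based pruning as the sparsification criterion. This is an illustrative reconstruction, not the authors' implementation; all names (TinyGCN, fair_sparsify_round, lam, edge_frac, weight_frac) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    """One-layer GCN; `adj` is a dense (possibly pruned) adjacency matrix."""
    def __init__(self, in_dim, out_dim=1):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(1).clamp(min=1.0)           # simple mean aggregation
        return self.lin(adj @ x / deg.unsqueeze(1))

def statistical_parity(logits, sens):
    """|P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|, computed on soft predictions
    so the term stays differentiable inside the fairness-aware objective."""
    p = torch.sigmoid(logits).squeeze(-1)
    return (p[sens == 0].mean() - p[sens == 1].mean()).abs()

def prune_lowest(tensor, mask, frac):
    """Zero out the lowest-magnitude `frac` of entries still active in `mask`."""
    active = mask.bool()
    k = int(frac * int(active.sum()))
    if k == 0:
        return mask
    thresh = tensor[active].abs().kthvalue(k).values
    new_mask = mask.clone()
    new_mask[active & (tensor.abs() <= thresh)] = 0.0
    return new_mask

def fair_sparsify_round(model, opt, x, adj, y, sens,
                        lam=0.5, edge_frac=0.05, weight_frac=0.05):
    # 1. One training step under the fairness-regularised objective.
    opt.zero_grad()
    logits = model(x, adj)
    loss = (F.binary_cross_entropy_with_logits(logits.squeeze(-1), y.float())
            + lam * statistical_parity(logits, sens))
    loss.backward()
    opt.step()
    # 2. Joint sparsification: drop low-magnitude edges and model weights.
    with torch.no_grad():
        adj.mul_(prune_lowest(adj, (adj != 0).float(), edge_frac))
        for p in model.parameters():
            if p.dim() > 1:  # prune weight matrices, keep biases dense
                p.mul_(prune_lowest(p, (p != 0).float(), weight_frac))
    return loss.item()

# Example usage on random data (purely illustrative):
# x = torch.randn(100, 16); y = torch.randint(0, 2, (100,))
# sens = torch.randint(0, 2, (100,))
# adj = (torch.rand(100, 100) < 0.05).float()
# model = TinyGCN(16); opt = torch.optim.Adam(model.parameters(), lr=1e-2)
# for _ in range(20):
#     fair_sparsify_round(model, opt, x, adj, y, sens)
```

Repeating such rounds gradually removes edges and weights until a target sparsity is reached, while the regularizer keeps the statistical-parity gap small; the actual FS-GNN objective and edge-scoring rule may differ from this magnitude-based stand-in.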
About the Journal
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing, covering its theory, practice, and applications.