Title: A Multi-view Confidence-calibrated Framework for Fair and Stable Graph Representation Learning
Authors: Xu Zhang, Liang Zhang, Bo Jin, Xinjiang Lu
Venue: 2021 IEEE International Conference on Data Mining (ICDM)
Publication date: 2021-12-01
DOI: https://doi.org/10.1109/ICDM51629.2021.00194
Cited by: 3
Abstract
Graph Neural Networks (GNNs) are prone to adversarial attacks and discriminatory biases. State-of-the-art methods typically adopt a perturbation-invariant consistency regularization strategy without accounting for inherent prediction uncertainty, which can lead to overconfident yet incorrect predictions under deliberate attacks on the graph topology or node features. Moreover, operating on the complete graph structure is biased toward global-level graph noise and incurs severe computational costs. In this work, we develop a multi-view confidence-calibrated framework, called MCCNIFTY, for unified fair and stable graph representation learning. At its core is a multi-view uncertainty-aware node embedding learning module derived from evidential theory, comprising intra-view evidence calibration, inter-view evidence fusion, and an uncertainty-aware message-passing process in a GNN architecture, which simultaneously optimizes for counterfactual fairness and stability at the subgraph level. Experimental results on three real-world datasets demonstrate that our method adequately captures inherent uncertainties while improving fairness and stability via subgraph-induced multi-view confidence calibration.
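The intra-view calibration and inter-view fusion described above build on evidential (subjective-logic) theory. The paper itself does not publish code here, but the standard evidential machinery it invokes can be sketched: class evidence parameterizes a Dirichlet distribution, yielding per-view belief masses and an explicit uncertainty mass, and two views are then merged with the reduced Dempster combination rule common in evidential multi-view learning. The evidence vectors and function names below are illustrative assumptions, not MCCNIFTY's actual implementation.

```python
import numpy as np

def evidence_to_opinion(evidence):
    # Subjective-logic mapping used in evidential deep learning:
    # a Dirichlet with parameters alpha_k = e_k + 1 over K classes gives
    # belief b_k = e_k / S and uncertainty u = K / S, with S = sum_k alpha_k.
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    S = evidence.sum() + K
    return evidence / S, K / S

def fuse_two_views(b1, u1, b2, u2):
    # Reduced Dempster combination of two subjective opinions (one per view):
    # conflict C = sum_{i != j} b1_i * b2_j, then
    # b_k = (b1_k*b2_k + b1_k*u2 + b2_k*u1) / (1 - C),  u = u1*u2 / (1 - C).
    C = b1.sum() * b2.sum() - np.sum(b1 * b2)  # mass placed on conflicting classes
    b = (b1 * b2 + b1 * u2 + b2 * u1) / (1.0 - C)
    u = (u1 * u2) / (1.0 - C)
    return b, u

# Two hypothetical views of the same node: one confident, one nearly uninformative.
b1, u1 = evidence_to_opinion([8.0, 1.0, 1.0])
b2, u2 = evidence_to_opinion([1.0, 1.0, 1.0])
b, u = fuse_two_views(b1, u1, b2, u2)
```

A useful sanity check on this formulation is that the fused masses remain a valid opinion (beliefs plus uncertainty sum to one), and fusing a confident view with an uncertain one lowers the overall uncertainty while preserving the confident view's predicted class.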