Fair Graph Mining

Jian Kang, Hanghang Tong
{"title":"Fair Graph Mining","authors":"Jian Kang, Hanghang Tong","doi":"10.1145/3459637.3482030","DOIUrl":null,"url":null,"abstract":"In today's increasingly connected world, graph mining plays a pivotal role in many real-world application domains, including social network analysis, recommendations, marketing and financial security. Tremendous efforts have been made to develop a wide range of computational models. However, recent studies have revealed that many widely-applied graph mining models could suffer from potential discrimination. Fairness on graph mining aims to develop strategies in order to mitigate bias introduced/amplified during the mining process. The unique challenges of enforcing fairness on graph mining include (1) theoretical challenge on non-IID nature of graph data, which may invalidate the basic assumption behind many existing studies in fair machine learning, and (2) algorithmic challenge on the dilemma of balancing model accuracy and fairness. This tutorial aims to (1) present a comprehensive review of state-of-the-art techniques in fairness on graph mining and (2) identify the open challenges and future trends. In particular, we start with reviewing the background, problem definitions, unique challenges and related problems; then we will focus on an in-depth overview of (1) recent techniques in enforcing group fairness, individual fairness and other fairness notions in the context of graph mining, and (2) future directions in studying algorithmic fairness on graphs. We believe this tutorial could be attractive to researchers and practitioners in areas including data mining, artificial intelligence, social science and beneficial to a plethora of real-world application domains.","PeriodicalId":405296,"journal":{"name":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3459637.3482030","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 21

Abstract

In today's increasingly connected world, graph mining plays a pivotal role in many real-world application domains, including social network analysis, recommendation, marketing and financial security. Tremendous efforts have been made to develop a wide range of computational models. However, recent studies have revealed that many widely applied graph mining models could suffer from potential discrimination. Fairness in graph mining aims to develop strategies to mitigate bias introduced or amplified during the mining process. The unique challenges of enforcing fairness in graph mining include (1) the theoretical challenge posed by the non-IID nature of graph data, which may invalidate the basic assumption behind many existing studies in fair machine learning, and (2) the algorithmic challenge of balancing model accuracy and fairness. This tutorial aims to (1) present a comprehensive review of state-of-the-art techniques in fair graph mining and (2) identify open challenges and future trends. In particular, we start by reviewing the background, problem definitions, unique challenges and related problems; we then give an in-depth overview of (1) recent techniques for enforcing group fairness, individual fairness and other fairness notions in the context of graph mining, and (2) future directions in studying algorithmic fairness on graphs. We believe this tutorial will be attractive to researchers and practitioners in areas including data mining, artificial intelligence and social science, and beneficial to a plethora of real-world application domains.
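To make the group and individual fairness notions mentioned in the abstract concrete, the following is a minimal, hypothetical sketch (not taken from the tutorial itself) that evaluates node-level predictions on a toy graph. It computes a standard group fairness quantity, the statistical parity difference between two demographic groups of nodes, and a simple individual-fairness proxy that measures how much predictions differ across connected nodes, under the illustrative assumption that graph adjacency serves as the similarity measure. All names and the toy data are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' implementation): two simple fairness
# checks on node-level predictions over a toy graph. Assumes binary
# predictions and a binary sensitive attribute per node.
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Group fairness: |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

def edge_consistency_gap(y_score, edges):
    """Individual-fairness proxy: average |score(u) - score(v)| over edges,
    treating adjacency as the node-similarity measure (an assumption)."""
    y_score = np.asarray(y_score, dtype=float)
    diffs = [abs(y_score[u] - y_score[v]) for u, v in edges]
    return float(np.mean(diffs))

if __name__ == "__main__":
    # Toy example: 6 nodes, binary predictions, binary sensitive attribute.
    y_pred = [1, 0, 1, 1, 0, 0]
    sensitive = [0, 0, 0, 1, 1, 1]
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]

    print("statistical parity difference:", statistical_parity_difference(y_pred, sensitive))
    print("edge-level consistency gap:", edge_consistency_gap(y_pred, edges))
```

A perfectly group-fair predictor would drive the statistical parity difference to zero, while a perfectly individually fair predictor (under the adjacency-as-similarity assumption) would drive the edge-level gap to zero; the tension between minimizing such quantities and preserving accuracy is exactly the accuracy-fairness dilemma the abstract highlights.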