Representation-based fairness evaluation and bias correction robustness assessment in neural networks

IF 4.3 | CAS Tier 2, Computer Science | JCR Q2, Computer Science, Information Systems
Qiaolin Qin, Benjamin Djian, Ettore Merlo, Heng Li, Sébastien Gambs
{"title":"Representation-based fairness evaluation and bias correction robustness assessment in neural networks","authors":"Qiaolin Qin ,&nbsp;Benjamin Djian ,&nbsp;Ettore Merlo ,&nbsp;Heng Li ,&nbsp;Sébastien Gambs","doi":"10.1016/j.infsof.2025.107876","DOIUrl":null,"url":null,"abstract":"<div><h3>Context:</h3><div>While machine learning has achieved high predictive performance in many domains, decisions may still be biased and unfair regarding specific demographic groups characterized by sensitive attributes such as gender, age, or race.</div></div><div><h3>Objectives:</h3><div>In this paper, we introduce a novel approach to assess model fairness and bias correction robustness based on Computational Profile Distance (CPD) analysis with respect to sensitive attributes.</div></div><div><h3>Methods:</h3><div>To study model fairness, we quantify the model’s representation difference using the computational profile learned from different subgroups (e.g., male and female) on the individual and group level. To analyze the robustness of bias correction outcomes, we compare the correction suggestions provided based on confidence (i.e., softmax score) and likelihood (i.e., CPD).</div></div><div><h3>Results:</h3><div>To demonstrate the potential of the proposed approach, experiments have been performed using 24 models targeting 3 datasets used in previous fairness studies. Our experiments showed that computational profile distributions can effectively address model fairness from a representation perspective. Further, the experiments indicated that confidence-based bias correction decisions can vary largely from likelihood-based ones, and we should take both suggestions into account to obtain robust outcomes.</div></div><div><h3>Conclusion:</h3><div>Demonstrated with a set of experiments, our CPD-based approaches can help users build their trust in fairness assessment and bias mitigation of AI decisions, in ethically sensitive domains such as human resources, finance, health, and more.</div></div>","PeriodicalId":54983,"journal":{"name":"Information and Software Technology","volume":"188 ","pages":"Article 107876"},"PeriodicalIF":4.3000,"publicationDate":"2025-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information and Software Technology","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950584925002150","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Context:

While machine learning models have achieved high predictive performance in many domains, their decisions may still be biased and unfair toward specific demographic groups characterized by sensitive attributes such as gender, age, or race.

Objectives:

In this paper, we introduce a novel approach, based on Computational Profile Distance (CPD) analysis with respect to sensitive attributes, for assessing model fairness and the robustness of bias correction.

Methods:

To study model fairness, we quantify differences in the model's representations using the computational profiles learned from different subgroups (e.g., male and female), at both the individual and the group level. To analyze the robustness of bias correction outcomes, we compare the correction suggestions produced based on confidence (i.e., the softmax score) with those based on likelihood (i.e., CPD).
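
Since the abstract does not define the computational profile or CPD formally, the following minimal sketch illustrates one plausible reading: the profile of an input is a hidden-layer activation vector, and CPD is a Euclidean distance between a profile and a subgroup's mean profile (individual level) or between two subgroups' mean profiles (group level). The function names and the distance choice are assumptions for illustration, not the paper's actual definitions.

```python
# A minimal sketch, NOT the paper's actual definitions: we assume the
# "computational profile" of an input is a hidden-layer activation vector,
# and that CPD is a Euclidean distance between profiles or profile centroids.
# The names hidden_profile, group_cpd, and individual_cpd are hypothetical.
import numpy as np

def hidden_profile(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Assumed computational profile: ReLU activations of one hidden layer."""
    return np.maximum(weights @ x, 0.0)

def group_cpd(profiles_a: np.ndarray, profiles_b: np.ndarray) -> float:
    """Group-level distance between the mean profiles of two subgroups."""
    return float(np.linalg.norm(profiles_a.mean(axis=0) - profiles_b.mean(axis=0)))

def individual_cpd(profile: np.ndarray, group_profiles: np.ndarray) -> float:
    """Individual-level distance of one sample's profile from a subgroup centroid."""
    return float(np.linalg.norm(profile - group_profiles.mean(axis=0)))

# Toy usage with random data standing in for two subgroups (e.g., male/female).
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))  # weights of a single hidden layer
group_a = np.stack([hidden_profile(W, rng.normal(size=8)) for _ in range(100)])
group_b = np.stack([hidden_profile(W, rng.normal(size=8)) for _ in range(100)])
print("group-level CPD:", group_cpd(group_a, group_b))
print("individual-level CPD:", individual_cpd(group_a[0], group_b))
```

Under this reading, a large group-level CPD would indicate that the network computes systematically different internal representations for the two subgroups, which is the representation-based view of fairness the abstract describes.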

Results:

To demonstrate the potential of the proposed approach, we performed experiments with 24 models on 3 datasets used in previous fairness studies. The experiments showed that computational profile distributions can effectively capture model fairness from a representation perspective. They further indicated that confidence-based bias correction decisions can differ substantially from likelihood-based ones, and that both kinds of suggestions should be taken into account to obtain robust outcomes.
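
As a hedged illustration of how such a comparison might look (the decision rules and thresholds below are assumptions, not the paper's actual procedure): a confidence-based rule flags a prediction for correction when its softmax score is weak, a likelihood-based rule flags it when its profile lies closer to the opposite subgroup's centroid, and the instances where the two rules disagree are the ones that warrant joint consideration.

```python
# A hedged sketch of the confidence-vs-likelihood comparison; the thresholds
# and decision rules below are assumptions for illustration, not the paper's
# actual correction procedure.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def confidence_suggests_correction(logits: np.ndarray, threshold: float = 0.6) -> bool:
    """Confidence-based rule: flag when the winning softmax score is weak."""
    return softmax(logits).max() < threshold

def likelihood_suggests_correction(cpd_same: float, cpd_other: float) -> bool:
    """Likelihood-based rule: flag when the profile is closer to the other
    subgroup's centroid than to its own."""
    return cpd_other < cpd_same

# Count how often the two rules disagree on synthetic instances; the abstract
# reports that such disagreements can be substantial in practice.
rng = np.random.default_rng(1)
disagreements = 0
for _ in range(1000):
    logits = rng.normal(size=2)                  # binary classifier output
    cpd_same, cpd_other = rng.uniform(0.0, 1.0, size=2)
    if confidence_suggests_correction(logits) != likelihood_suggests_correction(cpd_same, cpd_other):
        disagreements += 1
print(f"rules disagree on {disagreements}/1000 synthetic instances")
```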

Conclusion:

As demonstrated by our experiments, our CPD-based approaches can help users build trust in the fairness assessment and bias mitigation of AI decisions in ethically sensitive domains such as human resources, finance, and health.
Source journal

Information and Software Technology (Engineering & Technology - Computer Science: Software Engineering)

CiteScore: 9.10
Self-citation rate: 7.70%
Articles published: 164
Review time: 9.6 weeks
Aims and scope: Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:

• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development

Short Communications is a section dedicated to short papers addressing new ideas, controversial opinions, "negative" results and more; read the Guide for authors for more information. The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within its scope, and is a premier outlet for systematic literature studies in software engineering.