Representation-based fairness evaluation and bias correction robustness assessment in neural networks
Qiaolin Qin, Benjamin Djian, Ettore Merlo, Heng Li, Sébastien Gambs
Information and Software Technology, Volume 188, Article 107876 (2025). DOI: 10.1016/j.infsof.2025.107876
Abstract
Context:
While machine learning has achieved high predictive performance in many domains, its decisions may still be biased and unfair toward specific demographic groups characterized by sensitive attributes such as gender, age, or race.
Objectives:
In this paper, we introduce a novel approach, based on Computational Profile Distance (CPD) analysis with respect to sensitive attributes, for assessing model fairness and the robustness of bias correction.
Methods:
To study model fairness, we quantify a model's representation differences using the computational profiles learned from different subgroups (e.g., male and female) at the individual and group levels. To analyze the robustness of bias correction outcomes, we compare correction suggestions based on confidence (i.e., softmax score) with those based on likelihood (i.e., CPD), as illustrated in the sketch below.
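To make these quantities concrete, the following is a minimal, hypothetical sketch, not the paper's actual CPD formulation. It assumes a "computational profile" is the vector of a model's hidden-layer activations and approximates CPD as the Euclidean distance from a profile to a subgroup's mean profile; the function names (computational_profile, cpd, group_gap, softmax_confidence) and the synthetic data are illustrative assumptions.

```python
# Hypothetical illustration of profile-based fairness signals; the paper's
# actual CPD definition may differ from this Euclidean-distance proxy.
import numpy as np

def computational_profile(hidden_activations: np.ndarray) -> np.ndarray:
    # Flatten a model's hidden-layer activations into one profile vector.
    return hidden_activations.ravel()

def cpd(profile: np.ndarray, group_profiles: np.ndarray) -> float:
    # Individual-level signal: distance from one input's profile to the
    # centroid of a subgroup's profiles (illustrative proxy for CPD).
    centroid = group_profiles.mean(axis=0)
    return float(np.linalg.norm(profile - centroid))

def group_gap(profiles_a: np.ndarray, profiles_b: np.ndarray) -> float:
    # Group-level signal: distance between the two subgroup centroids.
    return float(np.linalg.norm(profiles_a.mean(axis=0) - profiles_b.mean(axis=0)))

def softmax_confidence(logits: np.ndarray) -> float:
    # Confidence of the predicted class: maximum softmax score.
    exp = np.exp(logits - logits.max())
    return float((exp / exp.sum()).max())

rng = np.random.default_rng(0)
profiles_male = rng.normal(0.0, 1.0, size=(100, 32))    # synthetic activations
profiles_female = rng.normal(0.3, 1.0, size=(100, 32))  # synthetic activations
x = computational_profile(rng.normal(0.3, 1.0, size=32))

# Intuition: a correction suggestion is more robust when the confidence-based
# signal (softmax score) and the likelihood-based signal (CPD) agree.
print(f"CPD to male subgroup:   {cpd(x, profiles_male):.3f}")
print(f"CPD to female subgroup: {cpd(x, profiles_female):.3f}")
print(f"Group-level gap:        {group_gap(profiles_male, profiles_female):.3f}")
print(f"Softmax confidence:     {softmax_confidence(rng.normal(size=4)):.3f}")
```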
Results:
To demonstrate the potential of the proposed approach, we performed experiments using 24 models across 3 datasets from previous fairness studies. The experiments showed that computational profile distributions can effectively assess model fairness from a representation perspective. They further indicated that confidence-based bias correction decisions can differ substantially from likelihood-based ones, so both kinds of suggestions should be taken into account to obtain robust outcomes.
Conclusion:
As demonstrated by our experiments, the proposed CPD-based approaches can help users build trust in the fairness assessment and bias mitigation of AI decisions in ethically sensitive domains such as human resources, finance, and health.
Journal Introduction:
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, negative results, and more. Read the Guide for Authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within its scope. Information and Software Technology is the premier outlet for systematic literature studies in software engineering.