Giordano d’Aloisio, Claudio Di Sipio, Antinisca Di Marco, Davide Di Ruscio
{"title":"基于数据集偏差症状的算法偏差早期检测:一项实证研究","authors":"Giordano d’Aloisio, Claudio Di Sipio, Antinisca Di Marco, Davide Di Ruscio","doi":"10.1016/j.infsof.2025.107905","DOIUrl":null,"url":null,"abstract":"<div><h3>Context:</h3><div>The rise of AI software has made fairness auditing essential, particularly where biased decisions have serious impacts. This entails identifying sensitive variables and calculating fairness metrics based on predictions from a baseline model. Since model training is computationally intensive, recent research focuses on early bias assessment to detect bias before extensive training starts.</div></div><div><h3>Objective:</h3><div>This paper presents an empirical study to evaluate how dataset statistics, named <em>bias symptoms</em>, can assist in the early identification of variables that may lead to bias in the system. The aim of this study is to avoid training a machine learning model before assessing – and, in case, mitigating – its bias, thus increasing the sustainability of the development process.</div></div><div><h3>Method:</h3><div>We first identify a <em>bias symptoms</em> dataset, employing 24 datasets from diverse application domains commonly used in fairness auditing. Through extensive empirical analysis, we investigate the ability of these <em>bias symptoms</em> to predict variables associated with bias under three fairness definitions.</div></div><div><h3>Results:</h3><div>Our results demonstrate that <em>bias symptoms</em> are effective in supporting early predictions of bias-inducing variables under specific fairness definitions.</div></div><div><h3>Conclusion:</h3><div>These findings offer valuable insights for practitioners and researchers, encouraging further exploration in developing methods for proactive bias mitigation involving bias symptoms.</div></div>","PeriodicalId":54983,"journal":{"name":"Information and Software Technology","volume":"188 ","pages":"Article 107905"},"PeriodicalIF":4.3000,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Towards early detection of algorithmic bias from dataset’s bias symptoms: An empirical study\",\"authors\":\"Giordano d’Aloisio, Claudio Di Sipio, Antinisca Di Marco, Davide Di Ruscio\",\"doi\":\"10.1016/j.infsof.2025.107905\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Context:</h3><div>The rise of AI software has made fairness auditing essential, particularly where biased decisions have serious impacts. This entails identifying sensitive variables and calculating fairness metrics based on predictions from a baseline model. Since model training is computationally intensive, recent research focuses on early bias assessment to detect bias before extensive training starts.</div></div><div><h3>Objective:</h3><div>This paper presents an empirical study to evaluate how dataset statistics, named <em>bias symptoms</em>, can assist in the early identification of variables that may lead to bias in the system. The aim of this study is to avoid training a machine learning model before assessing – and, in case, mitigating – its bias, thus increasing the sustainability of the development process.</div></div><div><h3>Method:</h3><div>We first identify a <em>bias symptoms</em> dataset, employing 24 datasets from diverse application domains commonly used in fairness auditing. 
Through extensive empirical analysis, we investigate the ability of these <em>bias symptoms</em> to predict variables associated with bias under three fairness definitions.</div></div><div><h3>Results:</h3><div>Our results demonstrate that <em>bias symptoms</em> are effective in supporting early predictions of bias-inducing variables under specific fairness definitions.</div></div><div><h3>Conclusion:</h3><div>These findings offer valuable insights for practitioners and researchers, encouraging further exploration in developing methods for proactive bias mitigation involving bias symptoms.</div></div>\",\"PeriodicalId\":54983,\"journal\":{\"name\":\"Information and Software Technology\",\"volume\":\"188 \",\"pages\":\"Article 107905\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-09-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information and Software Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950584925002447\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information and Software Technology","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950584925002447","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Towards early detection of algorithmic bias from dataset’s bias symptoms: An empirical study
Context:
The rise of AI software has made fairness auditing essential, particularly where biased decisions have serious impacts. This entails identifying sensitive variables and calculating fairness metrics based on predictions from a baseline model. Since model training is computationally intensive, recent research focuses on early bias assessment to detect bias before extensive training starts.
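For concreteness, one widely used fairness metric of this kind is statistical parity difference: the gap in positive-prediction rates between two groups defined by a sensitive variable. The sketch below is a minimal illustration of computing such a metric from baseline-model predictions; it is not taken from the paper, and the function and data are illustrative.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Gap in positive-prediction rates between two groups.

    y_pred    : 0/1 predictions from a baseline model
    sensitive : 0/1 group membership for a sensitive variable
    A value near 0 suggests parity; a large gap signals potential bias.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_priv = y_pred[sensitive == 1].mean()    # positive rate, group 1
    rate_unpriv = y_pred[sensitive == 0].mean()  # positive rate, group 0
    return rate_priv - rate_unpriv

# Toy example: predictions favour group 1 (0.75 vs 0.25 positive rate).
preds = [1, 0, 1, 1, 0, 1, 0, 0]
group = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(preds, group))  # 0.5
```

Note that computing such a metric presupposes a trained baseline model, which is precisely the cost that early bias assessment aims to avoid.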
Objective:
This paper presents an empirical study to evaluate how dataset statistics, termed bias symptoms, can assist in the early identification of variables that may lead to bias in the system. The aim of this study is to assess – and, if needed, mitigate – bias before training a machine learning model, thus increasing the sustainability of the development process.
Method:
We first construct a bias symptoms dataset from 24 datasets, drawn from diverse application domains, that are commonly used in fairness auditing. Through extensive empirical analysis, we investigate the ability of these bias symptoms to predict variables associated with bias under three fairness definitions.
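The abstract does not enumerate the concrete bias symptoms, but the idea of pre-training dataset statistics can be illustrated as follows; the three statistics below are hypothetical examples of symptoms one might compute per variable, not the paper's actual symptom set.

```python
import pandas as pd

def candidate_bias_symptoms(df: pd.DataFrame, column: str, label: str) -> dict:
    """Illustrative dataset-level statistics for a binary variable `column`.

    All statistic names are hypothetical; the paper's actual symptom
    set may differ. No model training is required.
    """
    group = df[column]
    y = df[label]
    return {
        # How unevenly the variable's two groups are represented
        # (0 = perfectly balanced, 1 = only one group present).
        "group_imbalance": abs(group.mean() - 0.5) * 2,
        # Gap in base rates of the positive label between the groups.
        "label_rate_gap": y[group == 1].mean() - y[group == 0].mean(),
        # Linear association between the variable and the label.
        "label_correlation": group.corr(y),
    }

df = pd.DataFrame({"sex": [1, 1, 1, 0, 0, 0],
                   "hired": [1, 1, 0, 0, 0, 1]})
print(candidate_bias_symptoms(df, column="sex", label="hired"))
```

Because these statistics are computed directly on the dataset, candidate bias-inducing variables can be flagged before any model is trained.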
Results:
Our results demonstrate that bias symptoms are effective in supporting early predictions of bias-inducing variables under specific fairness definitions.
Conclusion:
These findings offer valuable insights for practitioners and researchers, encouraging further exploration of bias-symptom-based methods for proactive bias mitigation.
Journal introduction:
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, "Negative" results and much more. Read the Guide for authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within the scope of the journal. Information and Software Technology is the premier outlet for systematic literature studies in software engineering.