FAIR-CARE: A comparative evaluation of unfairness mitigation approaches

IF 4.3 · CAS Zone 2 (Computer Science) · JCR Q2 (Computer Science, Information Systems)
Chiara Criscuolo, Mattia Salnitri, Davide Martinenghi
{"title":"公平-关怀:对不公平缓解方法的比较评估","authors":"Chiara Criscuolo ,&nbsp;Mattia Salnitri ,&nbsp;Davide Martinenghi","doi":"10.1016/j.infsof.2025.107898","DOIUrl":null,"url":null,"abstract":"<div><div>Bias and unfairness in Machine Learning (ML) are challenging to detect and mitigate, particularly in critical fields such as finance, hiring, and healthcare. While numerous unfairness mitigation techniques exist, most evaluation frameworks assess only a limited set of fairness metrics, primarily focusing on the trade-off between fairness and accuracy. We introduce FAIR-CARE, a new open-source and robust approach that consists of an evaluation pipeline designed for the systematic assessment of unfairness mitigation techniques. Our approach simultaneously evaluates multiple fairness and performance metrics across various ML models. We conduct a comparative analysis on healthcare datasets with diverse distributions—including target class, protected attribute, and their joint distributions—to identify the most effective mitigation technique for each processing type (pre-, in-, and post-processing). Furthermore, we determine the best-performing techniques across different datasets, fairness metrics, performance metrics, and ML models. Finally, we provide practical insights into the application of these techniques, offering actionable guidance for both researchers and practitioners.</div></div>","PeriodicalId":54983,"journal":{"name":"Information and Software Technology","volume":"189 ","pages":"Article 107898"},"PeriodicalIF":4.3000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FAIR-CARE: A comparative evaluation of unfairness mitigation approaches\",\"authors\":\"Chiara Criscuolo ,&nbsp;Mattia Salnitri ,&nbsp;Davide Martinenghi\",\"doi\":\"10.1016/j.infsof.2025.107898\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Bias and unfairness in Machine Learning (ML) are challenging to detect and mitigate, particularly in critical fields such as finance, hiring, and healthcare. While numerous unfairness mitigation techniques exist, most evaluation frameworks assess only a limited set of fairness metrics, primarily focusing on the trade-off between fairness and accuracy. We introduce FAIR-CARE, a new open-source and robust approach that consists of an evaluation pipeline designed for the systematic assessment of unfairness mitigation techniques. Our approach simultaneously evaluates multiple fairness and performance metrics across various ML models. We conduct a comparative analysis on healthcare datasets with diverse distributions—including target class, protected attribute, and their joint distributions—to identify the most effective mitigation technique for each processing type (pre-, in-, and post-processing). Furthermore, we determine the best-performing techniques across different datasets, fairness metrics, performance metrics, and ML models. 
Finally, we provide practical insights into the application of these techniques, offering actionable guidance for both researchers and practitioners.</div></div>\",\"PeriodicalId\":54983,\"journal\":{\"name\":\"Information and Software Technology\",\"volume\":\"189 \",\"pages\":\"Article 107898\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information and Software Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S095058492500237X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information and Software Technology","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S095058492500237X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Bias and unfairness in Machine Learning (ML) are challenging to detect and mitigate, particularly in critical fields such as finance, hiring, and healthcare. While numerous unfairness mitigation techniques exist, most evaluation frameworks assess only a limited set of fairness metrics, primarily focusing on the trade-off between fairness and accuracy. We introduce FAIR-CARE, a new open-source and robust approach that consists of an evaluation pipeline designed for the systematic assessment of unfairness mitigation techniques. Our approach simultaneously evaluates multiple fairness and performance metrics across various ML models. We conduct a comparative analysis on healthcare datasets with diverse distributions—including target class, protected attribute, and their joint distributions—to identify the most effective mitigation technique for each processing type (pre-, in-, and post-processing). Furthermore, we determine the best-performing techniques across different datasets, fairness metrics, performance metrics, and ML models. Finally, we provide practical insights into the application of these techniques, offering actionable guidance for both researchers and practitioners.
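The kind of evaluation loop the abstract describes, scoring several ML models on both performance and fairness metrics at once, is straightforward to prototype. The sketch below is a minimal illustration under stated assumptions, not the authors' pipeline: the synthetic data, column roles, and the particular metric and model choices (accuracy, F1, statistical parity difference, equal opportunity difference; logistic regression and random forest) are all assumptions made here for illustration.

```python
# Minimal sketch of a multi-metric fairness/performance evaluation loop.
# Synthetic data and metric/model choices are illustrative assumptions,
# not the FAIR-CARE implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

def statistical_parity_difference(y_pred, sensitive):
    """P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1) for a binary protected attribute."""
    return y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean()

def equal_opportunity_difference(y_true, y_pred, sensitive):
    """Difference in true-positive rates between the two protected groups."""
    tpr = lambda g: y_pred[(sensitive == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))             # synthetic features (assumption)
sensitive = rng.integers(0, 2, size=1000)  # binary protected attribute
# Target correlated with the protected attribute, so the data is unfair by design.
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

models = {"logreg": LogisticRegression(max_iter=1000),
          "forest": RandomForestClassifier(random_state=0)}
for name, model in models.items():
    y_hat = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "acc=%.3f" % accuracy_score(y_te, y_hat),
          "f1=%.3f" % f1_score(y_te, y_hat),
          "spd=%.3f" % statistical_parity_difference(y_hat, s_te),
          "eod=%.3f" % equal_opportunity_difference(y_te, y_hat, s_te))
```

Reporting several fairness metrics side by side matters because, as the abstract notes, they can disagree: a model can score well on statistical parity while still differing sharply in true-positive rates across groups.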

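The abstract groups mitigation techniques into pre-, in-, and post-processing. As one concrete illustration of the pre-processing family, the sketch below implements reweighing (Kamiran & Calders, 2012), which assigns each (group, label) cell the weight P(A=a)·P(Y=y) / P(A=a, Y=y) so that the protected attribute and the target are independent in the reweighted training set. The paper does not state that this specific technique is in its benchmark; treat the function as a hypothetical example of the technique family being compared.

```python
# Sketch of reweighing, a classic pre-processing mitigation technique.
# Illustrative of the technique family compared by FAIR-CARE, not its code.
import numpy as np

def reweighing_weights(y, sensitive):
    """Per-sample weights making the protected attribute independent of the label."""
    w = np.ones(len(y))
    for a in np.unique(sensitive):
        for yv in np.unique(y):
            cell = (sensitive == a) & (y == yv)
            p_expected = (sensitive == a).mean() * (y == yv).mean()
            p_observed = cell.mean()
            if p_observed > 0:
                w[cell] = p_expected / p_observed  # >1 for under-represented cells
    return w

# The weights plug into most scikit-learn estimators, e.g.:
#   LogisticRegression().fit(X_tr, y_tr,
#                            sample_weight=reweighing_weights(y_tr, s_tr))
```

In-processing techniques would instead change the training objective itself, and post-processing techniques would adjust the trained model's predictions; reweighing only touches the data, which is why it can be paired with any downstream model.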
Source journal
Information and Software Technology (Engineering & Technology, Computer Science: Software Engineering)
CiteScore: 9.10
Self-citation rate: 7.70%
Articles per year: 164
Review time: 9.6 weeks
Journal description: Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a section dedicated to short papers addressing new ideas, controversial opinions, "negative" results and much more; see the Guide for Authors for more information. The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within its scope, and is the premiere outlet for systematic literature studies in software engineering.