Guidelines for designing visualization tools for group fairness analysis in binary classification

IF 13.9 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
António Cruz, Teresa Salazar, Manuel Carvalho, Catarina Maçãs, Penousal Machado, Pedro Henriques Abreu
{"title":"二元分类中群体公平性分析可视化工具设计指南","authors":"António Cruz,&nbsp;Teresa Salazar,&nbsp;Manuel Carvalho,&nbsp;Catarina Maçãs,&nbsp;Penousal Machado,&nbsp;Pedro Henriques Abreu","doi":"10.1007/s10462-025-11179-w","DOIUrl":null,"url":null,"abstract":"<div><p>The use of machine learning in decision-making has become increasingly pervasive across various fields, from healthcare to finance, enabling systems to learn from data and improve their performance over time. The transformative impact of these new technologies warrants several considerations that demand the development of modern solutions through responsible artificial intelligence—the incorporation of ethical principles into the creation and deployment of AI systems. Fairness is one such principle, ensuring that machine learning algorithms do not produce biased outcomes or discriminate against any group of the population with respect to sensitive attributes, such as race or gender. In this context, visualization techniques can help identify data imbalances and disparities in model performance across different demographic groups. However, there is a lack of guidance towards clear and effective representations that support entry-level users in fairness analysis, particularly when considering that the approaches to fairness visualization can vary significantly. In this regard, the goal of this work is to present a comprehensive analysis of current tools directed at visualizing and examining group fairness in machine learning, with a focus on both data and binary classification model outcomes. These visualization tools are reviewed and discussed, concluding with the proposition of a focused set of visualization guidelines directed towards improving the comprehensibility of fairness visualizations.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 6","pages":""},"PeriodicalIF":13.9000,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11179-w.pdf","citationCount":"0","resultStr":"{\"title\":\"Guidelines for designing visualization tools for group fairness analysis in binary classification\",\"authors\":\"António Cruz,&nbsp;Teresa Salazar,&nbsp;Manuel Carvalho,&nbsp;Catarina Maçãs,&nbsp;Penousal Machado,&nbsp;Pedro Henriques Abreu\",\"doi\":\"10.1007/s10462-025-11179-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The use of machine learning in decision-making has become increasingly pervasive across various fields, from healthcare to finance, enabling systems to learn from data and improve their performance over time. The transformative impact of these new technologies warrants several considerations that demand the development of modern solutions through responsible artificial intelligence—the incorporation of ethical principles into the creation and deployment of AI systems. Fairness is one such principle, ensuring that machine learning algorithms do not produce biased outcomes or discriminate against any group of the population with respect to sensitive attributes, such as race or gender. In this context, visualization techniques can help identify data imbalances and disparities in model performance across different demographic groups. 
However, there is a lack of guidance towards clear and effective representations that support entry-level users in fairness analysis, particularly when considering that the approaches to fairness visualization can vary significantly. In this regard, the goal of this work is to present a comprehensive analysis of current tools directed at visualizing and examining group fairness in machine learning, with a focus on both data and binary classification model outcomes. These visualization tools are reviewed and discussed, concluding with the proposition of a focused set of visualization guidelines directed towards improving the comprehensibility of fairness visualizations.</p></div>\",\"PeriodicalId\":8449,\"journal\":{\"name\":\"Artificial Intelligence Review\",\"volume\":\"58 6\",\"pages\":\"\"},\"PeriodicalIF\":13.9000,\"publicationDate\":\"2025-03-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10462-025-11179-w.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10462-025-11179-w\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11179-w","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


The use of machine learning in decision-making has become increasingly pervasive across various fields, from healthcare to finance, enabling systems to learn from data and improve their performance over time. The transformative impact of these new technologies warrants several considerations that demand the development of modern solutions through responsible artificial intelligence—the incorporation of ethical principles into the creation and deployment of AI systems. Fairness is one such principle, ensuring that machine learning algorithms do not produce biased outcomes or discriminate against any group of the population with respect to sensitive attributes, such as race or gender. In this context, visualization techniques can help identify data imbalances and disparities in model performance across different demographic groups. However, there is a lack of guidance on clear and effective representations that support entry-level users in fairness analysis, particularly given that approaches to fairness visualization can vary significantly. In this regard, the goal of this work is to present a comprehensive analysis of current tools for visualizing and examining group fairness in machine learning, with a focus on both data and binary classification model outcomes. These visualization tools are reviewed and discussed, concluding with a focused set of proposed visualization guidelines aimed at improving the comprehensibility of fairness visualizations.
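As a concrete illustration of what the abstract calls "disparities in model performance across different demographic groups", the minimal Python sketch below (our own example, not code from the paper; the function name and toy data are hypothetical) computes two standard group-fairness quantities for a binary classifier: the per-group selection rate, whose gap across groups is the demographic parity difference, and the per-group true-positive rate, whose gap underlies equal opportunity.

```python
# Illustrative sketch only (not from the paper): computes the kind of
# per-group quantities that group-fairness visualizations typically show
# for a binary classifier. All names and toy data are hypothetical.
from collections import defaultdict

def group_rates(y_true, y_pred, groups):
    """Per-group selection rate P(Yhat=1 | G=g) and TPR P(Yhat=1 | Y=1, G=g)."""
    counts = defaultdict(lambda: {"n": 0, "pred_pos": 0, "true_pos": 0, "tp": 0})
    for yt, yp, g in zip(y_true, y_pred, groups):
        c = counts[g]
        c["n"] += 1
        c["pred_pos"] += yp        # predicted positives in group g
        c["true_pos"] += yt        # actual positives in group g
        c["tp"] += yt * yp         # true positives (both label and prediction are 1)
    return {
        g: {
            "selection_rate": c["pred_pos"] / c["n"],
            "tpr": c["tp"] / c["true_pos"] if c["true_pos"] else float("nan"),
        }
        for g, c in counts.items()
    }

# Hypothetical labels, predictions, and sensitive-attribute values.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = group_rates(y_true, y_pred, groups)
for g in sorted(rates):
    r = rates[g]
    print(f"group {g}: selection rate = {r['selection_rate']:.2f}, TPR = {r['tpr']:.2f}")

# Demographic parity difference: the gap between group selection rates.
# The equal-opportunity difference would be the analogous gap in TPRs.
sel = [r["selection_rate"] for r in rates.values()]
print(f"demographic parity difference = {max(sel) - min(sel):.2f}")
```

A fairness visualization tool of the kind the paper surveys would typically render these per-group rates side by side, for example as grouped bar charts, so that such disparities are visible at a glance rather than buried in aggregate accuracy figures.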

Source journal
Artificial Intelligence Review (CAS category: Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Articles per year: 194
Average review time: 5.3 months
Journal description: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.