Fairness for Deep Learning Predictions Using Bias Parity Score Based Loss Function Regularization

IF 1.0 | CAS Tier 4 (Computer Science) | JCR Q4 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Bhanu Jain, Manfred Huber, R. Elmasri
{"title":"利用基于偏差奇偶校验得分的损失函数正则化实现深度学习预测的公平性","authors":"Bhanu Jain, Manfred Huber, R. Elmasri","doi":"10.1142/s0218213024600030","DOIUrl":null,"url":null,"abstract":"Rising acceptance of machine learning driven decision support systems underscores the need for ensuring fairness for all stakeholders. This work proposes a novel approach to increase a Neural Network model’s fairness during the training phase. We offer a frame-work to create a family of diverse fairness enhancing regularization components that can be used in tandem with the widely accepted binary-cross-entropy based accuracy loss. We use Bias Parity Score (BPS), a metric that quantifies model bias with a single value, to build loss functions pertaining to different statistical measures — even for those that may not be developed yet. We analyze behavior and impact of the newly minted regularization components on bias. We explore their impact in the realm of recidivism and census-based adult income prediction. The results illustrate that apt fairness loss functions can mitigate bias without forsaking accuracy even for imbalanced datasets.","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.0000,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fairness for Deep Learning Predictions Using Bias Parity Score Based Loss Function Regularization\",\"authors\":\"Bhanu Jain, Manfred Huber, R. Elmasri\",\"doi\":\"10.1142/s0218213024600030\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Rising acceptance of machine learning driven decision support systems underscores the need for ensuring fairness for all stakeholders. This work proposes a novel approach to increase a Neural Network model’s fairness during the training phase. We offer a frame-work to create a family of diverse fairness enhancing regularization components that can be used in tandem with the widely accepted binary-cross-entropy based accuracy loss. We use Bias Parity Score (BPS), a metric that quantifies model bias with a single value, to build loss functions pertaining to different statistical measures — even for those that may not be developed yet. We analyze behavior and impact of the newly minted regularization components on bias. We explore their impact in the realm of recidivism and census-based adult income prediction. 
The results illustrate that apt fairness loss functions can mitigate bias without forsaking accuracy even for imbalanced datasets.\",\"PeriodicalId\":50280,\"journal\":{\"name\":\"International Journal on Artificial Intelligence Tools\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2024-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal on Artificial Intelligence Tools\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1142/s0218213024600030\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal on Artificial Intelligence Tools","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1142/s0218213024600030","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Rising acceptance of machine learning driven decision support systems underscores the need for ensuring fairness for all stakeholders. This work proposes a novel approach to increase a Neural Network model's fairness during the training phase. We offer a framework to create a family of diverse fairness-enhancing regularization components that can be used in tandem with the widely accepted binary cross-entropy based accuracy loss. We use Bias Parity Score (BPS), a metric that quantifies model bias with a single value, to build loss functions based on different statistical measures, even ones that have not yet been developed. We analyze the behavior of the newly introduced regularization components and their impact on bias. We explore their impact on recidivism prediction and census-based adult income prediction. The results illustrate that well-chosen fairness loss functions can mitigate bias without forsaking accuracy, even for imbalanced datasets.
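The abstract does not spell out the loss formulation, so the following PyTorch sketch is only an illustration of the general idea: an accuracy loss (binary cross-entropy) combined with a fairness regularization term derived from a BPS-style parity ratio. The function name `bps_regularized_loss`, the choice of per-group statistic (mean predicted positive probability), the min/max ratio form of BPS, and the weight `lam` are assumptions made for this sketch, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

def bps_regularized_loss(logits, labels, group, lam=1.0, eps=1e-8):
    """Binary cross-entropy plus a BPS-style fairness penalty (illustrative sketch).

    Assumptions (not taken from the paper): the per-group statistic is the mean
    predicted positive probability, BPS is the min/max ratio of that statistic
    across two groups (1.0 means perfect parity), and the penalty is lam * (1 - BPS).
    Both groups are assumed to be present in every batch.
    """
    probs = torch.sigmoid(logits)

    # Standard accuracy loss: binary cross-entropy.
    bce = F.binary_cross_entropy(probs, labels.float())

    # Differentiable per-group statistic (soft positive prediction rate).
    rate_a = probs[group == 0].mean()
    rate_b = probs[group == 1].mean()

    # Bias Parity Score as a min/max ratio in [0, 1]; 1 means parity.
    bps = torch.minimum(rate_a, rate_b) / (torch.maximum(rate_a, rate_b) + eps)

    # Combined objective: accuracy loss plus fairness regularization.
    return bce + lam * (1.0 - bps)

# Hypothetical usage:
# logits = model(x).squeeze(-1)                      # shape (N,)
# loss = bps_regularized_loss(logits, y, sensitive_attr, lam=0.5)
# loss.backward()
```

Swapping the per-group statistic (for example, a soft true-positive rate instead of the positive prediction rate) would yield a different member of the family of fairness regularizers the abstract describes, while the overall structure of the combined loss stays the same.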
Source journal
International Journal on Artificial Intelligence Tools
Category: Engineering & Technology / Computer Science: Interdisciplinary Applications
CiteScore: 2.10
Self-citation rate: 9.10%
Publication volume: 66
Review time: 8.5 months
Journal description: The International Journal on Artificial Intelligence Tools (IJAIT) provides an interdisciplinary forum in which AI scientists and professionals can share their research results and report new advances on AI tools or tools that use AI. Tools refer to architectures, languages or algorithms, which constitute the means connecting theory with applications. So, IJAIT is a medium for promoting general and/or special purpose tools, which are very important for the evolution of science and manipulation of knowledge. IJAIT can also be used as a test ground for new AI tools. Topics covered by IJAIT include but are not limited to: AI in Bioinformatics, AI for Service Engineering, AI for Software Engineering, AI for Ubiquitous Computing, AI for Web Intelligence Applications, AI Parallel Processing Tools (hardware/software), AI Programming Languages, AI Tools for CAD and VLSI Analysis/Design/Testing, AI Tools for Computer Vision and Speech Understanding, AI Tools for Multimedia, Cognitive Informatics, Data Mining and Machine Learning Tools, Heuristic and AI Planning Strategies and Tools, Image Understanding, Integrated/Hybrid AI Approaches, Intelligent System Architectures, Knowledge-Based/Expert Systems, Knowledge Management and Processing Tools, Knowledge Representation Languages, Natural Language Understanding, Neural Networks for AI, Object-Oriented Programming for AI, Reasoning and Evolution of Knowledge Bases, Self-Healing and Autonomous Systems, and Software Engineering for AI.