Fairness Testing of Machine Learning Models Using Deep Reinforcement Learning

Wentao Xie, Peng Wu
{"title":"Fairness Testing of Machine Learning Models Using Deep Reinforcement Learning","authors":"Wentao Xie, Peng Wu","doi":"10.1109/TrustCom50675.2020.00029","DOIUrl":null,"url":null,"abstract":"Machine learning models play an important role for decision-making systems in areas such as hiring, insurance, and predictive policing. However, it still remains a challenge to guarantee their trustworthiness. Fairness is one of the most critical properties of these machine learning models, while individual discriminatory cases may break the trustworthiness of these systems severely. In this paper, we present a systematic approach of testing the fairness of a machine learning model, with individual discriminatory inputs generated automatically in an adaptive manner based on the state-of-the-art deep reinforcement learning techniques. Our approach can explore and exploit the input space efficiently, and find more individual discriminatory inputs within less time consumption. Case studies with typical benchmark models demonstrate the effectiveness and efficiency of our approach, compared to the state-of-the-art black-box fairness testing approaches.","PeriodicalId":221956,"journal":{"name":"2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TrustCom50675.2020.00029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

Machine learning models play an important role in decision-making systems in areas such as hiring, insurance, and predictive policing. However, guaranteeing their trustworthiness remains a challenge. Fairness is one of the most critical properties of these models, and individual discriminatory cases can severely undermine the trustworthiness of the systems built on them. In this paper, we present a systematic approach to testing the fairness of a machine learning model, in which individual discriminatory inputs are generated automatically and adaptively using state-of-the-art deep reinforcement learning techniques. Our approach explores and exploits the input space efficiently, finding more individual discriminatory inputs in less time. Case studies on typical benchmark models demonstrate the effectiveness and efficiency of our approach compared to state-of-the-art black-box fairness testing approaches.
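To make the setting concrete, the sketch below illustrates the two ingredients such a black-box fairness tester combines: an oracle that flags an input as individually discriminatory when changing only its protected attributes flips the model's prediction, and an explore/exploit search loop over the input space. Everything in it (the toy classifier, the attribute domains, the protected-attribute index, and the epsilon-greedy loop standing in for the paper's deep reinforcement learning agent) is a hypothetical illustration, not the authors' implementation.

```python
# A minimal sketch, assuming a tabular classification task with discrete
# attribute domains. The model, domains, and search loop are all illustrative
# stand-ins, not the approach from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained classifier. Black-box fairness testing
# only assumes access to a predict() call like this one.
def predict(x: np.ndarray) -> int:
    return int(x[0] + 0.5 * x[1] - 0.5 * x[2] > 1.0)

PROTECTED = [2]                     # indices of protected attributes
DOMAIN = [(0, 3), (0, 3), (0, 1)]   # per-attribute value ranges (inclusive)

def is_discriminatory(x: np.ndarray) -> bool:
    """x is an individual discriminatory input if changing only its
    protected attributes flips the model's prediction."""
    base = predict(x)
    for i in PROTECTED:
        lo, hi = DOMAIN[i]
        for v in range(lo, hi + 1):
            if v == x[i]:
                continue
            x2 = x.copy()
            x2[i] = v
            if predict(x2) != base:
                return True
    return False

# Toy epsilon-greedy search: perturb non-protected attributes one step at a
# time, rewarding perturbations that land on discriminatory inputs. The paper
# replaces crude value estimates like these with a deep RL agent.
def search(steps: int = 1000, eps: float = 0.2) -> set:
    found = set()
    x = np.array([rng.integers(lo, hi + 1) for lo, hi in DOMAIN])
    q = {}  # crude action-value estimates keyed by (attribute, delta)
    actions = [(i, d) for i in range(len(DOMAIN)) if i not in PROTECTED
               for d in (-1, +1)]
    for _ in range(steps):
        if rng.random() < eps:
            a = actions[rng.integers(len(actions))]   # explore
        else:
            a = max(actions, key=lambda a: q.get(a, 0.0))  # exploit
        i, d = a
        lo, hi = DOMAIN[i]
        x[i] = int(np.clip(x[i] + d, lo, hi))
        reward = 1.0 if is_discriminatory(x) else 0.0
        q[a] = q.get(a, 0.0) + 0.1 * (reward - q.get(a, 0.0))
        if reward:
            found.add(tuple(x))
    return found

if __name__ == "__main__":
    print(f"discriminatory inputs found: {len(search())}")
```

The point of the adaptive formulation is visible even in this toy version: actions that have previously exposed discrimination are reinforced and retried, which is what lets a learned agent explore and exploit the input space more efficiently than purely random generation.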