An Overview on the Use of Adversarial Learning Strategies to Ensure Fairness in Machine Learning Models

Luiz Fernando F. P. de Lima, D. R. D. Ricarte, C. Siebra
{"title":"在机器学习模型中使用对抗性学习策略以确保公平性的概述","authors":"Luiz Fernando F. P. de Lima, D. R. D. Ricarte, C. Siebra","doi":"10.1145/3535511.3535517","DOIUrl":null,"url":null,"abstract":"Context: The information age brought wide data availability, which allowed technological advances, especially when looking at machine learning (ML) algorithms that have achieved significant results for the most diverse tasks. Thus, information systems are now implementing and incorporating these algorithms, including in critical areas. Problem: Given this widespread use and already observed examples of misuse of its decisions, it is essential to consider the harm and social impacts that ML models can bring for society, for example, biased and discriminatory decisions coming from biased data or programmers. Solution: This article provides an overview of an eminent area of study on the use of adversarial learning to encode fairness constraints in ML models. IS Theory: This work is related to socio-technical theory since we consider one of the so-called socio-algorithmic problems, algorithmic discrimination. We consider a specific set of approaches to encoding fair behaviors. Method: We selected and analyzed the literature works on the use of adversarial learning for encoding fairness, aiming to answer defined research questions. Summary of Results: As main results, this work presents answers to the following research questions: What is the type of their approach? What fairness constraints did they encode into their models? What evaluation metrics did they use to assess their proposals? What datasets did they use? Contributions and Impact in the IS area: We expect to assist future research in the fairness area. Thus the article’s main contribution is to provide a reference for the community, summarizing the main topics about the adversarial learning approaches for achieving fairness.","PeriodicalId":106528,"journal":{"name":"Proceedings of the XVIII Brazilian Symposium on Information Systems","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Overview on the Use of Adversarial Learning Strategies to Ensure Fairness in Machine Learning Models\",\"authors\":\"Luiz Fernando F. P. de Lima, D. R. D. Ricarte, C. Siebra\",\"doi\":\"10.1145/3535511.3535517\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Context: The information age brought wide data availability, which allowed technological advances, especially when looking at machine learning (ML) algorithms that have achieved significant results for the most diverse tasks. Thus, information systems are now implementing and incorporating these algorithms, including in critical areas. Problem: Given this widespread use and already observed examples of misuse of its decisions, it is essential to consider the harm and social impacts that ML models can bring for society, for example, biased and discriminatory decisions coming from biased data or programmers. Solution: This article provides an overview of an eminent area of study on the use of adversarial learning to encode fairness constraints in ML models. IS Theory: This work is related to socio-technical theory since we consider one of the so-called socio-algorithmic problems, algorithmic discrimination. We consider a specific set of approaches to encoding fair behaviors. 
Method: We selected and analyzed the literature works on the use of adversarial learning for encoding fairness, aiming to answer defined research questions. Summary of Results: As main results, this work presents answers to the following research questions: What is the type of their approach? What fairness constraints did they encode into their models? What evaluation metrics did they use to assess their proposals? What datasets did they use? Contributions and Impact in the IS area: We expect to assist future research in the fairness area. Thus the article’s main contribution is to provide a reference for the community, summarizing the main topics about the adversarial learning approaches for achieving fairness.\",\"PeriodicalId\":106528,\"journal\":{\"name\":\"Proceedings of the XVIII Brazilian Symposium on Information Systems\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the XVIII Brazilian Symposium on Information Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3535511.3535517\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the XVIII Brazilian Symposium on Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3535511.3535517","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Context: The information age brought wide data availability, enabling technological advances, especially machine learning (ML) algorithms, which have achieved significant results on highly diverse tasks. Information systems now implement and incorporate these algorithms, including in critical areas.

Problem: Given this widespread use, and the already observed examples of misuse of their decisions, it is essential to consider the harm and social impact that ML models can bring to society, for example, biased and discriminatory decisions stemming from biased data or biased programmers.

Solution: This article provides an overview of a prominent area of study: the use of adversarial learning to encode fairness constraints in ML models.

IS Theory: This work relates to socio-technical theory, since we consider one of the so-called socio-algorithmic problems, algorithmic discrimination, and a specific set of approaches to encoding fair behavior.

Method: We selected and analyzed the literature on the use of adversarial learning for encoding fairness, aiming to answer defined research questions.

Summary of Results: As its main results, this work answers the following research questions: What type of approach does each work take? What fairness constraints were encoded into the models? What evaluation metrics were used to assess the proposals? What datasets were used?

Contributions and Impact in the IS area: We expect to assist future research in the fairness area. The article's main contribution is therefore to provide a reference for the community, summarizing the main topics on adversarial learning approaches for achieving fairness.
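To make the surveyed technique concrete, below is a minimal, hypothetical PyTorch sketch of adversarial fairness training: a predictor is trained on its main task while an adversary tries to recover the sensitive attribute from the predictor's output, and the predictor is additionally rewarded for defeating the adversary. The network sizes, the `train_step` helper, and the trade-off weight `lam` are illustrative assumptions, not the implementation of any specific surveyed work.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Predictor solves the main task; the adversary tries to recover the
# sensitive attribute s from the predictor's output logit.
predictor = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # assumed fairness/accuracy trade-off weight

def train_step(x, y, s):
    """x: features, y: binary task label, s: binary sensitive attribute."""
    # (1) Adversary update: learn to predict s from the (detached) output.
    opt_a.zero_grad()
    adv_loss = bce(adversary(predictor(x).detach()), s)
    adv_loss.backward()
    opt_a.step()

    # (2) Predictor update: fit y while making the adversary fail,
    # i.e. subtract the adversary's loss so the predictor maximizes it.
    opt_p.zero_grad()
    y_hat = predictor(x)
    loss = bce(y_hat, y) - lam * bce(adversary(y_hat), s)
    loss.backward()
    opt_p.step()
    return loss.item()

# Toy usage: 64 samples, 10 features, binary label and sensitive attribute.
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64, 1)).float()
s = torch.randint(0, 2, (64, 1)).float()
print(train_step(x, y, s))
```

Intuitively, when the adversary can no longer recover s from the predictor's output better than chance, that output is approximately independent of s, which corresponds to a demographic-parity-style constraint, one of the fairness constraints the survey's research questions ask about. Among the evaluation metrics reported in this literature, a common one is the demographic parity difference; a small sketch (again an illustrative assumption, not the paper's code):

```python
import numpy as np

def demographic_parity_difference(y_pred, s):
    """|P(y_pred=1 | s=0) - P(y_pred=1 | s=1)| for binary predictions."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

print(demographic_parity_difference([1, 1, 0, 0], [0, 0, 1, 1]))  # 1.0
```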