An Overview on the Use of Adversarial Learning Strategies to Ensure Fairness in Machine Learning Models
Luiz Fernando F. P. de Lima, D. R. D. Ricarte, C. Siebra
Proceedings of the XVIII Brazilian Symposium on Information Systems, 2022-05-16. DOI: 10.1145/3535511.3535517
Abstract
Context: The information age brought wide data availability, enabling technological advances, especially in machine learning (ML) algorithms, which have achieved significant results across a wide range of tasks. Thus, information systems now implement and incorporate these algorithms, including in critical areas. Problem: Given this widespread use and the already observed examples of misuse of their decisions, it is essential to consider the harm and social impacts that ML models can bring to society, for example, biased and discriminatory decisions arising from biased data or programmers. Solution: This article provides an overview of a prominent area of study: the use of adversarial learning to encode fairness constraints in ML models. IS Theory: This work is related to socio-technical theory, since we consider one of the so-called socio-algorithmic problems, algorithmic discrimination, and a specific set of approaches for encoding fair behavior. Method: We selected and analyzed the literature on the use of adversarial learning for encoding fairness, aiming to answer defined research questions. Summary of Results: As its main results, this work answers the following research questions: What type of approach does each work take? What fairness constraints did they encode into their models? What evaluation metrics did they use to assess their proposals? What datasets did they use? Contributions and Impact in the IS area: We expect to assist future research in the fairness area. Thus, the article's main contribution is to provide a reference for the community, summarizing the main topics of adversarial learning approaches for achieving fairness.
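To make the surveyed idea concrete, the following is a minimal sketch of adversarial debiasing in the style of Zhang et al. (2018), one well-known instance of the adversarial fairness approaches the article reviews. It is not a method taken from the survey itself: the data, network sizes, and hyperparameters are synthetic placeholders, and the objective is a simplified GAN-style variant in which a predictor learns the task while an adversary tries to recover the protected attribute from the predictor's output.

```python
# Illustrative sketch only (assumed setup, not the article's method): a task
# predictor is trained both to fit the labels and to defeat an adversary that
# tries to infer the protected attribute from the predictor's logits.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic placeholder data: 8 features, binary label y, binary protected attribute a.
X = torch.randn(256, 8)
y = torch.randint(0, 2, (256, 1)).float()
a = torch.randint(0, 2, (256, 1)).float()

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # trade-off between task accuracy and the fairness penalty

for step in range(200):
    # 1) Adversary step: learn to predict the protected attribute from the
    #    predictor's (detached) output logits.
    a_hat = adversary(predictor(X).detach())
    loss_a = bce(a_hat, a)
    opt_a.zero_grad()
    loss_a.backward()
    opt_a.step()

    # 2) Predictor step: minimize task loss while *maximizing* the adversary's
    #    loss, so the predictor's output carries less information about a.
    y_hat = predictor(X)
    loss_p = bce(y_hat, y) - lam * bce(adversary(y_hat), a)
    opt_p.zero_grad()
    loss_p.backward()
    opt_p.step()
```

Because the adversary here sees only the predicted label logits, driving its loss up pushes the predictions toward independence from the protected attribute, which approximates a demographic-parity-style constraint; conditioning the adversary on the true label as well would instead target equalized odds.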