Enforcing fairness in logistic regression algorithm
S. Radovanović, A. Petrović, Boris Delibasic, Milija Suknovic
2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), August 2020. DOI: 10.1109/INISTA49547.2020.9194676
Machine learning has been the subject of legal and ethical discussion in recent years. Automating the decision-making process can lead to unethical acts with legal consequences. There are examples where decisions made by machine learning systems were unfairly biased toward some groups of people. This happens mainly because the data used for model training were biased, so the resulting predictive model inherited that bias. Therefore, the process of learning a predictive model must be aware of and account for possible bias in the data. In this paper, we propose a modification of the logistic regression algorithm that adds one known and one novel fairness constraint to the model learning process, forcing the predictive model to avoid disparate impact and to allow equal opportunity for every subpopulation. We demonstrate our model on real-world problems and show that a small reduction in predictive performance can yield a large improvement in disparate impact and equality of opportunity.
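The abstract does not spell out the constraints themselves, but the general idea of fairness-constrained logistic regression can be sketched as follows. The snippet below is a minimal illustration, not the authors' formulation: it assumes a covariance-style penalty between the sensitive attribute and the decision scores, a common proxy for disparate impact in the fair-ML literature. The synthetic data, the penalty weight `lam`, and all function names are hypothetical; an analogous penalty computed only over positive examples (y = 1) would instead target equal opportunity.

```python
# Sketch: logistic regression with an added fairness penalty (assumed form,
# not the paper's exact constraints).
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def penalized_loss(w, X, y, s, lam):
    """Negative log-likelihood plus a fairness penalty.

    X: (n, d) features, y: (n,) labels in {0, 1},
    s: (n,) binary sensitive attribute, lam: penalty weight (assumption).
    """
    z = X @ w
    p = sigmoid(z)
    eps = 1e-12
    nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # Covariance between the sensitive attribute and the decision scores;
    # driving it toward zero pushes the model toward statistical parity,
    # i.e. reduced disparate impact.
    fairness = np.abs(np.mean((s - s.mean()) * z))
    return nll + lam * fairness

# Illustrative usage on synthetic, deliberately biased data.
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
s = (rng.random(n) < 0.5).astype(float)                          # sensitive attribute
y = (X[:, 0] + 0.8 * s + rng.normal(size=n) > 0).astype(float)   # labels correlated with s

w0 = np.zeros(d)
res = minimize(penalized_loss, w0, args=(X, y, s, 1.0), method="L-BFGS-B")
print("learned weights:", res.x)
```

Increasing `lam` trades predictive performance for fairness, which mirrors the trade-off the paper reports: a small loss in accuracy for a large gain in disparate impact and equality of opportunity.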