The Crucial Role of Sensitive Attributes in Fair Classification
M. Haeri, K. Zweig
2020 IEEE Symposium Series on Computational Intelligence (SSCI), December 2020
DOI: 10.1109/SSCI47803.2020.9308585
In many countries, it is illegal to make certain decisions based on sensitive attributes such as gender or race. This is because, historically, sensitive attributes were exploited to abuse the rights of individuals, leading to unfair decisions. This view has been extended to algorithmic decision-making systems (ADMs): like humans, ADMs should not use sensitive attributes as input. We reject this extension of the law from humans to machines, since, unlike humans, algorithms are explicit in their decisions, and the fairness of a decision can be studied independently of its inputs. The main purpose of this paper is to study and discuss the importance of using sensitive attributes in fair classification systems. Specifically, we suggest two statistical tests on the training dataset to evaluate whether using sensitive attributes may affect the quality and fairness of prospective classification algorithms. These statistical tests compare the distribution and data complexity of the training dataset between groups that share the same value of a sensitive attribute (e.g., men vs. women). We evaluated our fairness tests on several datasets. The results show that removing sensitive attributes may decrease the fairness of ADMs. We confirmed this by designing and implementing simple classifiers on each dataset, with and without the sensitive attributes. Therefore, the use of sensitive attributes must be evaluated per dataset and algorithm; ignoring them blindly may result in unfair ADMs.
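The abstract describes statistical tests that compare feature distributions between groups defined by a sensitive attribute. A minimal sketch of that idea, using a two-sample Kolmogorov-Smirnov statistic on synthetic data (the paper's exact tests, datasets, and thresholds are not given in the abstract, so every name and number below is an illustrative assumption):

```python
# Illustrative sketch (not the paper's exact procedure): test whether a
# feature is distributed differently across two sensitive-attribute groups
# using a two-sample Kolmogorov-Smirnov statistic, implemented from scratch.
import bisect
import math
import random

def ks_statistic(sample_a, sample_b):
    """Largest gap between the two empirical CDFs (two-sample KS statistic)."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for v in a + b:
        cdf_a = bisect.bisect_right(a, v) / len(a)
        cdf_b = bisect.bisect_right(b, v) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

random.seed(0)

# Synthetic training data: a single feature whose distribution is shifted
# between the two groups (e.g., men vs. women).
group_0 = [random.gauss(0.0, 1.0) for _ in range(500)]
group_1 = [random.gauss(0.5, 1.0) for _ in range(500)]

d = ks_statistic(group_0, group_1)

# Approximate 5% critical value for equal sample sizes n = m = 500.
n = m = 500
critical = 1.358 * math.sqrt((n + m) / (n * m))

print(f"KS statistic = {d:.3f}, 5% critical value = {critical:.3f}")
if d > critical:
    print("Feature distributions differ between the groups; dropping the "
          "sensitive attribute would hide this structure from a classifier.")
```

When the statistic exceeds the critical value, the groups' feature distributions differ, which is the situation in which, per the abstract, blindly removing the sensitive attribute may reduce a classifier's fairness.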