{"title":"公平计算机辅助决策中的悖论","authors":"Andrew Morgan, R. Pass","doi":"10.1145/3306618.3314242","DOIUrl":null,"url":null,"abstract":"Computer-aided decision making--where a human decision-maker is aided by a computational classifier in making a decision--is becoming increasingly prevalent. For instance, judges in at least nine states make use of algorithmic tools meant to determine \"recidivism risk scores\" for criminal defendants in sentencing, parole, or bail decisions. A subject of much recent debate is whether such algorithmic tools are \"fair\" in the sense that they do not discriminate against certain groups (e.g., races) of people. Our main result shows that for \"non-trivial\" computer-aided decision making, either the classifier must be discriminatory, or a rational decision-maker using the output of the classifier is forced to be discriminatory. We further provide a complete characterization of situations where fair computer-aided decision making is possible.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Paradoxes in Fair Computer-Aided Decision Making\",\"authors\":\"Andrew Morgan, R. Pass\",\"doi\":\"10.1145/3306618.3314242\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Computer-aided decision making--where a human decision-maker is aided by a computational classifier in making a decision--is becoming increasingly prevalent. For instance, judges in at least nine states make use of algorithmic tools meant to determine \\\"recidivism risk scores\\\" for criminal defendants in sentencing, parole, or bail decisions. A subject of much recent debate is whether such algorithmic tools are \\\"fair\\\" in the sense that they do not discriminate against certain groups (e.g., races) of people. Our main result shows that for \\\"non-trivial\\\" computer-aided decision making, either the classifier must be discriminatory, or a rational decision-maker using the output of the classifier is forced to be discriminatory. 
We further provide a complete characterization of situations where fair computer-aided decision making is possible.\",\"PeriodicalId\":418125,\"journal\":{\"name\":\"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society\",\"volume\":\"72 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-11-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3306618.3314242\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3306618.3314242","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Computer-aided decision making--where a human decision-maker is assisted by a computational classifier--is becoming increasingly prevalent. For instance, judges in at least nine states use algorithmic tools that produce "recidivism risk scores" for criminal defendants in sentencing, parole, or bail decisions. A subject of much recent debate is whether such algorithmic tools are "fair" in the sense that they do not discriminate against certain groups of people (e.g., races). Our main result shows that for "non-trivial" computer-aided decision making, either the classifier must be discriminatory, or a rational decision-maker using the output of the classifier is forced to be discriminatory. We further provide a complete characterization of the situations in which fair computer-aided decision making is possible.
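To make the flavor of the result concrete, here is a minimal, hypothetical Python sketch (not the paper's formal model or construction): a classifier that is calibrated within each group, combined with a single rational cost-minimizing decision rule, still yields group-dependent decision rates whenever the groups' score distributions differ. The group names, score distributions, and costs below are invented purely for illustration.

```python
# Toy illustration (assumed example, not the paper's model): a classifier
# that is calibrated within each group can still force a rational,
# expected-cost-minimizing decision-maker into group-dependent outcomes.
# All groups, scores, and costs below are hypothetical.

def rational_decision(risk_score: float,
                      cost_fp: float = 1.0,
                      cost_fn: float = 2.0) -> bool:
    """Detain iff the expected cost of releasing (a missed reoffense)
    exceeds the expected cost of detaining (a needless detention)."""
    return risk_score * cost_fn > (1 - risk_score) * cost_fp

# Hypothetical populations: within each group, a score of s means a
# fraction s of people with that score reoffend (calibration), but the
# score distributions differ because the groups' base rates differ.
group_scores = {
    "group_A": [0.2] * 70 + [0.5] * 30,   # lower base rate
    "group_B": [0.2] * 30 + [0.5] * 70,   # higher base rate
}

for group, scores in group_scores.items():
    detained = sum(rational_decision(s) for s in scores)
    print(f"{group}: detention rate = {detained / len(scores):.0%}")

# Output:
#   group_A: detention rate = 30%
#   group_B: detention rate = 70%
# The decision-maker applies one threshold to everyone, yet decision
# rates differ by group--the kind of tension the paper formalizes.
```

In this toy setup the same rule (detain when the score exceeds 1/3, given the assumed costs) treats every individual identically, yet group-level detention rates diverge; avoiding that divergence would require the decision-maker to treat the same score differently across groups, i.e., to discriminate.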