Title: Identity and the limits of fair assessment
Author: Rush T. Stewart
Journal: Journal of Theoretical Politics, 34(1), pp. 415–442
Published: 2022-06-28
DOI: 10.1177/09516298221102972

Abstract: In many assessment problems—aptitude testing, hiring decisions, appraisals of the risk of recidivism, evaluation of the credibility of testimonial sources, and so on—the fair treatment of different groups of individuals is an important goal. But individuals can be legitimately grouped in many different ways. Using a framework and fairness constraints explored in research on algorithmic fairness, I show that eliminating certain forms of bias across groups for one way of classifying individuals can make it impossible to eliminate such bias across groups for another way of dividing people up. And this point generalizes if we require merely that assessments be approximately bias-free. Moreover, even if the fairness constraints are satisfied for some given partitions of the population, the constraints can fail for the coarsest common refinement, that is, the partition generated by taking intersections of the elements of these coarser partitions. This shows that these prominent fairness constraints admit the possibility of forms of intersectional bias.
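The coarsest common refinement mentioned in the abstract can be made concrete with a short sketch: given two partitions of the same population, intersect every cell of one with every cell of the other and keep the non-empty intersections. The population and groupings below are hypothetical illustrations, not data from the paper.

```python
from itertools import product

def common_refinement(partition_a, partition_b):
    """Coarsest common refinement of two partitions of the same
    population: the partition whose cells are the non-empty
    intersections of a cell from partition_a with a cell from
    partition_b."""
    cells = []
    for a, b in product(partition_a, partition_b):
        cell = a & b
        if cell:
            cells.append(cell)
    return cells

# Hypothetical population of 8 individuals, grouped two different
# ways (say, by gender and by age band).
by_gender = [{0, 1, 2, 3}, {4, 5, 6, 7}]
by_age = [{0, 1, 4, 5}, {2, 3, 6, 7}]

refinement = common_refinement(by_gender, by_age)
# Each refined cell is an intersection such as {0, 1} (first gender
# group AND first age band); the paper's point is that a fairness
# constraint can hold on each coarse partition yet fail on these
# intersectional cells.
```

Checking a fairness constraint on `refinement` rather than only on `by_gender` and `by_age` is what distinguishes an intersectional audit from two separate single-attribute audits.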
Journal Introduction:
The Journal of Theoretical Politics is an international journal; one of its principal aims is to foster the development of theory in the study of political processes. It provides a forum for original papers that make genuinely theoretical contributions to the study of politics, and it publishes rigorous analytical articles on a range of theoretical topics. In particular, it focuses on new theoretical work that is broadly accessible to social scientists and contributes to our understanding of political processes. It also includes original syntheses of recent theoretical developments in diverse fields.