Jannik Zeiser
Journal of Applied Philosophy, vol. 42, no. 3, pp. 910–928
Published 2025-02-05 · DOI: 10.1111/japp.12793
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/japp.12793
Emergent Discrimination: Should We Protect Algorithmic Groups?
Discrimination is usually thought of in terms of socially salient groups, such as race or gender. Some scholars argue that the rise of algorithmic decision-making poses a challenge to this notion. Algorithms are not bound by a social view of the world. Therefore, they may not only inherit pre-existing social biases and injustices but may also discriminate based on entirely new categories that have little or no meaning to humans at all, such as ‘being born on a Tuesday’. Should this prospect change how we theorize about discrimination, and should we protect these algorithmic groups, as some have suggested? I argue that the phenomenon is adequately described as ‘discrimination’ when a group is systematically disadvantaged. At present, we lack information about whether any algorithmic group meets this criterion, so it is difficult to protect such groups. Instead, we should prevent algorithms from disproportionately disadvantaging certain individuals, and I outline strategies for doing so.