{"title":"人工智能作为道德掩护:算法偏见如何利用心理机制使社会不平等永久化","authors":"Islam Borinca","doi":"10.1111/asap.70031","DOIUrl":null,"url":null,"abstract":"<p>Algorithmic decision-making systems are increasingly shaping critical social outcomes (e.g., hiring, lending, criminal justice, healthcare), yet technical approaches to bias mitigation ignore crucial psychological mechanisms that enable discriminatory use. To address this gap, I integrate motivated reasoning, system justification, and moral disengagement theories to argue that AI systems may function as “moral cover,” allowing users to perpetuate inequality while maintaining beliefs in their own objectivity. Users often demonstrate “selective adherence,” following algorithmic advice when it confirms stereotypes while dismissing counter-stereotypical outputs. System justification motives lead people to defend discriminatory algorithmic outcomes as legitimate, “data-driven” decisions. Moral disengagement mechanisms (including responsibility displacement, euphemistic labeling, and advantageous comparison) can enable discrimination while preserving moral self-regard. 
Finally, I argue that understanding AI bias as fundamentally psychological rather than merely technical demands interventions addressing these underlying psychological processes alongside algorithmic improvements.</p>","PeriodicalId":46799,"journal":{"name":"Analyses of Social Issues and Public Policy","volume":"25 3","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2025-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://spssi.onlinelibrary.wiley.com/doi/epdf/10.1111/asap.70031","citationCount":"0","resultStr":"{\"title\":\"AI as moral cover: How algorithmic bias exploits psychological mechanisms to perpetuate social inequality\",\"authors\":\"Islam Borinca\",\"doi\":\"10.1111/asap.70031\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Algorithmic decision-making systems are increasingly shaping critical social outcomes (e.g., hiring, lending, criminal justice, healthcare), yet technical approaches to bias mitigation ignore crucial psychological mechanisms that enable discriminatory use. To address this gap, I integrate motivated reasoning, system justification, and moral disengagement theories to argue that AI systems may function as “moral cover,” allowing users to perpetuate inequality while maintaining beliefs in their own objectivity. Users often demonstrate “selective adherence,” following algorithmic advice when it confirms stereotypes while dismissing counter-stereotypical outputs. System justification motives lead people to defend discriminatory algorithmic outcomes as legitimate, “data-driven” decisions. Moral disengagement mechanisms (including responsibility displacement, euphemistic labeling, and advantageous comparison) can enable discrimination while preserving moral self-regard. 
Finally, I argue that understanding AI bias as fundamentally psychological rather than merely technical demands interventions addressing these underlying psychological processes alongside algorithmic improvements.</p>\",\"PeriodicalId\":46799,\"journal\":{\"name\":\"Analyses of Social Issues and Public Policy\",\"volume\":\"25 3\",\"pages\":\"\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2025-09-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://spssi.onlinelibrary.wiley.com/doi/epdf/10.1111/asap.70031\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Analyses of Social Issues and Public Policy\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://spssi.onlinelibrary.wiley.com/doi/10.1111/asap.70031\",\"RegionNum\":4,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PSYCHOLOGY, SOCIAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Analyses of Social Issues and Public Policy","FirstCategoryId":"90","ListUrlMain":"https://spssi.onlinelibrary.wiley.com/doi/10.1111/asap.70031","RegionNum":4,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PSYCHOLOGY, SOCIAL","Score":null,"Total":0}
Journal overview:
Recent articles in ASAP have examined social psychological methods in the study of economic and social justice, including ageism, heterosexism, racism, sexism, status quo bias, and other forms of discrimination; social problems such as climate change, extremism, homelessness, intergroup conflict, natural disasters, poverty, and terrorism; and social ideals such as democracy, empowerment, equality, health, and trust.