Intuitive judgements towards artificial intelligence verdicts of moral transgressions
Yuxin Liu, Adam Moore
British Journal of Social Psychology, 64(3), published 2025-05-31
DOI: 10.1111/bjso.12908
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/bjso.12908
Automated decision-making systems have become increasingly prevalent in morally salient service domains, introducing ethically significant consequences. In three pre-registered studies (N = 804), we experimentally investigated whether people's judgements of AI decisions are affected by the alignment between their beliefs and the politically salient context of AI deployment, over and above any general attitudes towards AI they might hold. Participants read conservative- or liberal-framed vignettes in which AI-detected statistical anomalies served as a proxy for potential human prejudice in the contexts of LGBTQ+ rights and environmental protection, and then reported their willingness to act on the AI verdicts, their trust in AI, and their perceptions of the AI's procedural and distributive fairness. Our results reveal that people's willingness to act and their judgements of trust and fairness appear to be constructed as a function of general positivity towards AI, the morally intuitive context of AI deployment, pre-existing politico-moral beliefs, and the compatibility between the latter two. The implication is that judgements of AI are shaped by both the belief-alignment effect and general AI attitudes, suggesting a degree of malleability and context dependency that challenges the potential role of AI as an effective mediator in morally complex situations.
About the journal:
The British Journal of Social Psychology publishes work from scholars based in all parts of the world, and manuscripts that present data on a wide range of populations inside and outside the UK. It publishes original papers in all areas of social psychology, including:
• social cognition
• attitudes
• group processes
• social influence
• intergroup relations
• self and identity
• nonverbal communication
• social psychological aspects of personality, affect and emotion
• language and discourse
Submissions addressing these topics from a variety of approaches and methods, both quantitative and qualitative, are welcomed. We publish papers of the following kinds:
• empirical papers that address theoretical issues;
• theoretical papers, including analyses of existing social psychological theories and presentations of theoretical innovations, extensions, or integrations;
• review papers that provide an evaluation of work within a given area of social psychology and that present proposals for further research in that area;
• methodological papers concerning issues that are particularly relevant to a wide range of social psychologists;
• an invited agenda article as the first article in the first part of every volume.
The editorial team aims to handle papers as efficiently as possible. In 2016, papers were triaged within less than a week, and the average turnaround time from receipt of the manuscript to first decision sent back to the authors was 47 days.