{"title":"Attributions of intent and moral responsibility to AI agents","authors":"Reem Ayad, Jason E. Plaks","doi":"10.1016/j.chbah.2024.100107","DOIUrl":null,"url":null,"abstract":"<div><div>Moral transactions are increasingly infused with decision input from AI agents. To what extent do observers believe that AI agents are responsible for their own actions? How do these AI agents' socio-psychological features affect observers' judgment of them when they transgress? With full factorial, between-participant designs, we presented participants with vignettes in which an AI agent contributed to a negative outcome either intentionally or unintentionally. We independently manipulated four features of the agent's mind: its adherence to moral values, autonomy, emotional self-awareness, and social connectedness. In Study 1 (<em>N</em> = 2012), AI agents that intentionally contributed to a negative outcome consistently received harsher judgments than AI agents that contributed unintentionally. For unintentional actions, socially connected AI agents received less harsh judgments than socially disconnected AI agents. In Studies 2a-c (<em>N</em> = 1507), these judgments were explained by ratings of the socially connected AI agent's ‘mind’ as less distinct from the mind of its programmers (Study 2b) and that this kind of agent also possessed less free will (Study 2c). We discuss the implications of these findings in advancing the field's understanding of the moral psychology—and design—of AI agents.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100107"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882124000677","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Moral transactions are increasingly infused with decision input from AI agents. To what extent do observers believe that AI agents are responsible for their own actions? How do these AI agents' socio-psychological features affect observers' judgments of them when they transgress? Using full factorial, between-participant designs, we presented participants with vignettes in which an AI agent contributed to a negative outcome either intentionally or unintentionally. We independently manipulated four features of the agent's mind: its adherence to moral values, autonomy, emotional self-awareness, and social connectedness. In Study 1 (N = 2012), AI agents that intentionally contributed to a negative outcome consistently received harsher judgments than AI agents that contributed unintentionally. For unintentional actions, socially connected AI agents received less harsh judgments than socially disconnected AI agents. In Studies 2a-c (N = 1507), these judgments were explained by ratings indicating that the socially connected AI agent's ‘mind’ was less distinct from the minds of its programmers (Study 2b) and that this kind of agent possessed less free will (Study 2c). We discuss the implications of these findings for advancing the field's understanding of the moral psychology—and design—of AI agents.