Making moral decisions with artificial agents as advisors. A fNIRS study

Eve Florianne Fabre, Damien Mouratille, Vincent Bonnemains, Grazia Pia Palmiotti, Mickael Causse
{"title":"Making moral decisions with artificial agents as advisors. A fNIRS study","authors":"Eve Florianne Fabre ,&nbsp;Damien Mouratille ,&nbsp;Vincent Bonnemains ,&nbsp;Grazia Pia Palmiotti ,&nbsp;Mickael Causse","doi":"10.1016/j.chbah.2024.100096","DOIUrl":null,"url":null,"abstract":"<div><div>Artificial Intelligence (AI) is on the verge of impacting every domain of our lives. It is increasingly being used as an advisor to assist in making decisions. The present study aimed at investigating the influence of moral arguments provided by AI-advisors (i.e., decision aid tool) on human moral decision-making and the associated neural correlates. Participants were presented with sacrificial moral dilemmas and had to make moral decisions either by themselves (i.e., baseline run) or with AI-advisors that provided utilitarian or deontological arguments (i.e., AI-advised run), while their brain activity was measured using an <em>f</em>NIRS device. Overall, AI-advisors significantly influenced participants. Longer response times and a decrease in right dorsolateral prefrontal cortex activity were observed in response to deontological arguments than to utilitarian arguments. Being provided with deontological arguments by machines appears to have led to a decreased appraisal of the affective response to the dilemmas. This resulted in a reduced level of utilitarianism, supposedly in an attempt to avoid behaving in a less cold-blooded way than machines and preserve their (self-)image. Taken together, these results suggest that motivational power can led to a voluntary up- and down-regulation of affective processes along moral decision-making.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100096"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882124000562","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Artificial Intelligence (AI) is on the verge of impacting every domain of our lives. It is increasingly being used as an advisor to assist in making decisions. The present study aimed to investigate the influence of moral arguments provided by AI-advisors (i.e., decision aid tools) on human moral decision-making and the associated neural correlates. Participants were presented with sacrificial moral dilemmas and had to make moral decisions either by themselves (i.e., baseline run) or with AI-advisors that provided utilitarian or deontological arguments (i.e., AI-advised run), while their brain activity was measured using an fNIRS device. Overall, AI-advisors significantly influenced participants. Longer response times and a decrease in right dorsolateral prefrontal cortex activity were observed in response to deontological arguments compared to utilitarian arguments. Being provided with deontological arguments by machines appears to have led to a decreased appraisal of the affective response to the dilemmas. This resulted in a reduced level of utilitarianism, supposedly in an attempt to avoid behaving in a less cold-blooded way than machines and to preserve their (self-)image. Taken together, these results suggest that motivational power can lead to a voluntary up- and down-regulation of affective processes during moral decision-making.