Training and oversight of algorithms in social decision-making: Algorithms with prescribed selfish defaults breed selfish decisions

IF 8.9 · CAS Tier 1 (Psychology) · JCR Q1 (Psychology, Experimental)
Computers in Human Behavior · Pub Date: 2026-06-01 · Epub Date: 2026-01-20 · DOI: 10.1016/j.chb.2026.108924
Terence D. Dores Cruz, Mateus A.M. de Lucena
{"title":"Training and oversight of algorithms in social decision-making: Algorithms with prescribed selfish defaults breed selfish decisions","authors":"Terence D. Dores Cruz ,&nbsp;Mateus A.M. de Lucena","doi":"10.1016/j.chb.2026.108924","DOIUrl":null,"url":null,"abstract":"<div><div>Human social preferences increasingly shape oversight or training data for Artificial Intelligence (AI) social decisions that affect human–human interactions. We test how algorithms with and without prescribed social preferences shape social decision-making and delegation. In an incentivised online experiment (n = 1290), participants completed a Social Value Orientation (SVO) measure as input to a decision-making algorithm, revealing their preferences for outcomes favouring oneself or an anonymous other. We manipulated whether participants (1) provided training data to an algorithm without prescribed preferences by answering the SVO without defaults or (2) oversaw algorithms with prescribed preferences by including proself/prosocial pre-selected defaults for each item. When decisions involved an algorithm, defaults were labelled as algorithmic; in a control condition, identical defaults were unlabelled. Participants’ social preferences were not significantly impacted by providing input to an algorithm without prescribed preferences (vs no defaults) nor by oversight of the algorithm with prescribed prosocial preferences (vs identical unlabelled defaults and vs the algorithm without prescribed preferences). Only providing oversight of the algorithm with prescribed proself preferences resulted in more selfish social preferences (vs the algorithm without prescribed preferences and vs the algorithm with prescribed prosocial preferences), even though participants perceived feeling less influenced by proself than prosocial defaults. Most participants delegated a second social decision-making task to the algorithm they encountered. These findings tentatively suggest that human-in-the-loop oversight, where humans can alter algorithmic suggestions, might alone fall short to address algorithmic biases, as individuals acted more selfishly when exposed to pre-existing selfish tendencies in algorithms.</div></div>","PeriodicalId":48471,"journal":{"name":"Computers in Human Behavior","volume":"179 ","pages":"Article 108924"},"PeriodicalIF":8.9000,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S074756322600021X","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2026/1/20 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Human social preferences increasingly shape oversight or training data for Artificial Intelligence (AI) social decisions that affect human–human interactions. We test how algorithms with and without prescribed social preferences shape social decision-making and delegation. In an incentivised online experiment (n = 1290), participants completed a Social Value Orientation (SVO) measure as input to a decision-making algorithm, revealing their preferences for outcomes favouring oneself or an anonymous other. We manipulated whether participants (1) provided training data to an algorithm without prescribed preferences by answering the SVO without defaults or (2) oversaw algorithms with prescribed preferences via proself or prosocial pre-selected defaults for each item. When decisions involved an algorithm, defaults were labelled as algorithmic; in a control condition, identical defaults were unlabelled. Participants’ social preferences were not significantly affected by providing input to an algorithm without prescribed preferences (vs no defaults), nor by oversight of the algorithm with prescribed prosocial preferences (vs identical unlabelled defaults and vs the algorithm without prescribed preferences). Only oversight of the algorithm with prescribed proself preferences resulted in more selfish social preferences (vs the algorithm without prescribed preferences and vs the algorithm with prescribed prosocial preferences), even though participants reported feeling less influenced by proself than by prosocial defaults. Most participants delegated a second social decision-making task to the algorithm they encountered. These findings tentatively suggest that human-in-the-loop oversight, in which humans can alter algorithmic suggestions, may on its own fall short of addressing algorithmic biases, as individuals acted more selfishly when exposed to pre-existing selfish tendencies in algorithms.
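The SVO measure referenced above is conventionally scored as an angle in the self–other payoff plane, which is what lets the authors speak of preferences shifting in a "proself" or "prosocial" direction. As a minimal sketch of that scoring, assuming the standard six-item SVO Slider Measure (Murphy, Ackermann, & Handgraaf, 2011) rather than whatever exact instrument the paper used, and with invented allocation values:

```python
import math

# Hypothetical allocations: one (payoff to self, payoff to other) pair
# chosen on each of the six primary items of the SVO Slider Measure.
# These values are invented for illustration only.
allocations = [(85, 85), (85, 76), (79, 68), (96, 69), (94, 80), (93, 79)]

def svo_angle(pairs):
    """Return the SVO angle in degrees for a list of (self, other) payoffs.

    Following Murphy, Ackermann, and Handgraaf (2011), mean allocations
    are centred at the scale midpoint of 50, and the angle is the
    arctangent of the mean payoff to other over the mean payoff to self.
    """
    mean_self = sum(s for s, _ in pairs) / len(pairs)
    mean_other = sum(o for _, o in pairs) / len(pairs)
    return math.degrees(math.atan2(mean_other - 50.0, mean_self - 50.0))

angle = svo_angle(allocations)
print(f"SVO angle: {angle:.2f} degrees")  # ~34 degrees here: prosocial
```

Under the standard classification for this measure, angles above 57.15° count as altruistic, 22.45° to 57.15° as prosocial, −12.04° to 22.45° as individualistic (proself), and below −12.04° as competitive; a "more selfish" shift in the abstract corresponds to a lower angle.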
Source journal: Computers in Human Behavior
CiteScore: 19.10 · Self-citation rate: 4.00% · Articles published: 381 · Review time: 40 days
Journal description: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical work, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. The journal focuses on human interactions with computers, treating the computer as a medium through which human behaviours are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.