{"title":"Training and oversight of algorithms in social decision-making: Algorithms with prescribed selfish defaults breed selfish decisions","authors":"Terence D. Dores Cruz , Mateus A.M. de Lucena","doi":"10.1016/j.chb.2026.108924","DOIUrl":null,"url":null,"abstract":"<div><div>Human social preferences increasingly shape oversight or training data for Artificial Intelligence (AI) social decisions that affect human–human interactions. We test how algorithms with and without prescribed social preferences shape social decision-making and delegation. In an incentivised online experiment (n = 1290), participants completed a Social Value Orientation (SVO) measure as input to a decision-making algorithm, revealing their preferences for outcomes favouring oneself or an anonymous other. We manipulated whether participants (1) provided training data to an algorithm without prescribed preferences by answering the SVO without defaults or (2) oversaw algorithms with prescribed preferences by including proself/prosocial pre-selected defaults for each item. When decisions involved an algorithm, defaults were labelled as algorithmic; in a control condition, identical defaults were unlabelled. Participants’ social preferences were not significantly impacted by providing input to an algorithm without prescribed preferences (vs no defaults) nor by oversight of the algorithm with prescribed prosocial preferences (vs identical unlabelled defaults and vs the algorithm without prescribed preferences). Only providing oversight of the algorithm with prescribed proself preferences resulted in more selfish social preferences (vs the algorithm without prescribed preferences and vs the algorithm with prescribed prosocial preferences), even though participants perceived feeling less influenced by proself than prosocial defaults. Most participants delegated a second social decision-making task to the algorithm they encountered. These findings tentatively suggest that human-in-the-loop oversight, where humans can alter algorithmic suggestions, might alone fall short to address algorithmic biases, as individuals acted more selfishly when exposed to pre-existing selfish tendencies in algorithms.</div></div>","PeriodicalId":48471,"journal":{"name":"Computers in Human Behavior","volume":"179 ","pages":"Article 108924"},"PeriodicalIF":8.9000,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S074756322600021X","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2026/1/20 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0
Abstract
Human social preferences increasingly shape the oversight of, and training data for, Artificial Intelligence (AI) systems that make social decisions affecting human–human interactions. We test how algorithms with and without prescribed social preferences shape social decision-making and delegation. In an incentivised online experiment (n = 1290), participants completed a Social Value Orientation (SVO) measure as input to a decision-making algorithm, revealing their preferences for outcomes favouring themselves or an anonymous other. We manipulated whether participants (1) provided training data to an algorithm without prescribed preferences by completing the SVO without defaults, or (2) oversaw algorithms with prescribed preferences, in which each SVO item carried a proself or prosocial pre-selected default. When decisions involved an algorithm, defaults were labelled as algorithmic; in a control condition, identical defaults were unlabelled. Participants' social preferences were not significantly affected by providing input to an algorithm without prescribed preferences (vs no defaults), nor by overseeing the algorithm with prescribed prosocial preferences (vs identical unlabelled defaults and vs the algorithm without prescribed preferences). Only overseeing the algorithm with prescribed proself preferences resulted in more selfish social preferences (vs the algorithm without prescribed preferences and vs the algorithm with prescribed prosocial preferences), even though participants reported feeling less influenced by proself than by prosocial defaults. Most participants delegated a second social decision-making task to the algorithm they had encountered. These findings tentatively suggest that human-in-the-loop oversight, in which humans can alter algorithmic suggestions, may on its own fall short of addressing algorithmic biases, as individuals acted more selfishly when exposed to pre-existing selfish tendencies in algorithms.
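For context on what the SVO measure quantifies, below is a minimal sketch of the standard SVO Slider Measure scoring (Murphy, Ackermann, & Handgraaf, 2011). The abstract does not state which SVO instrument or scoring the authors used, so the scoring rule, the example allocations, and the function names here are illustrative assumptions: the idea is that a set of (self, other) point allocations is reduced to a single angle, where larger angles indicate more prosocial preferences and smaller or negative angles more proself preferences.

import math

def svo_angle(allocations):
    """Compute an SVO angle (degrees) from (self, other) allocation pairs,
    following the standard SVO Slider Measure scoring.
    Larger angles = more prosocial; smaller/negative = more proself."""
    mean_self = sum(s for s, _ in allocations) / len(allocations)
    mean_other = sum(o for _, o in allocations) / len(allocations)
    # Subtract 50 to centre allocations at the measure's midpoint.
    return math.degrees(math.atan2(mean_other - 50, mean_self - 50))

def svo_category(angle):
    """Map an SVO angle to the conventional category boundaries."""
    if angle > 57.15:
        return "altruistic"
    if angle > 22.45:
        return "prosocial"
    if angle > -12.04:
        return "individualistic"
    return "competitive"

# Hypothetical example: six mildly prosocial allocations.
example = [(85, 85), (85, 76), (79, 68), (85, 85), (85, 85), (70, 100)]
angle = svo_angle(example)
print(f"SVO angle: {angle:.1f} degrees -> {svo_category(angle)}")  # ~46.5 -> prosocial

Under a scoring of this kind, the experiment's proself and prosocial pre-selected defaults would correspond to allocation patterns that pull the resulting angle toward the individualistic or prosocial ranges, respectively.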
Journal introduction:
Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even if they have limited knowledge of computers.