Ain’t blaming you: Delegation of financial decisions to humans and algorithms

Zilia Ismagilova, Matteo Ploner
{"title":"不怪你:将财务决策委托给人类和算法","authors":"Zilia Ismagilova ,&nbsp;Matteo Ploner","doi":"10.1016/j.chbah.2025.100147","DOIUrl":null,"url":null,"abstract":"<div><div>This article investigates the tendency to prioritize outcomes when evaluating decision-making processes, particularly in situations where choices are assigned to either a human or an algorithm. In our experiment, a Principal delegates a risky financial decision to an Agent, who can choose to act independently or to use an algorithm. The Principal then rewards or penalizes the Agent based on investment performance, while we manipulate the Principal’s knowledge of the outcome during the evaluation. Our results confirm a significant outcome bias, indicating that the assessment of decision effectiveness remains heavily influenced by results, whether the decision is made by the Agent or delegated to an algorithm. Furthermore, the Agent’s reliance on the algorithm and the level of investment risk do not change depending on whether rewards or penalties are decided before or after the outcome is known.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100147"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ain’t blaming you: Delegation of financial decisions to humans and algorithms\",\"authors\":\"Zilia Ismagilova ,&nbsp;Matteo Ploner\",\"doi\":\"10.1016/j.chbah.2025.100147\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This article investigates the tendency to prioritize outcomes when evaluating decision-making processes, particularly in situations where choices are assigned to either a human or an algorithm. In our experiment, a Principal delegates a risky financial decision to an Agent, who can choose to act independently or to use an algorithm. The Principal then rewards or penalizes the Agent based on investment performance, while we manipulate the Principal’s knowledge of the outcome during the evaluation. Our results confirm a significant outcome bias, indicating that the assessment of decision effectiveness remains heavily influenced by results, whether the decision is made by the Agent or delegated to an algorithm. 
Furthermore, the Agent’s reliance on the algorithm and the level of investment risk do not change depending on whether rewards or penalties are decided before or after the outcome is known.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"4 \",\"pages\":\"Article 100147\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882125000313\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000313","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This article investigates the tendency to prioritize outcomes when evaluating decision-making processes, particularly in situations where choices are assigned to either a human or an algorithm. In our experiment, a Principal delegates a risky financial decision to an Agent, who can choose to act independently or to use an algorithm. The Principal then rewards or penalizes the Agent based on investment performance, while we manipulate the Principal’s knowledge of the outcome during the evaluation. Our results confirm a significant outcome bias, indicating that the assessment of decision effectiveness remains heavily influenced by results, whether the decision is made by the Agent or delegated to an algorithm. Furthermore, the Agent’s reliance on the algorithm and the level of investment risk do not change depending on whether rewards or penalties are decided before or after the outcome is known.
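The delegation setup described above can be pictured with a toy simulation. The sketch below is illustrative only: the stake size, payoff rule, success probability, and the Principal's evaluation rule are assumptions made for this example, not the paper's actual experimental parameters; it is included simply to show how an outcome-driven evaluation differs depending on whether the Principal knows the result.

```python
import random

# Illustrative sketch of the Principal-Agent delegation game.
# All numeric parameters below are assumptions for the example,
# not the design reported in the article.

def run_round(use_algorithm: bool, outcome_known_to_principal: bool) -> dict:
    """Simulate one round of the risky investment task."""
    stake = 100
    # The Agent (or the algorithm it relies on) chooses the level of investment risk.
    invested = random.uniform(0, stake)
    success = random.random() < 0.5              # risky asset either pays off or fails
    payoff = invested * 2 if success else 0      # doubled on success, lost on failure

    # The Principal rewards or penalizes the Agent. Under outcome bias,
    # an ex-post evaluation tracks the realized payoff; an ex-ante
    # evaluation is made before the outcome is revealed.
    if outcome_known_to_principal:
        evaluation = 1 if payoff > invested else -1   # outcome-driven reward/penalty
    else:
        evaluation = 0                                # neutral, outcome still unknown

    return {
        "delegated_to_algorithm": use_algorithm,
        "invested": round(invested, 2),
        "payoff": round(payoff, 2),
        "principal_evaluation": evaluation,
    }

if __name__ == "__main__":
    random.seed(1)
    print(run_round(use_algorithm=True, outcome_known_to_principal=True))
    print(run_round(use_algorithm=False, outcome_known_to_principal=False))
```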