To be competitive or not to be competitive: How performance goals shape human-AI and human-human collaboration

Spatola Nicolas
{"title":"竞争还是不竞争:绩效目标如何塑造人类-人工智能和人与人之间的合作","authors":"Spatola Nicolas","doi":"10.1016/j.chbah.2025.100169","DOIUrl":null,"url":null,"abstract":"<div><div>Due to generative AI, and particularly algorithms using large language models, people's use of algorithms as recommendation tools is increasing at an unprecedented pace. While these tools are used in both private and work contexts, less is known about how the motivational context surrounding algorithm use impacts reliance patterns. This research examined how competitive versus non-performance goals affect adherence to algorithmic versus human recommendation. In Experiment 1, participants completed Raven's Matrices with optional algorithm assistance. Framing the task as a competitive test increased reliance on the algorithm compared to a control condition. This effect was mediated by heightened perceived usefulness but not accuracy. Experiment 2 introduced human assistance alongside the algorithm assistance from Experiment 1. Performance (compared to control) goals increased reliance on the algorithm over peer assistance by selectively enhancing the perceived usefulness of the algorithm versus human assistance. These results demonstrate how setting goals may influence the preference to rely on algorithmic or human assistance and particularly how performance goal contexts catalyze a situation in which participants are more prone to rely on algorithms compared to peer recommendation. These results are discussed with regard to social goals and social cognition in competitive settings with the aim of elucidating how motivational framing shapes human-AI collaborative dynamics, informing responsible system design.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100169"},"PeriodicalIF":0.0000,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"To Be competitive or not to be competitive: How performance goals shape human-AI and human-human collaboration\",\"authors\":\"Spatola Nicolas\",\"doi\":\"10.1016/j.chbah.2025.100169\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Due to generative AI, and particularly algorithms using large language models, people's use of algorithms as recommendation tools is increasing at an unprecedented pace. While these tools are used in both private and work contexts, less is known about how the motivational context surrounding algorithm use impacts reliance patterns. This research examined how competitive versus non-performance goals affect adherence to algorithmic versus human recommendation. In Experiment 1, participants completed Raven's Matrices with optional algorithm assistance. Framing the task as a competitive test increased reliance on the algorithm compared to a control condition. This effect was mediated by heightened perceived usefulness but not accuracy. Experiment 2 introduced human assistance alongside the algorithm assistance from Experiment 1. Performance (compared to control) goals increased reliance on the algorithm over peer assistance by selectively enhancing the perceived usefulness of the algorithm versus human assistance. 
These results demonstrate how setting goals may influence the preference to rely on algorithmic or human assistance and particularly how performance goal contexts catalyze a situation in which participants are more prone to rely on algorithms compared to peer recommendation. These results are discussed with regard to social goals and social cognition in competitive settings with the aim of elucidating how motivational framing shapes human-AI collaborative dynamics, informing responsible system design.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"5 \",\"pages\":\"Article 100169\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882125000532\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000532","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Due to generative AI, and particularly algorithms using large language models, people's use of algorithms as recommendation tools is increasing at an unprecedented pace. While these tools are used in both private and work contexts, less is known about how the motivational context surrounding algorithm use impacts reliance patterns. This research examined how competitive versus non-performance goals affect adherence to algorithmic versus human recommendation. In Experiment 1, participants completed Raven's Matrices with optional algorithm assistance. Framing the task as a competitive test increased reliance on the algorithm compared to a control condition. This effect was mediated by heightened perceived usefulness but not accuracy. Experiment 2 introduced human assistance alongside the algorithm assistance from Experiment 1. Performance (compared to control) goals increased reliance on the algorithm over peer assistance by selectively enhancing the perceived usefulness of the algorithm versus human assistance. These results demonstrate how setting goals may influence the preference to rely on algorithmic or human assistance and particularly how performance goal contexts catalyze a situation in which participants are more prone to rely on algorithms compared to peer recommendation. These results are discussed with regard to social goals and social cognition in competitive settings with the aim of elucidating how motivational framing shapes human-AI collaborative dynamics, informing responsible system design.
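The abstract reports a mediation result: competitive framing increased reliance on the algorithm, and this effect was carried by heightened perceived usefulness rather than perceived accuracy. As a rough illustration of how such an indirect effect is commonly tested, the sketch below runs a percentile-bootstrap mediation analysis on simulated data. This is a minimal sketch, not the paper's analysis; the variable names, effect sizes, and data are hypothetical.

# Illustrative sketch (hypothetical data, not the paper's analysis): bootstrap test of
# whether perceived usefulness mediates the effect of goal framing on algorithm reliance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
goal = rng.integers(0, 2, n)                                     # 0 = control, 1 = competitive framing
usefulness = 0.5 * goal + rng.normal(0, 1, n)                    # hypothetical mediator
reliance = 0.4 * usefulness + 0.1 * goal + rng.normal(0, 1, n)   # hypothetical outcome

def indirect_effect(goal, usefulness, reliance):
    """Product-of-paths estimate: a (goal -> mediator) times b (mediator -> outcome, controlling for goal)."""
    a = sm.OLS(usefulness, sm.add_constant(goal)).fit().params[1]
    b = sm.OLS(reliance, sm.add_constant(np.column_stack([usefulness, goal]))).fit().params[1]
    return a * b

# Percentile bootstrap for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(goal[idx], usefulness[idx], reliance[idx]))
ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(goal, usefulness, reliance):.3f}, 95% CI = {ci}")

A confidence interval that excludes zero would support the mediation claim; the same product-of-paths logic extends to the second experiment's comparison of algorithmic versus human assistance.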