{"title":"竞争还是不竞争:绩效目标如何塑造人类-人工智能和人与人之间的合作","authors":"Spatola Nicolas","doi":"10.1016/j.chbah.2025.100169","DOIUrl":null,"url":null,"abstract":"<div><div>Due to generative AI, and particularly algorithms using large language models, people's use of algorithms as recommendation tools is increasing at an unprecedented pace. While these tools are used in both private and work contexts, less is known about how the motivational context surrounding algorithm use impacts reliance patterns. This research examined how competitive versus non-performance goals affect adherence to algorithmic versus human recommendation. In Experiment 1, participants completed Raven's Matrices with optional algorithm assistance. Framing the task as a competitive test increased reliance on the algorithm compared to a control condition. This effect was mediated by heightened perceived usefulness but not accuracy. Experiment 2 introduced human assistance alongside the algorithm assistance from Experiment 1. Performance (compared to control) goals increased reliance on the algorithm over peer assistance by selectively enhancing the perceived usefulness of the algorithm versus human assistance. These results demonstrate how setting goals may influence the preference to rely on algorithmic or human assistance and particularly how performance goal contexts catalyze a situation in which participants are more prone to rely on algorithms compared to peer recommendation. These results are discussed with regard to social goals and social cognition in competitive settings with the aim of elucidating how motivational framing shapes human-AI collaborative dynamics, informing responsible system design.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100169"},"PeriodicalIF":0.0000,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"To Be competitive or not to be competitive: How performance goals shape human-AI and human-human collaboration\",\"authors\":\"Spatola Nicolas\",\"doi\":\"10.1016/j.chbah.2025.100169\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Due to generative AI, and particularly algorithms using large language models, people's use of algorithms as recommendation tools is increasing at an unprecedented pace. While these tools are used in both private and work contexts, less is known about how the motivational context surrounding algorithm use impacts reliance patterns. This research examined how competitive versus non-performance goals affect adherence to algorithmic versus human recommendation. In Experiment 1, participants completed Raven's Matrices with optional algorithm assistance. Framing the task as a competitive test increased reliance on the algorithm compared to a control condition. This effect was mediated by heightened perceived usefulness but not accuracy. Experiment 2 introduced human assistance alongside the algorithm assistance from Experiment 1. Performance (compared to control) goals increased reliance on the algorithm over peer assistance by selectively enhancing the perceived usefulness of the algorithm versus human assistance. 
These results demonstrate how setting goals may influence the preference to rely on algorithmic or human assistance and particularly how performance goal contexts catalyze a situation in which participants are more prone to rely on algorithms compared to peer recommendation. These results are discussed with regard to social goals and social cognition in competitive settings with the aim of elucidating how motivational framing shapes human-AI collaborative dynamics, informing responsible system design.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"5 \",\"pages\":\"Article 100169\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882125000532\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000532","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
To Be competitive or not to be competitive: How performance goals shape human-AI and human-human collaboration
Abstract: Due to generative AI, and particularly algorithms using large language models, people's use of algorithms as recommendation tools is increasing at an unprecedented pace. While these tools are used in both private and work contexts, less is known about how the motivational context surrounding algorithm use affects reliance patterns. This research examined how competitive versus non-performance goals affect adherence to algorithmic versus human recommendations. In Experiment 1, participants completed Raven's Matrices with optional algorithm assistance. Framing the task as a competitive test increased reliance on the algorithm compared to a control condition; this effect was mediated by heightened perceived usefulness, but not perceived accuracy. Experiment 2 introduced human assistance alongside the algorithm assistance from Experiment 1. Performance goals (compared to the control condition) increased reliance on the algorithm over peer assistance by selectively enhancing the perceived usefulness of the algorithm relative to human assistance. These results demonstrate how goal setting may influence the preference for algorithmic or human assistance, and in particular how performance-goal contexts create a situation in which participants are more prone to rely on algorithmic than on peer recommendations. The results are discussed with regard to social goals and social cognition in competitive settings, with the aim of elucidating how motivational framing shapes human-AI collaborative dynamics and informing responsible system design.
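The abstract's central analytic claim is that the effect of goal framing on algorithm reliance runs through perceived usefulness rather than perceived accuracy. As a reading aid only, below is a minimal sketch of a standard regression-based mediation analysis with a bootstrap confidence interval on the indirect effect. The paper does not report its analysis code; all variable names and the simulated data are hypothetical assumptions, not the authors' method.

```python
# Minimal mediation-analysis sketch (hypothetical data and variable names).
# Tests whether condition -> reliance is carried by perceived usefulness.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: condition (0 = control, 1 = performance/competitive goal),
# perceived usefulness of the algorithm (mediator), reliance on the algorithm (outcome).
condition = rng.integers(0, 2, n)
usefulness = 0.5 * condition + rng.normal(size=n)
reliance = 0.4 * usefulness + 0.1 * condition + rng.normal(size=n)

# Path a: condition -> perceived usefulness
a = sm.OLS(usefulness, sm.add_constant(condition)).fit().params[1]

# Path b and direct effect c': reliance ~ usefulness + condition
X = sm.add_constant(np.column_stack([usefulness, condition]))
fit = sm.OLS(reliance, X).fit()
b, c_prime = fit.params[1], fit.params[2]

# Indirect effect a*b with a simple percentile bootstrap CI
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_b = sm.OLS(usefulness[idx], sm.add_constant(condition[idx])).fit().params[1]
    Xb = sm.add_constant(np.column_stack([usefulness[idx], condition[idx]]))
    b_b = sm.OLS(reliance[idx], Xb).fit().params[1]
    boot.append(a_b * b_b)
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect a*b = {a * b:.3f}, 95% CI [{lo:.3f}, {hi:.3f}], direct c' = {c_prime:.3f}")
```

In this framing, a confidence interval for the usefulness pathway that excludes zero, alongside a null result for an analogous perceived-accuracy mediator, would correspond to the pattern the abstract describes.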