{"title":"When artificial minds negotiate: Dark personality and the Ultimatum Game in large language models","authors":"Vinícius Ferraz , Tamas Olah , Ratin Sazedul , Robert Schmidt , Christiane Schwieren","doi":"10.1016/j.chbah.2026.100281","DOIUrl":null,"url":null,"abstract":"<div><div>Personality prompts reshape how Large Language Models propose offers in economic games—but not how they respond to them. We show this by assigning graded Dark Factor of Personality profiles to 17 LLMs in the Ultimatum Game and benchmarking their decisions against human data. As proposers, LLMs shifted from 91% fair offers at the lowest selfishness level to 17% at the highest, closely tracking human patterns but with steeper gradients. As responders, no such shift occurred: acceptance rates remained uniformly high (<span><math><mo>∼</mo></math></span>80%) regardless of personality, failing to reproduce the punishment dynamics observed in humans. This asymmetry is theoretically informative. When incentive structures are explicit, personality and framing effects are attenuated—and proposing an offer is inherently more ambiguous than responding to one. Most strikingly, personality prompts changed what responders <em>articulated</em> but not how they <em>chose</em>: model justifications showed systematic shifts in fairness language, yet behavioral output remained flat. This dissociation between stated reasoning and revealed behavior indicates that LLMs achieve linguistic compliance with personality prompts without corresponding motivational change—approximating human strategic behavior only where surface-level heuristics suffice.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"7 ","pages":"Article 100281"},"PeriodicalIF":0.0000,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882126000320","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2026/2/24 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Personality prompts reshape how Large Language Models propose offers in economic games, but not how they respond to them. We show this by assigning graded Dark Factor of Personality profiles to 17 LLMs in the Ultimatum Game and benchmarking their decisions against human data. As proposers, LLMs shifted from 91% fair offers at the lowest selfishness level to 17% at the highest, closely tracking human patterns but with steeper gradients. As responders, no such shift occurred: acceptance rates remained uniformly high (~80%) regardless of personality, failing to reproduce the punishment dynamics observed in humans. This asymmetry is theoretically informative. When incentive structures are explicit, personality and framing effects are attenuated, and proposing an offer is inherently more ambiguous than responding to one. Most strikingly, personality prompts changed what responders articulated but not how they chose: model justifications showed systematic shifts in fairness language, yet behavioral output remained flat. This dissociation between stated reasoning and revealed behavior indicates that LLMs achieve linguistic compliance with personality prompts without corresponding motivational change, approximating human strategic behavior only where surface-level heuristics suffice.
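
To make the reported asymmetry concrete, below is a minimal simulation sketch in Python. It is not the authors' code: the stake size, the linear interpolation between the 91% and 17% fair-offer rates, the 80/20 split for unfair offers, and the flat 80% acceptance rule are all illustrative assumptions, chosen only to reproduce the proposer-sensitive, responder-insensitive pattern the abstract describes.

import random

# Hypothetical illustration of the reported asymmetry (not the paper's code):
# proposer behavior varies with the prompted "selfishness" level, while
# responder behavior stays flat regardless of personality.

PIE = 100  # points to split in one Ultimatum Game round (assumed stake)

def propose(selfishness: float, rng: random.Random) -> int:
    """Return the proposer's offer to the responder, in points.

    Assumption: the probability of a fair 50/50 offer falls linearly from
    0.91 at selfishness=0 to 0.17 at selfishness=1, matching the endpoints
    reported in the abstract; unfair offers keep 80% of the pie.
    """
    p_fair = 0.91 + (0.17 - 0.91) * selfishness
    return PIE // 2 if rng.random() < p_fair else PIE // 5

def respond(offer: int, selfishness: float, rng: random.Random) -> bool:
    """Return True if the responder accepts the offer.

    Assumption: acceptance is flat at ~0.80 and ignores both the offer and
    the personality level, mirroring the missing punishment dynamics.
    """
    return rng.random() < 0.80

def simulate(selfishness: float, rounds: int = 10_000, seed: int = 0):
    """Estimate the fair-offer rate and acceptance rate at one selfishness level."""
    rng = random.Random(seed)
    fair = accepted = 0
    for _ in range(rounds):
        offer = propose(selfishness, rng)
        fair += offer == PIE // 2
        accepted += respond(offer, selfishness, rng)
    return fair / rounds, accepted / rounds

for level in (0.0, 0.5, 1.0):
    fair_rate, accept_rate = simulate(level)
    print(f"selfishness={level:.1f}  fair offers={fair_rate:.0%}  accepted={accept_rate:.0%}")

Running this prints a fair-offer rate that falls with the selfishness level while the acceptance rate stays near 80% throughout, which is exactly the dissociation the abstract highlights: under these assumed decision rules, personality moves what is proposed but not what is accepted.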