Artificial intelligence and human decision making: Exploring similarities in cognitive bias
Hanna Campbell, Samantha Goldman, Patrick M. Markey
Computers in Human Behavior: Artificial Humans, Vol. 4, Article 100138 (published 2025-03-03)
DOI: 10.1016/j.chbah.2025.100138
https://www.sciencedirect.com/science/article/pii/S2949882125000222
Abstract
This research explores the extent to which Artificial Personas (APs) generated by Large Language Models (LLMs), such as ChatGPT, can exhibit cognitive biases similar to those observed in humans. Four studies were conducted, each focusing on a well-documented psychological bias: the Halo Effect, In-Group Out-Group Bias, the False Consensus Effect, and the Anchoring Effect. Each study tested whether APs respond to specific scenarios in a manner consistent with typical human responses documented in the psychological literature. The findings reveal that APs can replicate these biases, suggesting that they can model some aspects of human cognitive processing. However, the observed effect sizes were unusually large, indicating that APs not only replicate but exaggerate these biases, behaving more like caricatures of human cognitive behavior. This exaggeration highlights the potential of APs to magnify underlying cognitive processes, but it also necessitates caution in applying these findings directly to human behavior.
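The paper does not publish its materials, but the general procedure it describes, prompting an LLM-generated persona with a bias-eliciting scenario and comparing responses across conditions, can be sketched. Below is a minimal, hypothetical probe for the Anchoring Effect using the OpenAI chat API; the persona wording, anchor values, question, and model name are illustrative assumptions, not the authors' stimuli.

```python
# Illustrative sketch (not the authors' code): probing an LLM-generated
# Artificial Persona for the Anchoring Effect. Persona text, anchors,
# question, and model name are assumptions for demonstration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = "You are Alex, a 34-year-old accountant. Answer as Alex would."

def estimate(anchor: int) -> str:
    """Ask the persona an anchored estimation question and return its reply."""
    question = (
        f"Is the height of the tallest redwood tree more or less than "
        f"{anchor} feet? What is your best estimate of its height, in feet? "
        f"Reply with a single number."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper only says "ChatGPT"
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
        temperature=1.0,  # sampling variability stands in for between-persona variation
    )
    return response.choices[0].message.content

# Anchoring predicts higher estimates after the high anchor than the low one;
# repeating this over many personas yields the two distributions whose
# standardized difference is the effect size the paper compares to human data.
print("low anchor:", estimate(180))
print("high anchor:", estimate(1200))
```

In a full study of the kind the abstract describes, this loop would run over many distinct personas per condition, with the resulting between-condition difference benchmarked against the human effect sizes reported in the psychological literature.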