{"title":"习得的可信性标志的重要性","authors":"Viktoria Kainz, Justin Sulik, Sonja Utz, Torsten Enßlin","doi":"10.1111/cogs.70102","DOIUrl":null,"url":null,"abstract":"<p>A large part of how people learn about their shared world is via social information. However, in complex modern information ecosystems, it can be challenging to identify deception or filter out misinformation. This challenge is exacerbated by the existence of a dual-learning problem whereby: (1) people draw inferences about the world, given new social information; and simultaneously (2), they draw inferences about how credible various sources of information are, given social cues and previous knowledge. In this context, we investigate how social influence and individual cognitive processing interact to explain how one might lose the ability to reliably assess information. Crucially, we show how this happens even when individuals engage in rational belief updating and have access to objective cues of deception.</p><p>Using an agent-based model, the Reputation Game Simulation, we show that mere misinformation is not the problem: The dual-learning problem can be solved successfully with limited Bayesian reasoning, even in the presence of deceit. However, when certain agents consistently engage in fully deceptive behavior, intentionally distorting information to serve nonepistemic goals, this can lead nearby agents to unlearn or discount objective cues of credibility. This is an emergent delusion-like state, wherein false beliefs resist correction by true incoming information. Further, we show how such delusion-like states can be rehabilitated when agents who had previously lost the ability to discern cues of credibility are put into new, healthy—though not necessarily honest—environments.</p><p>Altogether, this suggests that correcting misinformation is not the optimal solution to epistemically toxic environments. 
Though difficult, socially induced cognitive biases can be repaired in healthy environments, ones where cues of credibility can be relearned in the absence of nonepistemic communication motives.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 8","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70102","citationCount":"0","resultStr":"{\"title\":\"Learned Insignificance of Credibility Signs\",\"authors\":\"Viktoria Kainz, Justin Sulik, Sonja Utz, Torsten Enßlin\",\"doi\":\"10.1111/cogs.70102\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>A large part of how people learn about their shared world is via social information. However, in complex modern information ecosystems, it can be challenging to identify deception or filter out misinformation. This challenge is exacerbated by the existence of a dual-learning problem whereby: (1) people draw inferences about the world, given new social information; and simultaneously (2), they draw inferences about how credible various sources of information are, given social cues and previous knowledge. In this context, we investigate how social influence and individual cognitive processing interact to explain how one might lose the ability to reliably assess information. Crucially, we show how this happens even when individuals engage in rational belief updating and have access to objective cues of deception.</p><p>Using an agent-based model, the Reputation Game Simulation, we show that mere misinformation is not the problem: The dual-learning problem can be solved successfully with limited Bayesian reasoning, even in the presence of deceit. 
However, when certain agents consistently engage in fully deceptive behavior, intentionally distorting information to serve nonepistemic goals, this can lead nearby agents to unlearn or discount objective cues of credibility. This is an emergent delusion-like state, wherein false beliefs resist correction by true incoming information. Further, we show how such delusion-like states can be rehabilitated when agents who had previously lost the ability to discern cues of credibility are put into new, healthy—though not necessarily honest—environments.</p><p>Altogether, this suggests that correcting misinformation is not the optimal solution to epistemically toxic environments. Though difficult, socially induced cognitive biases can be repaired in healthy environments, ones where cues of credibility can be relearned in the absence of nonepistemic communication motives.</p>\",\"PeriodicalId\":48349,\"journal\":{\"name\":\"Cognitive Science\",\"volume\":\"49 8\",\"pages\":\"\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2025-08-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70102\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Science\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/cogs.70102\",\"RegionNum\":2,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive 
Science","FirstCategoryId":"102","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/cogs.70102","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
A large part of how people learn about their shared world is via social information. However, in complex modern information ecosystems, it can be challenging to identify deception or filter out misinformation. This challenge is exacerbated by the existence of a dual-learning problem whereby (1) people draw inferences about the world, given new social information; and simultaneously (2) they draw inferences about how credible various sources of information are, given social cues and previous knowledge. In this context, we investigate how social influence and individual cognitive processing interact to explain how one might lose the ability to reliably assess information. Crucially, we show how this happens even when individuals engage in rational belief updating and have access to objective cues of deception.
Using an agent-based model, the Reputation Game Simulation, we show that mere misinformation is not the problem: The dual-learning problem can be solved successfully with limited Bayesian reasoning, even in the presence of deceit. However, when certain agents consistently engage in fully deceptive behavior, intentionally distorting information to serve nonepistemic goals, this can lead nearby agents to unlearn or discount objective cues of credibility. This is an emergent delusion-like state, wherein false beliefs resist correction by true incoming information. Further, we show how such delusion-like states can be rehabilitated when agents who had previously lost the ability to discern cues of credibility are put into new, healthy—though not necessarily honest—environments.
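The paper's Reputation Game Simulation is richer than this, but the dual-learning problem itself can be sketched as a toy model: an agent simultaneously updates a belief about a binary world state and a Beta belief about a source's honesty. The message model and all names below are illustrative assumptions, not the authors' implementation.

```python
def dual_update(p, a, b, m):
    """One joint Bayesian update (toy model, not the paper's simulation).

    p      -- current belief P(w = 1) about a binary world state w
    (a, b) -- Beta pseudo-counts for the source's honesty rate h;
              an honest report satisfies m == w, a lie flips it
    m      -- the binary message received (0 or 1)
    Returns updated (p, a, b).
    """
    h = a / (a + b)                          # mean credibility estimate
    like_w1 = h if m == 1 else 1.0 - h       # P(m | w = 1)
    like_w0 = 1.0 - h if m == 1 else h       # P(m | w = 0)
    z = p * like_w1 + (1.0 - p) * like_w0    # evidence P(m)
    p_new = p * like_w1 / z                  # posterior P(w = 1 | m)
    # Posterior probability that this report was honest, used as a
    # fractional pseudo-count for the credibility belief.
    p_honest = (p * h if m == 1 else (1.0 - p) * h) / z
    return p_new, a + p_honest, b + (1.0 - p_honest)

# In a "healthy" environment the source always reports the true state w = 1,
# and both beliefs converge: the agent learns the fact and the credibility.
# A fully symmetric prior (p = 0.5, a = b) is a fixed point of this toy
# model, so we start from a mildly trusting prior instead.
p, a, b = 0.5, 2.0, 1.0
for _ in range(30):
    p, a, b = dual_update(p, a, b, m=1)
```

The coupling is what makes the problem hard: the same message has to be explained either by the world being as reported or by the source lying, so a miscalibrated credibility belief distorts world beliefs and vice versa.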
Altogether, this suggests that correcting misinformation is not the optimal solution to epistemically toxic environments. Though difficult, socially induced cognitive biases can be repaired in healthy environments, ones where cues of credibility can be relearned in the absence of nonepistemic communication motives.
Journal introduction:
Cognitive Science publishes articles in all areas of cognitive science, covering such topics as knowledge representation, inference, memory processes, learning, problem solving, planning, perception, natural language understanding, connectionism, brain theory, motor control, intentional systems, and other areas of interdisciplinary concern. Highest priority is given to research reports that are specifically written for a multidisciplinary audience. The audience is primarily researchers in cognitive science and its associated fields, including anthropologists, education researchers, psychologists, philosophers, linguists, computer scientists, neuroscientists, and roboticists.