Cognition. Pub Date: 2026-06-01. Epub Date: 2026-01-09. DOI: 10.1016/j.cognition.2026.106439
Shachar Hochman, Mattan S. Ben-Shachar, Roi Cohen Kadosh, Avishai Henik
"A novel task for measuring numerical bias among adults"
Abstract: Numerical bias is the spontaneous tendency to base decisions on numerical rather than equally available non-numerical information. We introduce the Congruent Learning–Incongruent Probe (CLIP) task, a computerised paradigm for indexing numerical bias in adults. The task presents digit pairs that vary in numerical value and physical size, organised into blocks. In feedback-based learning trials, digits are congruent (larger number in larger font) and participants learn which stimulus is "correct" for that block. In subsequent no-feedback probe trials (test trials), the same pairs are presented incongruently, revealing whether choices are spontaneously driven by numerical or physical dimensions. A sample of 129 adults completed a multi-day battery to validate the CLIP task. Drift–diffusion modelling indicated substantial individual differences in numerical bias. Higher numerical bias correlated positively with maths fluency and quantitative reasoning, paralleling child findings on spontaneous focus on numerosity (SFON) and maths competence. To establish convergent validity, we also administered a numerical Stroop task that requires suppressing numerical information; individuals with stronger numerical bias showed larger interference and facilitation effects. These findings validate the CLIP task as a reliable measure of numerical bias and, more broadly, highlight how variability in spontaneous numerical processing shapes cognitive-control demands, illuminating the interplay between domain-specific biases and executive function.
(Cognition, vol. 271, Article 106439)
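The drift–diffusion analysis mentioned above can be illustrated with a toy simulation. This is a minimal generic sketch of a two-boundary diffusion process, not the authors' fitted model; all parameter values are invented, and "upper boundary" is arbitrarily read as a number-based choice.

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.005, max_t=5.0,
                 n_trials=400, seed=0):
    """Simulate a basic two-boundary drift-diffusion process.

    Returns the proportion of trials absorbed at the upper boundary
    (read here as a number-based choice) and the mean decision time.
    """
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        # Euler-Maruyama integration of dx = drift*dt + noise*dW
        while abs(x) < threshold and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = 1.0 if x >= threshold else 0.0
        rts[i] = t
    return choices.mean(), rts.mean()

# A positive drift toward the numerical dimension yields mostly
# number-based choices; zero drift yields roughly chance responding.
p_biased, _ = simulate_ddm(drift=1.5)
p_neutral, _ = simulate_ddm(drift=0.0)
```

In a drift-diffusion framework, an individual's numerical bias would show up as a systematic drift toward the number-defined response even when the physical dimension is equally informative.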
Cognition. Pub Date: 2026-06-01. Epub Date: 2026-02-12. DOI: 10.1016/j.cognition.2026.106474
Irene Canudas-Grabolosa, Madeline Quam, Marie Coppola, Jesse Snedeker, Annemarie Kocab
"Is a linguistic model needed to build abstract event representations?"
Abstract: A central question in cognitive development is whether language simply expresses pre-existing event concepts or plays a critical role in their construction and use. Recent findings from studies with infants, preschoolers, and adults have raised the possibility that generic two-place relations (e.g., "cats push rabbits") can only be represented when people have access to the transitive sentences that express them. This suggests that these concepts could be constructed as we acquire a pre-existing, external language that expresses them. To explore this hypothesis, we tested whether adult homesigners (individuals without exposure to a pre-existing language) could construct such concepts in a nonverbal imitation task. Participants viewed three instances of a given generic event (with either one or two participants), were then given new exemplars of the same kinds (e.g., a new rabbit and cat), and were prompted to act. Their performance was compared to that of English-speaking five-year-olds. Both groups performed well in the critical two-participant condition, consistently mapping figurines of the right kind to each role. There were no group or event-type differences. Thus, homesigners have the representational resources needed to support role binding. These findings demonstrate that abstract representations of generic two-place relations can emerge without exposure to a language that models these constructions or a set of shared linguistic conventions.
(Cognition, vol. 271, Article 106474)
Cognition. Pub Date: 2026-06-01. Epub Date: 2026-02-04. DOI: 10.1016/j.cognition.2026.106455
Aditya Prakash, Andrew Hollingworth
"Dissociations and interactions between attention guidance from negative templates maintained in visual working memory and long-term memory"
Abstract: Visual attention can be guided away from objects known to be irrelevant to the current task. These negative templates (specifying distractor features) can be maintained in visual working memory (VWM) and in long-term memory (LTM). LTM-based negative templates allow for direct suppression of to-be-avoided feature values, observable in the earliest selective operations during search (i.e., implemented proactively). However, there is mixed evidence regarding whether VWM-based negative templates are likewise implemented directly and proactively. Here, we contrasted LTM- and VWM-based negative guidance within the same visual search experiment. There were two broad lines of findings. First, the two sources of guidance dissociated on several measures of oculomotor orienting during visual search, including: a) the polarity of initial guidance, b) the latency of initial orienting, and c) the pattern of guidance across an extended search trial. We conclude that the two forms of guidance are implemented by fundamentally different mechanisms. Second, we created conditions in which the two forms of guidance were potentially operational within the same trial, testing their interaction. Both were expressed within a trial when they specified different sets of objects. However, VWM-based biases dominated when the two biases were placed in competition, indicating that online attentional sets tend to overshadow learned biases in the computation of priority.
(Cognition, vol. 271, Article 106455)
Cognition. Pub Date: 2026-06-01. DOI: 10.1016/j.cognition.2026.106469
Yuting Zhang, Wenyan Bi, Yuyang Miao, Ilker Yildirim
"Computational models reveal intuitive physics and statistical cues separately contribute to the visual perception of liquids"
Abstract: We are intimately familiar with liquids in our visual experience, yet the computational basis of liquid perception remains underexplored. This is an important knowledge gap because liquids, with their mutable shapes and complex intrinsic dynamics, differ remarkably from the commonly studied categories in computational vision, such as rigid objects or non-rigid solids. To understand the computational basis of liquid perception, we implemented different models of this ability and tested them in a new behavioral study. The models realize two distinct theoretical possibilities for the visual perception of liquid viscosity. The first possibility, and the focus of most existing work, explains the representation of liquid viscosity as a consequence of high-level image and motion statistics discriminative of the gradations of this physical property. A second, quite different possibility is that the perceptual representations of liquids functionally map the physical processes of how viscosity and external forces (e.g., gravity, rigid surfaces) shape the way liquids move. We tested these models and humans in a new behavioral task: making similarity judgments of liquid viscosity across pairs of animations depicting qualitatively different scenarios (e.g., a metal ball falling into a liquid container vs. liquid pouring over a non-flat surface). We find that a new model, Ripple, which builds and manipulates physics-based representations of liquid viscosity from sensory inputs, explains substantial variance in human judgments beyond powerful, previously behaviorally validated statistical representations of viscosity. Moreover, statistical representations of viscosity across vastly different model architectures (a task-specific DNN and a general video foundation model) converge with one another, while remaining equally differentiated from Ripple. These results suggest that liquid perception extends beyond image statistics to also involve simulation-based intuitive physics.
(Cognition, vol. 271, Article 106469)
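The claim that one model "explains substantial variance beyond" another is, in general form, a hierarchical-regression comparison: how much R-squared a second predictor adds over a baseline. The sketch below illustrates that idea on synthetic data; it is not the authors' analysis pipeline, and the variable names and effect sizes are invented.

```python
import numpy as np

def r_squared(y, predictors):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Synthetic stand-ins: simulated human viscosity-similarity judgments
# driven by a statistical predictor plus an extra physics-based component.
rng = np.random.default_rng(0)
stats_pred = rng.normal(size=300)      # e.g., image-statistics model output
physics_pred = rng.normal(size=300)    # e.g., simulation-based model output
human = stats_pred + 0.6 * physics_pred + rng.normal(scale=0.5, size=300)

# Variance explained beyond the statistical baseline:
gain = r_squared(human, [stats_pred, physics_pred]) - r_squared(human, [stats_pred])
```

A positive `gain` is the generic signature of the result reported here: the physics-based predictor accounts for structure in the judgments that the statistical predictor misses.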
Cognition. Pub Date: 2026-06-01. Epub Date: 2026-02-09. DOI: 10.1016/j.cognition.2026.106481
Weizhen Xie, Yoojeong Choo, Weiwei Zhang
"Mental cost of simple(st) physical exertion"
Abstract: Simple physical actions, such as hand gripping, impose measurable mental costs, impairing attention, memory, and decision-making. However, the mechanisms underlying this action-cognition trade-off remain elusive. A resource-sharing account posits that action and cognition draw resources from a common pool; thus, engaging the muscular system may reduce one's ability to actively retain information in mind, resulting in a working memory (WM) retention cost. In contrast, a control-cost account suggests that physical exertion primarily increases demands on control-related processes such as distractor inhibition without reducing overall WM retention. We tested these different accounts across two experiments, both of which consistently showed that concurrent physical load impaired visual WM performance at the behavioral level, especially in the presence of task-irrelevant distractors. In Experiment 1, EEG recordings revealed that stronger concurrent handgrip force did not reduce the contralateral delay activity (CDA), a neural marker of WM retention. Instead, higher physical load increased CDA amplitude when more distractors were present, consistent with increased retention of task-irrelevant information during concurrent physical exertion. In Experiment 2, fMRI revealed that this interaction was preferentially expressed within a frontoparietal network, encompassing the bilateral inferior frontal and posterior parietal cortices, rather than sensory and motor cortices associated with visual input and physical action. Together, these findings indicate that the cognitive cost of physical exertion arises not from an overall reduction in WM retention, but from increased demands on control-related processes that regulate which information gains access to memory, leading to greater inclusion of task-irrelevant content under elevated physical load.
(Cognition, vol. 271, Article 106481)
Cognition. Pub Date: 2026-06-01. Epub Date: 2026-01-05. DOI: 10.1016/j.cognition.2025.106418
Richard Vagnino, Caren Walker
"Schema drift: Relational concepts and conceptual change"
Abstract: Analogical reasoning is one of the most common ways individuals bring previous experience to bear on unfamiliar situations. Most theories describe this process as a structured comparison that involves mapping the relational properties between a familiar source and an unfamiliar target. This both allows the transfer of useful inferences from the source to the target and highlights the common structure shared by both analogs, represented by an abstract schema. This schema can help with identifying and reasoning about structurally similar situations in the future. While researchers have studied how representations of source and target analogs undergo alterations as a result of this mapping process, little attention has been paid to how the abstract schemas thought to guide future analogical reasoning might similarly change with use. We explore this question in three experiments and present evidence that suggests abstract schemas do indeed drift under certain conditions.
(Cognition, vol. 271, Article 106418)
Cognition. Pub Date: 2026-06-01. Epub Date: 2026-01-28. DOI: 10.1016/j.cognition.2026.106454
Iris Wiegand, Igor S. Utochkin, Ava Mitra, Chia-Chien Wu, Jeremy M. Wolfe
"A common signal-strength factor limits awareness and precise knowledge of multiple moving objects across the adult lifespan"
Abstract: This study investigated age differences in precise knowledge and in imprecise knowledge (awareness) of multiple moving visual objects, measured by Multiple Identity Tracking (MIT) and Multiple Object Awareness (MOA) capacities, respectively, in a multiple object tracking task. Experiment 1 demonstrated a significant decline of both capacities in older observers (65–80 years) compared to younger observers (18–44 years). Experiment 2 showed that age-related declines in MIT and MOA were linear across the adult lifespan (18–76 years).
Additionally, we used computational models to test whether age effects could be explained by one common signal-strength factor (d′) or by a dual-process model with an additional recollection parameter (R). Our results indicate that a detailed, recollection-based object-location representation (R) plays only a small role in tracking many objects, and this factor does not vary with observers' age. For most observers, a single signal-strength parameter (d′) explained behaviour best, and this parameter declined significantly with observers' age. This suggests that reduced sensitivity likely impairs older adults' ability to discriminate and clearly represent visual objects, resulting in both lower MIT and MOA capacities.
(Cognition, vol. 271, Article 106454)
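The "signal-strength factor" here is the standard signal-detection sensitivity parameter. As a generic textbook illustration (not the authors' tracking-specific model), d′ is the difference between the z-transformed hit and false-alarm rates; the observer profiles below are invented.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical observers: a hit rate of 84% against 16% false alarms
# gives d' of about 2; an age-related decline in sensitivity would
# appear as a shrinking gap between the two rates.
younger = d_prime(0.84, 0.16)
older = d_prime(0.70, 0.30)
```

In a single-parameter account of the kind favoured here, one such sensitivity value per observer would capture both the precise (MIT) and imprecise (MOA) performance limits, with the value itself declining across the lifespan.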
Cognition. Pub Date: 2026-06-01. Epub Date: 2026-01-27. DOI: 10.1016/j.cognition.2026.106452
M. Houbben, G. Vannuscorps
"The computational dynamics of shape orientation perception"
Abstract: How does the brain transform retinal information into representations of oriented objects? The most comprehensive computational explanation to date, the coordinate-system hypothesis of orientation representation, proposes that this transformation relies on the computation of four parameters that jointly define the relationship between a shape and its environment: axis correspondence, polarity correspondence, tilt direction, and tilt magnitude. The goal of this research was to investigate whether these parameters are computed in parallel or serially and, if so, in which order. To do so, we conducted three same/different experiments in which targets and probes could differ by either one of two parameters (A and B) or both (A + B). Under the assumption that response times in such tasks reflect the rate at which evidence for a difference is accumulated, the conjunction condition (A + B) should result in faster response times if the two parameters (A and B) are processed in parallel. In contrast, if the two parameters are processed serially, response times for A + B should be equivalent to those for the first parameter (e.g., A) and faster than those for the second parameter (B). In this framework, the results of the three experiments suggest that axis correspondence is computed first, followed by all the other parameters, computed in parallel.
(Cognition, vol. 271, Article 106452)
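The response-time logic described in this abstract (a redundant difference speeds responses under parallel but not serial processing) can be illustrated with a toy race simulation. The exponential finishing-time distributions and their means below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical detection times for a difference on parameter A or B alone.
t_a = rng.exponential(scale=0.30, size=n)   # A: the faster parameter
t_b = rng.exponential(scale=0.45, size=n)   # B: the slower parameter

rt_a = t_a.mean()
rt_b = t_b.mean()

# Parallel processing: both comparisons run at once, the first detected
# difference triggers the response (a race, i.e. the minimum time).
rt_parallel_ab = np.minimum(t_a, t_b).mean()

# Serial processing (A checked first): the difference is already found
# at the A stage, so the conjunction behaves like A alone.
rt_serial_ab = rt_a
```

The simulation reproduces the two signatures used in the abstract: parallel processing predicts RT(A+B) < RT(A) < RT(B), whereas serial processing predicts RT(A+B) equal to RT(A).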
Cognition. Pub Date: 2026-06-01. Epub Date: 2026-01-09. DOI: 10.1016/j.cognition.2025.106328
Branden J. Bio, Sangeet Khemlani
"Naïve epistemics: A theory of rational and error-prone mental state reasoning"
Abstract: Effective communication depends on reasoning about what others know and believe, and failures in executive functioning can disrupt the way adults reason about mental states. Studies reveal that failures in interpreting premises, simulating possibilities, and formulating conclusions can all yield systematic errors in reasoning, but no account exists of the specific sorts of error people produce when these failures occur in the context of mental state reasoning. We developed such a theory to account for both rational and error-prone mental state reasoning. The theory makes three proposals: first, people build representations of possibilities, and tag those representations, to distinguish knowledge from belief; second, they update, inspect, and consolidate representations of possibilities to engage in mental state reasoning; and third, they can integrate semantic contents into their representations of belief states by constructing, or else blocking the construction of, alternative possibilities. We tested the theory by examining the patterns of conclusions reasoners produced using a novel sentence construction interface or else through free response. These generative tasks permitted analyses of participants' tendency to draw sensible epistemic conclusions as well as their systematic errors, and they corroborate the central tenets of the theory.
(Cognition, vol. 271, Article 106328)
Cognition. Pub Date: 2026-06-01. Epub Date: 2026-01-16. DOI: 10.1016/j.cognition.2026.106445
Margaret Kandel, Nan Li, Jesse Snedeker
"Evidence for top-down constraints and form-based prediction in 4–5 year-olds' lexical processing"
Abstract: Interactive processing is a central feature of human cognition, whereby top-down and bottom-up pathways pass information between different levels of representation. In this study, we investigated how these interactive mechanisms develop by asking whether interactive processing arises early in life or emerges later, with experience or as the brain matures. In a visual world eye-tracking study, we tested whether four- and five-year-old children show evidence of top-down interactivity during language comprehension. We found that young children, like adults, can use top-down cues from the sentence context to constrain processing of the bottom-up language input during spoken word recognition, allowing them to avoid activating word candidates that initially match the input but are semantically incongruent with the context. Furthermore, we found that the children used top-down cues to pre-activate the phonological representations of predictable words before they appeared in the input. These findings illustrate that the pathways necessary for interactive processing are robust and active by early childhood, suggesting that the mechanisms of interactive processing are intrinsic and fundamental properties of the mind's architecture.
(Cognition, vol. 271, Article 106445)