{"title":"Temporal Dynamics With and Without a Nervous System: Plant Physiology, Communication, and Movement","authors":"Margherita Bianchi, Silvia Guerra, Bianca Bonato, Sara Avesani, Laura Ravazzolo, Valentina Simonetti, Marco Dadda, Umberto Castiello","doi":"10.1111/cogs.70079","DOIUrl":"https://doi.org/10.1111/cogs.70079","url":null,"abstract":"<p>The concept of time has long been the subject of complex philosophical reflections and scientific research, which have interpreted it differently based on the starting question, context, and level of analysis of the system under investigation. In the present review, we first explore how time has been studied among different scientific fields such as physics, neuroscience, and bioecological sciences. We emphasize the fundamental role of an organism's ability to perceive the passage of time and dynamically adapt to its environment for survival. Growth, reproduction, and communication processes are subject to spatiotemporal variability, and the sense of time allows organisms to structure their interactions, track past, and anticipate future events. Specifically, building on a relational and multilevel approach, this paper proposes an analysis of various aspects of the temporal dimension of plants—ranging from their growth and adaptation rates to behavioral strategies and modes of communication—culminating in a focused examination of research based on the kinematical analysis of plant movement. By adopting a comparative and critical approach, we raise several questions about the temporality of processes from different perspectives. 
Further insights into the timing of physiological and communication processes in plants will help to recognize the central role of temporality in life and to discover mechanisms, processes, and behavioral strategies that may be common (or similar) across species or unique (species-specific) for some organisms, both with and without nervous systems.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70079","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144339323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Dogs’ Real-World Visual Environment and Attention","authors":"Madeline H. Pelgrim, Shreyas Sundara Raman, Thomas Serre, Daphna Buchsbaum","doi":"10.1111/cogs.70080","DOIUrl":"https://doi.org/10.1111/cogs.70080","url":null,"abstract":"<p>Dogs have a unique evolutionary relationship with humans, yet little is known about the visual information available to them or how they direct their visual attention within their environment. The present study, inspired by comparable work in infants, classified the items available to be gazed at by dogs during a common daily event, a walk. We then explored the statistics over the availability of those categories and over the dogs’ visual attention. Wearing a head-mounted eye-tracking apparatus custom-designed for dogs, 11 dogs walked on a predetermined route outdoors under naturalistic conditions, generating a total of 11,698 gazes for analysis. Image stills from these fixations were analyzed using computer vision techniques to explore the items present, the space within the visual field those items occupied, and which of the items the dog was gazing at. On average, dogs looked proportionally most at buses, plants, people, the pavement, and construction equipment; however, there were significant individual differences. 
The results of this project provide a foundational step toward understanding how dogs look at and interact with their physical world, opening up avenues for future research into how they learn and make decisions, both independently and with a human social partner.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144339322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
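The gaze measure reported above (the proportion of fixations landing on each item category) can be sketched in a few lines. This is an illustrative sketch only; the category labels and counts below are hypothetical, not the study's data.

```python
from collections import Counter

def gaze_proportions(fixation_labels):
    """Share of fixations directed at each item category."""
    counts = Counter(fixation_labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical fixation labels for one dog's walk (not the study's data).
labels = ["bus", "plant", "person", "pavement", "bus", "plant", "bus", "pavement"]
props = gaze_proportions(labels)  # e.g., props["bus"] == 0.375
```

Per-dog proportions computed this way can then be compared across animals to quantify the individual differences the abstract reports.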
{"title":"Cross-Situational Statistics Present in an Early Language Learning Context: Evidence From Naturalistic Parent–Child Interactions","authors":"Ellis S. Cain, Rachel A. Ryskin, Chen Yu","doi":"10.1111/cogs.70078","DOIUrl":"https://doi.org/10.1111/cogs.70078","url":null,"abstract":"<p>According to the cross-situational learning account, infants aggregate statistical information from multiple parent naming events to resolve ambiguous word-referent mappings within individual naming events. While previous experimental studies have shown that infant and adult learners can build correct mappings based on statistical regularities encoded in multiple learning situations in an experiment, other studies that use more naturalistic stimuli (e.g., real-world video) reveal poor performance in adults' ability to infer the correct referent. Those results from more naturalistic stimuli suggest that cross-situational learning alone may not solve the mapping problem in the real world, because real-world cross-situational statistics are much more ambiguous than those created in experimental studies. To examine the feasibility of cross-situational learning in everyday contexts, the present study aims to quantify visual-audio statistics from one everyday activity—parent–child toy play. We analyze parent naming events in a video corpus of infant-perspective scenes during parent–child toy play in a naturalistic lab setting, where we found three distinct properties that characterize statistical regularities perceived by young learners: (1) there are a limited number of visual scene compositions perceived by young learners at the moments when they hear object names; (2) the frequencies of parent naming events are distributed in a skewed, Zipfian fashion; and (3) cross-situational statistics in naturalistic toy play are comparable to those used in laboratory experiments. 
Our results underscore the importance of quantifying the statistical regularities in the input from the learner's perspective in order to shed light on the mechanisms supporting early word learning.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
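The aggregation mechanism at the heart of the cross-situational account — accumulating word-referent co-occurrence counts across individually ambiguous naming events — can be sketched as follows. The events and object names are invented for illustration and are not drawn from the study's corpus.

```python
from collections import defaultdict

def cross_situational_mapping(naming_events):
    """Accumulate word-referent co-occurrence counts across naming events,
    then map each word to its most frequently co-occurring referent."""
    counts = defaultdict(lambda: defaultdict(int))
    for word, referents_in_view in naming_events:
        for ref in referents_in_view:
            counts[word][ref] += 1
    return {word: max(refs, key=refs.get) for word, refs in counts.items()}

# Invented naming events: each pairs a heard word with the objects in view
# at that moment. No single event disambiguates the mapping on its own.
events = [
    ("ball", {"ball", "cup"}),
    ("ball", {"ball", "doll"}),
    ("cup",  {"cup", "doll"}),
    ("cup",  {"cup", "ball"}),
]
lexicon = cross_situational_mapping(events)  # {"ball": "ball", "cup": "cup"}
```

Skewed (Zipfian) naming-event frequencies of the kind reported above concentrate evidence on a few frequent words, which is part of what makes such aggregation feasible in naturalistic input.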
{"title":"The Influences of Role, Action Contribution, and Outcome Feedback on Individual and Joint Sense of Agency","authors":"Hongyuan Guo, Lihong Li, Lingyun Wang, Gaojie Yun, Fanli Jia","doi":"10.1111/cogs.70068","DOIUrl":"https://doi.org/10.1111/cogs.70068","url":null,"abstract":"<p>Individuals can experience both “I”-based individual agency and “we”-based joint agency during cooperative action. This study examined how three key factors influence these two forms of agency: role identity (leader, follower), action contribution (high, equal, low), and outcome feedback (success, failure, none). Through three experiments using goal-directed joint tasks and subjective agency ratings, we systematically explored their main and interactive effects. In Experiment 1, without feedback, individual agency increased with greater action contributions and was stronger for leaders than for followers, while joint agency remained stable. Experiment 2 confirmed these effects, showing independent contributions of role and effort to individual agency but minimal effects on joint agency. Experiment 3 introduced outcome feedback, revealing that success amplified individual agency overall, while joint agency was shaped by complex interactions. Specifically, followers with high contributions reported stronger joint agency after success, whereas leaders with high contributions reported stronger joint agency after failure. These findings suggest that while individual agency is closely linked to leadership and effort, joint agency reflects a more dynamic integration of social roles, effort distribution, and outcome evaluation. The study highlights the importance of considering both conceptual (role-based) and sensorimotor (effort-based) cues in understanding agency. 
It also reveals how outcome feedback and attribution processes, such as self-serving bias, modulate perceptions of control and responsibility in cooperative contexts.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144256210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unlearning Incorrect Associations in Word Learning: Evidence From Eye-Tracking","authors":"Tanja C. Roembke, Bob McMurray","doi":"10.1111/cogs.70077","DOIUrl":"https://doi.org/10.1111/cogs.70077","url":null,"abstract":"<p>Computational and animal models suggest that the unlearning or pruning of incorrect meanings matters for word learning. However, it is currently unclear how such pruning occurs during word learning and to what extent it depends on supervised and unsupervised learning. In two experiments (<i>N</i><sub>1</sub> = 40; <i>N</i><sub>2</sub> = 42), adult participants first completed a pretraining, in which each word was paired with two objects across trials: its target and another object (termed secondary target [T2]). Subsequently, participants learned the correct word-object mappings in a supervised training paradigm and were then tested on the word meanings. During training, trials were structured such that some T2s never occurred with the targets, while others did, allowing us to disentangle the contributions of supervised and unsupervised pruning accounts. Eye movements were tracked during training and testing to measure the activation strength of alternative meanings. The experiments were identical except for how often the word was paired with the T2 during pretraining. We found that while weak incorrect associations were pruned quickly (Experiment 1), stronger ones remained present even after ceiling performance (Experiment 2), suggesting that the extent to which incorrect associations are unlearned depends on the strength of the initial mappings. Additionally, pruning was observed even for T2s that did not co-occur with their corresponding word during training, in line with unsupervised pruning. 
Overall, these findings imply that subtle incorrect associations may remain in the lexicon and contribute to other language processes (e.g., word recognition) even after word learning is completed.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70077","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144256209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adults Represent Others’ Logical Inferences Even When It Is Unnecessary","authors":"Dóra Fogd, Ernő Téglás, Ágnes Melinda Kovács","doi":"10.1111/cogs.70076","DOIUrl":"https://doi.org/10.1111/cogs.70076","url":null,"abstract":"<p>Successful social interactions require representing not only what others know, but also what they may deductively infer from evidence. For instance, to help someone decide between two alternatives, we may simply reveal the incorrect option, expecting them to draw the correct conclusion. Seemingly, we readily track others’ logical inferences if it is necessary for our goals. However, it is currently unknown whether we also track them when we do not have to, and whether these inferences affect our own conclusions. To address this, in four online experiments, we presented adults with scenarios where an agent could arrive at the same or different conclusions as the participant, based on what she witnessed (by excluding one or two of three target locations). Participants rated the likelihood of an outcome from self or from the agent's perspective. We hypothesized that participants track others’ inferences even when making self-perspective judgments, that is, when they could respond without paying attention to the other at all. If so, the spontaneous representation of the other's different conclusion should result in higher ratings for the outcome the agent (but not the participant) considers possible, compared to the one both consider impossible. 
In three experiments, we found such an altercentric bias in self-perspective judgments, suggesting that participants spontaneously encoded the conclusions the agent could draw (Experiments 1 and 2), even when this required multistep inferences (Experiment 4). However, there were considerable individual differences, and the bias was absent when task demands were high (Experiment 3), implying a potentially resource-dependent use of the capacity.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70076","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144244872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Relationship Between Community Size and Iconicity in Sign Languages","authors":"Shiri Lev-Ari, Rose Stamp, Connie de Vos, Uiko Yano, Victoria Nyst, Karen Emmorey","doi":"10.1111/cogs.70074","DOIUrl":"https://doi.org/10.1111/cogs.70074","url":null,"abstract":"<p>Communication is harder in larger communities. Past research shows that this leads larger communities to create languages that are easier to learn and use. In particular, previous research suggests that spoken languages that are used by larger communities are more sound symbolic than spoken languages used by smaller communities, presumably because sound symbolism facilitates language acquisition and use. This study tests whether the same principle extends to sign languages, since the role of iconicity in their acquisition and use is debated. Furthermore, sign languages are more iconic than spoken languages and are argued to lose their iconicity over time. Therefore, they might not show the same pattern. The paper also tests whether iconicity depends on semantic domain. Participants from five different countries guessed the meaning and rated the iconicity of signs from 11 different sign languages: five languages with >500,000 signers and six languages with <3000 signers. Half of the signs referred to social concepts (e.g., friend, shame) and half referred to nonsocial concepts (e.g., garlic, morning). Nonsocial signs from large sign languages were rated as more iconic than nonsocial signs from small sign languages, with no difference between the languages for social signs. Results also suggest that rated iconicity and guessing accuracy are more aligned in signs from large sign languages, potentially because smaller sign languages are more likely to rely on culture-specific iconicity that is not as easily guessed outside of context. 
Together, this study shows how community size can influence lexical form and how the effect of such social pressures might depend on semantic domain.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70074","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging Context for Perceptual Prediction Using Word Embeddings","authors":"Georgia-Ann Carter, Frank Keller, Paul Hoffman","doi":"10.1111/cogs.70072","DOIUrl":"https://doi.org/10.1111/cogs.70072","url":null,"abstract":"<p>Word embeddings derived from large language corpora have been successfully used in cognitive science and artificial intelligence to represent linguistic meaning. However, there is continued debate as to how well they encode useful information about the perceptual qualities of concepts. This debate is critical to identifying the scope of embodiment in human semantics. If perceptual object properties can be inferred from word embeddings derived from language alone, this suggests that language provides a useful adjunct to direct perceptual experience for acquiring this kind of conceptual knowledge. Previous research has shown mixed performance when embeddings are used to predict perceptual qualities. Here, we tested if we could improve performance by leveraging the ability of Transformer-based language models to represent word meaning in context. To this end, we conducted two experiments. Our first experiment investigated noun representations. We generated decontextualized (“charcoal”) and contextualized (“the brightness of charcoal”) Word2Vec and BERT embeddings for a large set of concepts and compared their ability to predict human ratings of the concepts’ brightness. We repeated this procedure to also probe for the shape of those concepts. In general, we found very good prediction performance for shape, and a more modest performance for brightness. The addition of context did not improve perceptual prediction performance. In Experiment 2, we investigated representations of adjective–noun phrases. Perceptual prediction performance was generally found to be good, with the nonadditive nature of adjective brightness reflected in the word embeddings. 
We also found that the addition of context had a limited impact on how well perceptual features could be predicted. We frame these results against current work on the interpretability of language models and debates surrounding embodiment in human conceptual processing.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70072","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
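The prediction setup this abstract describes — regressing human perceptual ratings on word embeddings — is commonly implemented as a regularized linear map. A minimal sketch with simulated embeddings and ratings follows; it is not the paper's actual Word2Vec/BERT pipeline, and all data here are synthetic.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha * I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                    # simulated embeddings: 200 words x 50 dims
true_w = rng.normal(size=50)
y = X @ true_w + rng.normal(scale=0.1, size=200)  # simulated "brightness" ratings

w = ridge_fit(X, y)
r = np.corrcoef(X @ w, y)[0, 1]                   # how well embeddings predict ratings
```

In this setup, prediction quality is read off as the correlation between fitted and observed ratings; in the experiments above, an analogous score is computed separately for properties such as brightness and shape.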
{"title":"Why Do Children From Age 4 Fail True Belief Tasks? A Decision Experiment Testing Competence Versus Performance Limitation Accounts","authors":"Lydia Paulin Schidelko, Hannes Rakoczy","doi":"10.1111/cogs.70069","DOIUrl":"https://doi.org/10.1111/cogs.70069","url":null,"abstract":"<p>The standard view on Theory of Mind (ToM) is that the mastery of the false belief (FB) task around age 4 marks the ontogenetic emergence of full-fledged meta-representational ToM. Recently, a puzzling finding has emerged: Once children master the FB task, they begin to fail true belief (TB) control tasks. This finding threatens the validity of FB tasks and the standard view. Here, we test two prominent attempts to explain the puzzling findings against each other. The perceptual access reasoning account (a competence limitation account) assumes that children at age 4 do not yet engage in meta-representation, but use simpler heuristics (“if an agent has perceptual access, she knows and then acts successfully; otherwise, she acts unsuccessfully”). In contrast, the pragmatics approach (a performance limitation account) suggests that children at age 4 do have meta-representational ToM but are confused by pragmatic task factors of the TB task. The current study tested competing predictions of both accounts in a decision experiment. Results from 165 4- to 7-year-olds reveal that failure in the TB task disappeared once the tasks were modified: children mastered both FB and TB tasks when the latter were adapted in terms of heuristic and pragmatic factors. Importantly, this pattern held in conditions in which the pragmatics account predicts success, but the perceptual access account predicts failure. 
Overall, the present findings thus corroborate the standard view (children use meta-representational ToM from age 4, at the latest) and suggest that difficulties with TB tasks merely reflect pragmatic performance factors.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70069","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clockwise Bias in 2D Rotation: A Cognitive Rather Than Sensory Phenomenon","authors":"Rong Jiang, Ruo-Si Li, Ming Meng","doi":"10.1111/cogs.70075","DOIUrl":"https://doi.org/10.1111/cogs.70075","url":null,"abstract":"<p>Previous studies have documented a preference for clockwise (CW) over counterclockwise (CCW) rotation in various visual tasks, but the underlying mechanisms remain unclear. To determine at what stage the CW bias emerges, we tested this preference across multiple visual awareness paradigms using consistent 2D rotation stimuli. In Experiment 1, we found a strong CW bias in apparent motion, with CW dominating perception 1.6 times longer than CCW during long-term presentations and eliciting an average of 74% CW percepts in short-term presentations. By contrast, no CW bias was observed in binocular rivalry, suggesting its absence in low-level perceptual conflict. Experiment 2 employed the breaking continuous flash suppression (b-CFS) paradigm to assess unconscious preferences, revealing no difference in breakthrough times between CW and CCW rotations. Specifically, although apparent motion stimuli showed a higher frequency of CW percepts, breakthrough times for stimuli reported as CW and CCW were similar, indicating that the CW bias occurs after stimuli reach awareness. In Experiment 3, we manipulated the ambiguity of apparent motion stimuli and found significant interactions between the CW bias and input ambiguity, further ruling out a fixed sensory or response bias. 
These findings suggest that the CW bias in 2D rotation may be driven by higher-level cognitive processes, offering insights into how the visual system resolves perceptual ambiguity.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}