{"title":"Acquisition and Utilization of Recursive Rules in Motor Sequence Generation","authors":"Maurício D. Martins, Zoe Bergmann, Elena Leonova, Roberta Bianco, Daniela Sammler, Arno Villringer","doi":"10.1111/cogs.70108","DOIUrl":"https://doi.org/10.1111/cogs.70108","url":null,"abstract":"<p>Recursive hierarchical embedding allows humans to generate multiple hierarchical levels using simple rules. We can acquire recursion from exposure to linguistic and visual examples, but only develop the ability to understand “multiple-level” structures like “[[second] red] ball” after mastering “same-level” conjunctions like “[second] <i>and</i> [red] ball.” Whether we can also learn recursion in motor production remains unexplored. Here, we tested 40 adults’ ability to learn and generate sequences of finger movements using “multiple-level” recursion and “same-level” iteration rules (like linguistic conjunction). Rule order was counterbalanced. First, participants learned the generative rules (without explicit rule instructions or feedback) by executing examples of motor sequences based on visual cues displayed on the screen (learning). Second, they were asked to discriminate between correct and incorrect motor sequences beyond those to which they had previously been exposed (discrimination). Finally, they were asked to use the rules to generate new hierarchical levels consistent with those previously given (generation). We repeated the procedure (all three phases) on 2 days, allowing for a night of sleep. We found that most participants could discriminate correct from incorrect sequences based on recursive rules and use recursive rules to generate new hierarchical levels in motor sequences, but mostly on the second day of testing, and when they had acquired iterative before recursive rules. This aligns with previous literature on vision and language and with literature showing that sleep is necessary to generate abstract knowledge of motor sequences. 
Lastly, we found that the ability to discriminate well-formed motor sequences using recursion was insufficient for motor generativity.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 9","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70108","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144929461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Diachronic Investigation of the Change in Form and Formational-Semantic Systematicity of the Chinese Sign Language Lexicon","authors":"Yue Zou, Hao Lin","doi":"10.1111/cogs.70111","DOIUrl":"https://doi.org/10.1111/cogs.70111","url":null,"abstract":"<p>It has been argued in previous research that several competing pressures guide the directions of language evolution (economy vs. redundancy; arbitrariness vs. systematicity). For sign languages, however, the effects of competing pressures on their change of lexical systems remain largely unclear. In the present study, we focus on the diachronic change in form and formational-semantic systematicity of the Chinese Sign Language (CSL) lexicon. Drawing on two CSL lexicons (one from the 1960 dictionary and the other from the 2019 dictionary), we found that in the dimension of form, the CSL lexical system shows a trend toward monosyllabicity and symmetry. In terms of formational-semantic systematicity, we found that there is a significant correlation between form and meaning in both lexicons, but the effect of the arbitrariness constraint gets stronger over time. Our findings regarding the change in form indicate that the competing pressures between economy and redundancy have different effects on different parameters when shaping the lexical system of CSL. 
As for the correlation between form and meaning, our study provides insight into how a balance between arbitrariness and systematicity is reached.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 9","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144927519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Development of Early Phonological Networks: An Analysis of Individual Longitudinal Vocabulary Growth","authors":"Judith Kalinowski, Laura Hansel, Michaela Vystrčilová, Alexander Ecker, Nivedita Mani","doi":"10.1111/cogs.70109","DOIUrl":"https://doi.org/10.1111/cogs.70109","url":null,"abstract":"<p>While much work has emphasized the role of the environment in language learning, research equally reports consistent effects of the child's knowledge, in particular, the words known to individual children, in steering further lexical development. Much of this work is based on cross-sectional data, assuming that the words typically known to children at <i>n</i> months predict the words typically known to children at <i>n</i>+x months. Given acknowledged variability in the number of words known to individual children at different ages, a more conclusive analysis of this issue requires examination of individual differences in the words learned by individual children across development, that is, using longitudinal data. In the current study, using longitudinal vocabulary data from children learning Norwegian, we ask whether the phonological connectivity of a word to words that the child already knows or words in the child's environment predicts the likelihood of the child learning that word across development. The results suggest that the early vocabulary grows predominantly in a rich-get-richer manner, where word learning is predicted by the connectivity of a word to already known words. However, word learning is, to a lesser extent, also influenced by the connectivity of a word to words in the child's linguistic environment. 
Our results highlight the promise of using longitudinal data to better understand the factors that influence vocabulary development.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 9","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70109","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144927517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can Large Language Models Simulate Spoken Human Conversations?","authors":"Eric Mayor, Lucas M. Bietti, Adrian Bangerter","doi":"10.1111/cogs.70106","DOIUrl":"https://doi.org/10.1111/cogs.70106","url":null,"abstract":"<p>Large language models (LLMs) can emulate many aspects of human cognition and have been heralded as a potential paradigm shift. They are proficient in chat-based conversation, but little is known about their ability to simulate spoken conversation. We investigated whether LLMs can simulate spoken human conversation. In Study 1, we compared transcripts of human telephone conversations from the Switchboard (SB) corpus to six corpora of transcripts generated by two powerful LLMs, GPT-4 and Claude Sonnet 3.5, and two open-source LLMs, Vicuna and Wayfarer, using different prompts designed to mimic SB participants’ instructions. We compared LLM and SB conversations in terms of alignment (conceptual, syntactic, and lexical), coordination markers, and coordination of openings and closings. We also documented qualitative features by which LLM conversations differ from SB conversations. In Study 2, we assessed whether humans can distinguish transcripts produced by LLMs from those of SB conversations. LLM conversations exhibited exaggerated alignment (and an increase in alignment as conversation unfolded) relative to human conversations, different and often inappropriate use of coordination markers, and were dissimilar to human conversations in openings and closings. LLM conversations did not consistently pass for SB conversations. Spoken conversations generated by LLMs are both qualitatively and quantitatively different from those of humans. 
This issue may evolve with better LLMs and more training on spoken conversation, but may also result from key differences between spoken conversation and chat.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 9","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70106","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144927520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Causal Perception(s)","authors":"Jonathan F. Kominsky, Katharina Wenig","doi":"10.1111/cogs.70107","DOIUrl":"https://doi.org/10.1111/cogs.70107","url":null,"abstract":"<p>In addition to detecting “low-level” features like shape, color, and movement, the human visual system perceives certain “higher-level” properties of the environment, like cause-and-effect interactions. The strongest evidence that we have true causal perception, and not just inference, comes from the phenomenon of retinotopically specific visual adaptation to launching, which shows that launching events have specialized processing at a point in the visual system that still uses the surface of the retina as its frame of reference. Using this paradigm, we show that the visual system adapts to two distinct causal features found in different types of interaction: a broad “launching-like” causality that is found in many billiard-ball-like collision events, including “tool-effect” displays, “bursting,” and even “state change” events; and an “entraining” causality in events where one object contacts and then moves together with another. Notably, adaptation to entraining is not based on continuous motion alone, as the movement of a single object does not generate the adaptation effect. 
These results not only demonstrate the existence of multiple causal perceptions, but also begin to characterize the precise features that define these different causal event categories in perceptual processing.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 9","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70107","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144918855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Syntactic Complexity Phenomena Are Better Explained Without Empty Elements Mediating Long-Distance Dependencies","authors":"Yanis da Cunha, Edward Gibson","doi":"10.1111/cogs.70088","DOIUrl":"https://doi.org/10.1111/cogs.70088","url":null,"abstract":"<p>We report the results of two acceptability judgment experiments on English materials, designed to help disentangle the predictions of syntactic theories with transformations from those of nontransformational theories. The materials in these experiments were motivated by examples from Pickering & Barry (1991), who provided intuitive evidence that there is little processing cost for connecting a fronted prepositional phrase to its verb, even if it is the second postverbal argument of a verb in the declarative form. For example, the PP <i>on which</i> connects to the verb <i>put</i> in the sentence <i>This is the saucer on which Mary put the cup into which she poured the milk</i>. If there is a transformation of phrases from declarative structures to interrogative structures (as proposed in Chomsky (1957) and all versions of related theories since), then there is a long-distance connection between the fronted PP and its base position following the NP object, for example, <i>the cup into which she poured the milk</i>, which is not complete until the end of the sentence. In contrast, in a theory without transformations, the PP can be directly associated with its role-assigning verb <i>put</i> when this verb is encountered. If there is a processing cost for making dependency connections that is proportional to their distance, then transformational theories predict a large processing cost for this kind of structure, relative to controls. In contrast, nontransformational theories predict no large cost. The results of the two rating experiments consistently supported the predictions of the nontransformational theories over those of the transformational theories. 
We argue that, in line with other current evidence, the nontransformational theories better account for the available empirical data.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 8","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70088","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144861798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrastive Verbal Guidance: A Beneficial Context for Attention To Events and Their Memory?","authors":"Amit Singh, Katharina J. Rohlfing","doi":"10.1111/cogs.70096","DOIUrl":"https://doi.org/10.1111/cogs.70096","url":null,"abstract":"<p>Research suggests that presenting an action via multimodal stimulation (verbal and visual) enhances its perception. To highlight this, in most studies, assertive instructions are generally presented before the occurrence of the visual subevent(s). However, verbal instructions need not always be assertive; they can also include negation to contrast the present event with a prior one, thereby facilitating processing—a phenomenon known as contextual facilitation. In our study, we investigated whether using negation to guide an action sequence facilitates action perception, particularly when two consecutive subactions contrast with each other. Stimuli from previous studies on action demonstration were used to create (non)contrastive actions, that is, a ball following noncontrastive and identical (Over–Over or Under–Under) versus contrastive and opposite paths (Over–Under or Under–Over) before terminating at a goal location. In Experiment 1, either an assertive or a negative instruction was provided as verbal guidance before the onset of each path. Analyzing data from 35 participants, we found that, whereas assertive instructions facilitate overall action recall, negating the later path for contrastive actions is equally facilitative. Given that the action goal is the most salient aspect of event memory due to the goal-path bias in attention, a second experiment was conducted to test the effect of multimodal synchrony on goal attention and action memory. Experiment 2 revealed that when instructions overlap with actions, they become more tailored—assertive instructions effectively guide noncontrastive actions, while assertive–negative instructions particularly guide contrastive actions. 
Both studies suggest that increased attention to the goal leads to coarser perception of midevents, with action-instruction synchrony modulating goal bias in real-time event apprehension to serve distinct purposes for action conceptualization. Whereas presenting instructions before subactions attenuates goal attention, overlapping instructions increase goal attention and reveal the selective roles of assertive and negative instructions in guiding contrastive and noncontrastive actions.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 8","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70096","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144843555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learned Insignificance of Credibility Signs","authors":"Viktoria Kainz, Justin Sulik, Sonja Utz, Torsten Enßlin","doi":"10.1111/cogs.70102","DOIUrl":"https://doi.org/10.1111/cogs.70102","url":null,"abstract":"<p>A large part of how people learn about their shared world is via social information. However, in complex modern information ecosystems, it can be challenging to identify deception or filter out misinformation. This challenge is exacerbated by the existence of a dual-learning problem whereby: (1) people draw inferences about the world, given new social information; and simultaneously (2), they draw inferences about how credible various sources of information are, given social cues and previous knowledge. In this context, we investigate how social influence and individual cognitive processing interact to explain how one might lose the ability to reliably assess information. Crucially, we show how this happens even when individuals engage in rational belief updating and have access to objective cues of deception.</p><p>Using an agent-based model, the Reputation Game Simulation, we show that mere misinformation is not the problem: The dual-learning problem can be solved successfully with limited Bayesian reasoning, even in the presence of deceit. However, when certain agents consistently engage in fully deceptive behavior, intentionally distorting information to serve nonepistemic goals, this can lead nearby agents to unlearn or discount objective cues of credibility. This is an emergent delusion-like state, wherein false beliefs resist correction by true incoming information. Further, we show how such delusion-like states can be rehabilitated when agents who had previously lost the ability to discern cues of credibility are put into new, healthy—though not necessarily honest—environments.</p><p>Altogether, this suggests that correcting misinformation is not the optimal solution to epistemically toxic environments. 
Though difficult, socially induced cognitive biases can be repaired in healthy environments, ones where cues of credibility can be relearned in the absence of nonepistemic communication motives.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 8","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70102","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144832625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teaching Recombinable Motifs Through Simple Examples","authors":"Huang Ham, Bonan Zhao, Thomas L. Griffiths, Natalia Vélez","doi":"10.1111/cogs.70103","DOIUrl":"https://doi.org/10.1111/cogs.70103","url":null,"abstract":"<p>A hallmark of effective teaching is that it grants learners not just a collection of facts about the world, but also a toolkit of abstractions that can be applied to solve new problems. How do humans teach abstractions from examples? Here, we applied Bayesian models of pedagogy to a necklace-building task where teachers create necklaces to teach a learner “motifs” that can be flexibly recombined to create new necklaces. In Experiment 1 (<i>N</i> = 151), we find that human teachers produce necklaces that are simpler (i.e., have lower algorithmic complexity) than would be expected by chance, as indexed by a model that samples uniformly from all necklaces that contain the target motifs. This tendency to select simpler examples is captured by a pedagogical sampling model that tries to maximize the learner's belief in the true motifs by prioritizing examples that have fewer alternative interpretations. In Experiment 2 (<i>N</i> = 295), we find that simplicity is beneficial. Human learners recover the underlying motifs better when teachers produce simpler sequences, as predicted by the pedagogical sampling model. However, humans learn best from human teachers rather than from model-generated examples, which suggests that human teachers have additional expectations about how learners will interpret examples that are not captured by standard models of teaching. 
Our work provides a principled framework to understand when and why teachers use simple examples to convey abstract knowledge.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 8","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144833167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Partial Word Meanings From Referentially Ambiguous Naming Events","authors":"Nina Schoener, Sara C. Johnson, Sumarga H. Suanda","doi":"10.1111/cogs.70104","DOIUrl":"https://doi.org/10.1111/cogs.70104","url":null,"abstract":"<p>Both classic thought experiments and recent empirical evidence suggest that children frequently encounter new words whose meanings are underdetermined by the extralinguistic contexts in which they occur. The role that these referentially ambiguous events play in children's word learning is central to ongoing debates in the field. Do children learn words from referentially ambiguous events via an incremental learning process? Or, do children learn words primarily from the rare referentially transparent events they experience? Across two experiments with adults as model word learners, the current work asks whether the answer to these questions depends in part on how word learning is assessed. Participants were asked to learn the meanings of novel words solely from their referentially ambiguous contexts. When learning was assessed by asking participants to identify the exact meanings of those novel words, participants struggled mightily. However, when learning was assessed by asking the same participants to identify which of two new contexts the novel word most likely occurred in, even those who failed the exact meaning assessment succeeded. These data suggest that although referentially ambiguous events may fall short in allowing learners to identify a word's exact meaning, they nevertheless lead learners into the right regions of semantic space. 
These findings are a reminder of the pervasiveness of partial word learning effects in vocabulary acquisition and highlight that the resolution to the debate over the role of referentially ambiguous events in learning may depend on how learning is defined.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 8","pages":""},"PeriodicalIF":2.4,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144832626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}