A Working Memory Model of Sentence Processing as Binding Morphemes to Syntactic Positions
Maayan Keshev, Mandy Cartner, Aya Meltzer-Asscher, Brian Dillon
Topics in Cognitive Science (published 2024-12-24). doi:10.1111/tops.12780

As they process complex linguistic input, language comprehenders must maintain a mapping between lexical items (e.g., morphemes) and their syntactic positions in the sentence. We propose a model of how these morpheme-position bindings are encoded, maintained, and reaccessed in working memory, based on working memory models such as "serial-order-in-a-box" and its SOB-Complex Span version. Like those models, our model of linguistic working memory derives a range of attested memory interference effects from the process of binding items to positions in working memory. We present simulation results capturing similarity-based interference as well as item distortion effects. Our model provides a unified account of these two major classes of interference effects in sentence processing, attributing both types of effects to an associative memory architecture underpinning linguistic computation.
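The binding mechanism this abstract describes can be illustrated with a toy associative memory. The sketch below is not the authors' model: it simply binds hypothetical item vectors to position vectors in a single Hebbian matrix and shows that retrieval with a positional cue discriminates the target less well when a similar item is bound to another position, the signature of similarity-based interference. All vectors and labels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

def unit(v):
    return v / np.linalg.norm(v)

# Hypothetical position and item vectors (random directions).
pos_subj = unit(rng.standard_normal(dim))
pos_obj = unit(rng.standard_normal(dim))
item_a = unit(rng.standard_normal(dim))
# item_b shares most of item_a's features; item_c is unrelated.
item_b = unit(0.8 * item_a + 0.2 * unit(rng.standard_normal(dim)))
item_c = unit(rng.standard_normal(dim))

def bind(pairs):
    # One associative matrix superimposes all item-position bindings.
    W = np.zeros((dim, dim))
    for pos, item in pairs:
        W += np.outer(item, pos)
    return W

def discriminability(W, pos_cue, target, distractor):
    # How much better the retrieved vector matches the target
    # than it matches the distractor bound elsewhere.
    r = unit(W @ pos_cue)
    return float(r @ target - r @ distractor)

W_sim = bind([(pos_subj, item_a), (pos_obj, item_b)])  # similar distractor
W_dis = bind([(pos_subj, item_a), (pos_obj, item_c)])  # dissimilar distractor

gap_sim = discriminability(W_sim, pos_subj, item_a, item_b)
gap_dis = discriminability(W_dis, pos_subj, item_a, item_c)
print(gap_sim, gap_dis)  # the gap shrinks when the distractor is similar
```

Because every binding is superimposed in the same matrix, retrieval is contaminated by the other bindings in proportion to cue overlap and item similarity, which is why associative architectures of this family produce interference without any extra machinery.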
A Cultural Evolutionary Model for the Law of Abbreviation
Olivier Morin, Alexey Koshevoy
Topics in Cognitive Science (published 2024-12-24). doi:10.1111/tops.12782

Efficiency principles are increasingly called upon to study features of human language and communication. Zipf's law of abbreviation is widely seen as a classic instance of a linguistic pattern brought about by language users' search for efficient communication. The "law", a recurrent correlation between the frequency of words and their brevity, is a near-universal principle of communication: it has been found in all of the hundreds of human languages where it has been tested, as well as in a few nonhuman communication systems. The standard explanation for the law of abbreviation derives from pressures for efficiency: speakers minimize their cumulative effort by using shorter words for frequent occurrences. This explanation, we argue here, fails to explain why long words exist at all. It also fails to explain why the law of abbreviation, despite being robust, is systematically weakened by many short and rare words. We propose an alternative account of the law of abbreviation, based on a simple cultural evolutionary model. Our model does not require any pressure for efficiency. Instead, it derives the law of abbreviation from a general pressure for brevity applying to all words regardless of their frequency. This model makes two accurate predictions that the standard model misses: the correlation between frequency and brevity is consistently weak, and it is characterized by heteroskedasticity, with many short and rare words. We argue on this basis that efficiency considerations are neither necessary nor sufficient to explain the law.
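The alternative mechanism is concrete enough to simulate. The toy model below is an illustrative reconstruction of the general idea, not the authors' implementation: every use of a word carries the same small chance of shortening it, independent of its frequency. Because frequent words are simply used more often, they accumulate more shortening events, yielding a weak, heteroskedastic frequency-brevity correlation. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words = 500

# Zipf-like token frequencies: rank r gets weight 1/r.
freq = 1.0 / np.arange(1, n_words + 1)
freq /= freq.sum()

# Initial word lengths are drawn independently of frequency.
lengths = rng.integers(2, 12, size=n_words).astype(float)

# Every *use* of a word gives it the same small chance of losing one
# segment (floor of one segment): a brevity pressure blind to frequency.
p_shorten = 0.02
for _ in range(50):
    uses = rng.multinomial(2000, freq)      # frequent words are used more
    drops = rng.binomial(uses, p_shorten)   # per-use shortening events
    lengths = np.maximum(1.0, lengths - drops)

corr = float(np.corrcoef(np.log(freq), lengths)[0, 1])
spread_top = float(np.std(lengths[:50]))    # frequent words
spread_rare = float(np.std(lengths[300:]))  # rare words
print(corr, spread_top, spread_rare)
```

The correlation comes out negative but modest, and rare words show far more variance in length than frequent ones, including many short rare words, matching the two predictions the abstract highlights.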
How Do Scientists Think? Contributions Toward a Cognitive Science of Science
Nancy J. Nersessian
Topics in Cognitive Science (published 2024-12-10). doi:10.1111/tops.12777

Scientific thinking is one of the most creative expressions of human cognition. This paper discusses my research contributions to the cognitive science of science. I have advanced the position that data on the cognitive practices of scientists, drawn from extensive research into archival records of historical science or collected in extended ethnographic studies of contemporary science, can provide valuable insight into the nature of scientific cognition and its relation to cognition in ordinary contexts. I focus on my research contributions on analogy, model-based reasoning, and conceptual change, and on how scientists enhance their natural cognitive capacities by creating modeling environments that integrate cognitive, social, material, and cultural resources. I provide an outline of my trajectory from physicist to philosopher of science to hybrid cognitive scientist in my quest to understand the nature of scientific thinking.
Cognitive Models for Machine Theory of Mind
Christian Lebiere, Peter Pirolli, Matthew Johnson, Michael Martin, Donald Morrison
Topics in Cognitive Science (published 2024-12-01). doi:10.1111/tops.12773

Some of the required characteristics for a true machine theory of mind (MToM) include the ability to (1) reproduce the full diversity of human thought and behavior, (2) develop a personalized model of an individual from very limited data, and (3) provide an explanation for behavioral predictions grounded in the cognitive processes of the individual. We propose that a certain class of cognitive models provides an approach well suited to meeting those requirements. Being grounded in a mechanistic framework such as the ACT-R cognitive architecture naturally fulfills the third requirement by mapping behavior to cognitive mechanisms. Exploiting a modeling paradigm such as instance-based learning accounts for the first requirement by reflecting variations in individual experience as a diversity of behavior. Mechanisms such as knowledge tracing and model tracing allow a specific run of the cognitive model to be aligned with a given individual behavior trace, fulfilling the second requirement. We illustrate these principles with a cognitive model of decision-making in a search and rescue task in the Minecraft simulation environment. We demonstrate that cognitive models personalized to individual human players can provide the MToM capability to optimize artificial intelligence agents by diagnosing the underlying causes of observed human behavior, projecting the future effects of potential interventions, and managing the adaptive process of shaping human behavior. Examples of the inputs provided by such analytic cognitive agents include predictions of cognitive load, probability of error, estimates of player self-efficacy, and trust calibration. Finally, we discuss implications for future research and applications to collective human-machine intelligence.
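Instance-based learning, the modeling paradigm named in the abstract, can be sketched in a few lines. This is a generic illustration rather than the authors' ACT-R model of the Minecraft task: each simulated player stores (situation, action, payoff) instances, and an action's value is a similarity-weighted blend of the payoffs stored for it, so players with different histories choose differently in the same situation. The situations, actions, and payoffs here are invented.

```python
# Crude two-level similarity between situation labels.
def similarity(s1, s2):
    return 1.0 if s1 == s2 else 0.5

def blended_value(memory, situation, action):
    # Similarity-weighted mean payoff of stored instances of this action.
    num = den = 0.0
    for s, a, payoff in memory:
        if a != action:
            continue
        w = similarity(s, situation)
        num += w * payoff
        den += w
    return num / den if den else 0.0

def choose(memory, situation, actions):
    return max(actions, key=lambda a: blended_value(memory, situation, a))

# Two players with different experience histories in a rescue-like task.
player1 = [("smoke", "enter", -1.0), ("clear", "enter", 1.0), ("smoke", "wait", 0.2)]
player2 = [("smoke", "enter", 0.8), ("smoke", "enter", 0.6), ("clear", "wait", 0.0)]

print(choose(player1, "smoke", ["enter", "wait"]))  # cautious history -> "wait"
print(choose(player2, "smoke", ["enter", "wait"]))  # bold history -> "enter"
```

Aligning such a memory with an observed behavior trace, as knowledge tracing and model tracing do in the full framework, is what lets the model be personalized from limited data.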
Distributional Semantics: Meaning Through Culture and Interaction
Pablo Contreras Kallens, Morten H. Christiansen
Topics in Cognitive Science (published 2024-11-26). doi:10.1111/tops.12771

Mastering how to convey meanings using language is perhaps the main challenge facing any language learner. However, satisfactory accounts of how this is achieved, and even of what it is for a linguistic item to have meaning, are hard to come by. Nick Chater was one of the pioneers involved in the early development of one of the most successful methodologies within the cognitive science of language for discovering meaning: distributional semantics. In this article, we review this approach and discuss its successes and shortcomings in capturing semantic phenomena. In particular, we discuss what we dub the "distributional paradox": how can models that do not implement essential dimensions of human semantic processing, such as sensorimotor grounding, capture so many meaning-related phenomena? We conclude by providing a preliminary answer, arguing that distributional models capture the statistical scaffolding of human language acquisition that allows for communication, which, in line with Nick Chater's more recent ideas, has been shaped by the features of human cognition on the timescale of cultural evolution.
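The core distributional idea, that words acquire similar representations by occurring in similar contexts, fits in a few lines. The sketch below is a deliberately minimal count-based model on an invented six-sentence corpus, not any specific system discussed in the article.

```python
import numpy as np

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate fish",
    "the dog ate meat",
    "the senator proposed a bill",
    "the senator debated the bill",
]

# Word-by-word co-occurrence counts over sentence contexts.
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    words = sent.split()
    for w in words:
        for c in words:
            if w != c:
                M[idx[w], idx[c]] += 1

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_cat_dog = cosine(M[idx["cat"]], M[idx["dog"]])
sim_cat_senator = cosine(M[idx["cat"]], M[idx["senator"]])
print(sim_cat_dog, sim_cat_senator)
```

Even with no grounding at all, "cat" and "dog" come out more similar to each other than either is to "senator", purely because they share contexts; large distributional models scale this statistical scaffolding up by many orders of magnitude, which is what makes the paradox the authors discuss so striking.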
Processing Fluency and Predictive Processing: How the Predictive Mind Becomes Aware of its Cognitive Limitations
Philippe Servajean, Wanja Wiese
Topics in Cognitive Science (published 2024-11-25). doi:10.1111/tops.12776

Predictive processing is an influential theoretical framework for understanding human and animal cognition. In the context of predictive processing, learning is often reduced to optimizing the parameters of a generative model with a predefined structure. This is known as Bayesian parameter learning. However, to provide a comprehensive account of learning, one must also explain how the brain learns the structure of its generative model. This second kind of learning is known as structure learning, and it would involve true structural changes in generative models. The purpose of the current paper is to describe the processes involved upstream of these structural changes. To do this, we first highlight the remarkable compatibility between predictive processing and processing fluency theory. More precisely, we argue that predictive processing is able to account for all the main theoretical constructs associated with the notion of processing fluency (i.e., the fluency heuristic, naïve theory, the discrepancy-attribution hypothesis, absolute fluency, expected fluency, and relative fluency). We then use this predictive processing account of processing fluency to show how the brain could infer whether it needs a structural change for learning the causal regularities at play in the environment. Finally, we speculate on how this inference might indirectly trigger structural changes when necessary.
Moral Association Graph: A Cognitive Model for Automated Moral Inference
Aida Ramezani, Yang Xu
Topics in Cognitive Science (published 2024-11-25). doi:10.1111/tops.12774

Automated moral inference is an emerging topic of critical importance in artificial intelligence. The contemporary approach typically relies on language models to infer the moral relevance or moral properties of a concept. This approach demands complex parameterization and costly computation, and it tends to disconnect from existing psychological accounts of moralization. We present a simple cognitive model for moral inference, the Moral Association Graph (MAG), inspired by psychological work on moralization. Our model builds on a word association network for inferring moral relevance and draws on rich psychological data. We demonstrate that MAG performs competitively with state-of-the-art language models when evaluated against a comprehensive set of data for automated inference of moral norms, moral judgment of concepts, and in-context moral inference. We also show that our model yields interpretable outputs and is applicable to informing short-term moral change.
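The graph-based inference described here can be illustrated with spreading activation over a word-association network. The network, seed set, and weights below are entirely hypothetical, and the scoring rule is a simplification for illustration, not the published MAG model: a concept's moral relevance is the associative strength with which it reaches morally loaded seed words within two hops.

```python
# Hypothetical association strengths (word -> associate -> strength).
graph = {
    "stealing": {"crime": 0.6, "money": 0.3, "wrong": 0.4},
    "charity": {"giving": 0.5, "kindness": 0.4, "money": 0.2},
    "table": {"chair": 0.7, "wood": 0.5},
    "crime": {"wrong": 0.5, "punishment": 0.4},
    "kindness": {"good": 0.6},
}

MORAL_SEEDS = {"wrong", "good", "punishment", "kindness", "crime", "giving"}

def moral_relevance(word, depth=2, strength=1.0, visited=None):
    """Sum association strength reaching moral seed words within `depth` hops."""
    if visited is None:
        visited = {word}
    score = 0.0
    for neigh, w in graph.get(word, {}).items():
        if neigh in visited:
            continue
        s = strength * w
        if neigh in MORAL_SEEDS:
            score += s
        if depth > 1:
            score += moral_relevance(neigh, depth - 1, s, visited | {neigh})
    return score

for word in ("stealing", "charity", "table"):
    print(word, round(moral_relevance(word), 2))
```

"stealing" and "charity" reach moral seeds through strong associations and score high, while "table" scores zero, and every score is traceable to specific association paths, which is the kind of interpretability the abstract emphasizes.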
Metaphors and the Invention of Writing
Ludovica Ottaviano, Kathryn Kelley, Mattia Cartolano, Silvia Ferrara
Topics in Cognitive Science (published 2024-11-25). doi:10.1111/tops.12768

The foundation of ancient, invented writing systems lies in the predominant iconicity of their sign shapes. However, these shapes are often used not for their referential meaning but in a metaphorical way, whereby one entity stands for another. Metaphor, including its subcategories pars pro toto and metonymy, plays a crucial role in the formation of the earliest pristine invented scripts, yet this mechanism has been understudied from a cognitive, contextual, and comparative perspective. This article aims to address issues pertaining to the definition, development, and application of these mechanisms in the formation of the Mesopotamian, Egyptian, and Chinese scripts. We analyze the local cases of metaphor-in-action in primary inventions, focusing first on visual metaphors and, second, on the typical or idiosyncratic uses of metonyms.
Language Production and Prediction in a Parallel Activation Model
Martin J. Pickering, Kristof Strijkers
Topics in Cognitive Science (published 2024-11-22). doi:10.1111/tops.12775

Standard models of lexical production assume that speakers access representations of meaning, grammar, and different aspects of sound in a roughly sequential manner (whether or not they admit cascading or interactivity). In contrast, we review evidence for a parallel activation model in which these representations are accessed in parallel. According to this account, word learning involves the binding of the meaning, grammar, and sound of a word into a single representation. This representation is then activated as a whole during production, and so all linguistic components are available simultaneously. We then note that language comprehension involves extensive use of prediction and argue that comprehenders use production mechanisms to determine (roughly) what they would say next if they were speaking. So far, theories of prediction-by-production have assumed sequential lexical production. We therefore reinterpret such evidence in terms of parallel lexical production.