Mieke Sarah Slim, Peter Lauwers, Robert J. Hartsuiker
{"title":"重新审视语言中的逻辑:毕竟,每个全称量词的范围都是相似的","authors":"Mieke Sarah Slim , Peter Lauwers , Robert J. Hartsuiker","doi":"10.1016/j.jml.2025.104661","DOIUrl":null,"url":null,"abstract":"<div><div>A doubly-quantified sentence like <em>Every bear approached a tent</em> is ambiguous: Did every bear approach a different tent, or did they approach the same tent? These two interpretations are assumed to be mentally represented as logical representations, which specify how the different quantifiers are assigned scope with respect to each other. Based on a structural priming study, <span><span>Feiman and Snedeker (2016)</span></span> argued that logical representations capture quantifier-specific combinatorial properties (e.g., the specification of <em>every</em> differs from the specification of <em>each</em> in logical representations). We re-examined this conclusion by testing logical representation priming in Dutch. Across four experiments, we observed that priming of logical representations emerged if the same quantifiers are repeated in prime and target, but also if the prime and target contained different quantifiers. However, logical representation priming between quantifiers emerged less consistently than priming within the same quantifier. More specifically, our results suggest that priming between quantifiers emerges more robustly if the participant is presented with quantifier variation in the prime trials. When priming between quantifiers emerged, however, its strength was comparable to priming within the same quantifier. Therefore, we conclude that logical representations do not specify quantifier-specific biases in the assignment of scope.</div></div>","PeriodicalId":16493,"journal":{"name":"Journal of memory and language","volume":"144 ","pages":"Article 104661"},"PeriodicalIF":2.9000,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Revisiting the logic in language: The scope of each and every universal quantifier is alike after all\",\"authors\":\"Mieke Sarah Slim , Peter Lauwers , Robert J. Hartsuiker\",\"doi\":\"10.1016/j.jml.2025.104661\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>A doubly-quantified sentence like <em>Every bear approached a tent</em> is ambiguous: Did every bear approach a different tent, or did they approach the same tent? These two interpretations are assumed to be mentally represented as logical representations, which specify how the different quantifiers are assigned scope with respect to each other. Based on a structural priming study, <span><span>Feiman and Snedeker (2016)</span></span> argued that logical representations capture quantifier-specific combinatorial properties (e.g., the specification of <em>every</em> differs from the specification of <em>each</em> in logical representations). We re-examined this conclusion by testing logical representation priming in Dutch. Across four experiments, we observed that priming of logical representations emerged if the same quantifiers are repeated in prime and target, but also if the prime and target contained different quantifiers. However, logical representation priming between quantifiers emerged less consistently than priming within the same quantifier. More specifically, our results suggest that priming between quantifiers emerges more robustly if the participant is presented with quantifier variation in the prime trials. 
When priming between quantifiers emerged, however, its strength was comparable to priming within the same quantifier. Therefore, we conclude that logical representations do not specify quantifier-specific biases in the assignment of scope.</div></div>\",\"PeriodicalId\":16493,\"journal\":{\"name\":\"Journal of memory and language\",\"volume\":\"144 \",\"pages\":\"Article 104661\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2025-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of memory and language\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0749596X25000543\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of memory and language","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0749596X25000543","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LINGUISTICS","Score":null,"Total":0}
Revisiting the logic in language: The scope of each and every universal quantifier is alike after all
A doubly-quantified sentence like Every bear approached a tent is ambiguous: Did every bear approach a different tent, or did they all approach the same tent? These two interpretations are assumed to be mentally represented as logical representations, which specify how the different quantifiers are assigned scope with respect to each other. Based on a structural priming study, Feiman and Snedeker (2016) argued that logical representations capture quantifier-specific combinatorial properties (e.g., the specification of every differs from the specification of each in logical representations). We re-examined this conclusion by testing logical representation priming in Dutch. Across four experiments, we observed that priming of logical representations emerged not only when the same quantifier was repeated in prime and target, but also when the prime and target contained different quantifiers. However, logical representation priming between quantifiers emerged less consistently than priming within the same quantifier. More specifically, our results suggest that priming between quantifiers emerges more robustly if the participant is presented with quantifier variation in the prime trials. When priming between quantifiers emerged, however, its strength was comparable to priming within the same quantifier. Therefore, we conclude that logical representations do not specify quantifier-specific biases in the assignment of scope.
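As an illustration of the scope ambiguity at issue (a minimal sketch using our own predicate names bear, tent, and approached, not a formalism taken from the article), the two readings of Every bear approached a tent can be written in standard first-order logic, where the relative order of the quantifiers determines the interpretation:

\forall x\,\bigl(\mathrm{bear}(x) \rightarrow \exists y\,(\mathrm{tent}(y) \land \mathrm{approached}(x,y))\bigr) \quad \text{(surface scope: each bear may have approached a different tent)}

\exists y\,\bigl(\mathrm{tent}(y) \land \forall x\,(\mathrm{bear}(x) \rightarrow \mathrm{approached}(x,y))\bigr) \quad \text{(inverse scope: a single tent that every bear approached)}

On Feiman and Snedeker's (2016) account, the mental counterparts of such representations would additionally encode which universal quantifier (each vs. every) is involved; the findings reported here suggest they do not.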
Journal introduction:
Articles in the Journal of Memory and Language contribute to the formulation of scientific issues and theories in the areas of memory, language comprehension and production, and cognitive processes. Special emphasis is given to research articles that provide new theoretical insights based on a carefully laid empirical foundation. The journal generally favors articles that provide multiple experiments. In addition, significant theoretical papers without new experimental findings may be published.
The Journal of Memory and Language is a valuable tool for cognitive scientists, including psychologists, linguists, and others interested in memory and learning, language, reading, and speech.
Research Areas include:
• Topics that illuminate aspects of memory or language processing
• Linguistics
• Neuropsychology