Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics: Latest Publications

Predicting scalar diversity with context-driven uncertainty over alternatives
DOI: 10.18653/v1/2022.cmcl-1.8
Jennifer Hu, R. Levy, Sebastian Schuster
{"title":"Predicting scalar diversity with context-driven uncertainty over alternatives","authors":"Jennifer Hu, R. Levy, Sebastian Schuster","doi":"10.18653/v1/2022.cmcl-1.8","DOIUrl":"https://doi.org/10.18653/v1/2022.cmcl-1.8","url":null,"abstract":"Scalar implicature (SI) arises when a speaker uses an expression (e.g., “some”) that is semantically compatible with a logically stronger alternative on the same scale (e.g., “all”), leading the listener to infer that they did not intend to convey the stronger meaning. Prior work has demonstrated that SI rates are highly variable across scales, raising the question of what factors determine the SI strength for a particular scale. Here, we test the hypothesis that SI rates depend on the listener’s confidence in the underlying scale, which we operationalize as uncertainty over the distribution of possible alternatives conditioned on the context. We use a T5 model fine-tuned on a text infilling task to estimate this distribution. We find that scale uncertainty predicts human SI rates, measured as entropy over the sampled alternatives and over latent classes among alternatives in sentence embedding space. Furthermore, we do not find a significant effect of the surprisal of the strong scalemate. Our results suggest that pragmatic inferences depend on listeners’ context-driven uncertainty over alternatives.","PeriodicalId":428409,"journal":{"name":"Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114503822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
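A minimal sketch of the idea described in the abstract above (not the authors' code): mask the position of the scalar expression, sample infills from T5, and take the entropy of the sampled alternatives as a context-driven uncertainty estimate. The snippet uses the off-the-shelf `t5-base` checkpoint from Hugging Face Transformers rather than the paper's fine-tuned infilling model, and the `scale_uncertainty` helper, sampling settings, and example sentence are illustrative assumptions.

```python
import math
from collections import Counter

import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def scale_uncertainty(context: str, num_samples: int = 50) -> float:
    """Entropy over sampled fillers for the masked scalar position in `context`.

    `context` must contain T5's <extra_id_0> sentinel where the scalar
    expression (e.g., "some") originally appeared.
    """
    inputs = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,            # sample alternatives rather than taking the argmax
            top_p=0.95,
            max_new_tokens=8,
            num_return_sequences=num_samples,
        )
    fillers = [f.strip().lower()
               for f in tokenizer.batch_decode(outputs, skip_special_tokens=True)]
    counts = Counter(fillers)
    total = sum(counts.values())
    # Shannon entropy over the empirical distribution of distinct sampled fillers.
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(scale_uncertainty("The student solved <extra_id_0> of the problems before the deadline."))
```

Higher entropy here would correspond to greater listener uncertainty over the scale in that context; the paper additionally clusters the sampled alternatives in sentence embedding space, which this sketch omits.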
Using Grounded Word Representations to Study Theories of Lexical Concepts
Dylan Ebert, Ellie Pavlick
{"title":"Using Grounded Word Representations to Study Theories of Lexical Concepts","authors":"Dylan Ebert, Ellie Pavlick","doi":"10.18653/v1/W19-2918","DOIUrl":"https://doi.org/10.18653/v1/W19-2918","url":null,"abstract":"The fields of cognitive science and philosophy have proposed many different theories for how humans represent “concepts”. Multiple such theories are compatible with state-of-the-art NLP methods, and could in principle be operationalized using neural networks. We focus on two particularly prominent theories–Classical Theory and Prototype Theory–in the context of visually-grounded lexical representations. We compare when and how the behavior of models based on these theories differs in terms of categorization and entailment tasks. Our preliminary results suggest that Classical-based representations perform better for entailment and Prototype-based representations perform better for categorization. We discuss plans for additional experiments needed to confirm these initial observations.","PeriodicalId":428409,"journal":{"name":"Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130356598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
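To make the contrast concrete, here is a toy sketch (my own illustration, not the authors' models or data) of how the two theories could be operationalized over grounded word vectors: Prototype Theory as graded similarity to a category mean, Classical Theory as a conjunction of defining features. The vectors, feature names, and threshold are all hypothetical.

```python
import numpy as np

def prototype_categorize(vec, category_exemplars, threshold=0.5):
    """Prototype Theory: graded membership via cosine similarity to the category mean."""
    prototype = category_exemplars.mean(axis=0)
    cosine = np.dot(vec, prototype) / (np.linalg.norm(vec) * np.linalg.norm(prototype))
    return cosine >= threshold, cosine

def classical_categorize(features, defining_features):
    """Classical Theory: membership holds iff every defining feature is present."""
    return all(features.get(f, False) for f in defining_features)

# Hypothetical grounded vectors (e.g., derived from images) and feature annotations.
rng = np.random.default_rng(0)
bird_exemplars = rng.normal(size=(10, 32))                  # stand-in for grounded "bird" vectors
robin = bird_exemplars.mean(axis=0) + 0.1 * rng.normal(size=32)

print(prototype_categorize(robin, bird_exemplars))
print(classical_categorize({"has_feathers": True, "lays_eggs": True, "flies": True},
                           ["has_feathers", "lays_eggs"]))
```

The graded, similarity-based decision lends itself to categorization, while the all-or-nothing feature test maps more directly onto entailment-style judgments, which matches the direction of the paper's preliminary findings.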
Codenames as a Game of Co-occurrence Counting
DOI: 10.18653/v1/2022.cmcl-1.5
Réka Cserháti, István S. Kolláth, András Kicsi, Gábor Berend
{"title":"Codenames as a Game of Co-occurrence Counting","authors":"Réka Cserháti, István S. Kolláth, András Kicsi, Gábor Berend","doi":"10.18653/v1/2022.cmcl-1.5","DOIUrl":"https://doi.org/10.18653/v1/2022.cmcl-1.5","url":null,"abstract":"Codenames is a popular board game, in which knowledge and cooperation between players play an important role. The task of a player playing as a spymaster is to find words (clues) that a teammate finds related to as many of some given words as possible, but not to other specified words. This is a hard challenge even with today’s advanced language technology methods.In our study, we create spymaster agents using four types of relatedness measures that require only a raw text corpus to produce. These include newly introduced ones based on co-occurrences, which outperform FastText cosine similarity on gold standard relatedness data. To generate clues in Codenames, we combine relatedness measures with four different scoring functions, for two languages, English and Hungarian. For testing, we collect decisions of human guesser players in an online game, and our configurations outperform previous agents among methods using raw corpora only.","PeriodicalId":428409,"journal":{"name":"Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130817015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
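As a rough sketch of the general recipe (not the paper's actual relatedness measures or scoring functions), the snippet below derives a PMI-style relatedness score from sentence-level co-occurrence counts over a raw corpus and scores a candidate clue by its relatedness to the team's words against the best-related word to avoid. The function names, the floor value for unseen pairs, and the toy corpus are illustrative assumptions.

```python
import math
from collections import Counter
from itertools import combinations

def cooccurrence_stats(sentences):
    """Count word and unordered word-pair occurrences at the sentence level."""
    word_counts, pair_counts = Counter(), Counter()
    for tokens in sentences:
        types = set(tokens)
        word_counts.update(types)
        pair_counts.update(frozenset(p) for p in combinations(sorted(types), 2))
    return word_counts, pair_counts

def pmi(w1, w2, word_counts, pair_counts, n_sentences):
    """Pointwise mutual information as one possible co-occurrence relatedness measure."""
    joint = pair_counts[frozenset((w1, w2))] / n_sentences
    if joint == 0.0:
        return -10.0  # arbitrary floor for pairs that never co-occur
    marginal = (word_counts[w1] / n_sentences) * (word_counts[w2] / n_sentences)
    return math.log2(joint / marginal)

def clue_score(clue, team_words, avoid_words, word_counts, pair_counts, n_sentences):
    """Reward relatedness to the team's words; penalize the best-related avoid word."""
    gain = sum(pmi(clue, w, word_counts, pair_counts, n_sentences) for w in team_words)
    risk = max(pmi(clue, w, word_counts, pair_counts, n_sentences) for w in avoid_words)
    return gain - risk

# Toy corpus: each sentence is a list of tokens.
corpus = [
    ["the", "cat", "chased", "the", "mouse"],
    ["the", "dog", "chased", "the", "cat"],
    ["the", "mouse", "ate", "the", "cheese"],
    ["the", "keyboard", "and", "mouse", "sat", "on", "the", "desk"],
]
wc, pc = cooccurrence_stats(corpus)
print(clue_score("cat", ["mouse", "dog"], ["cheese"], wc, pc, len(corpus)))
```

A spymaster agent would evaluate every vocabulary word as a candidate clue with such a score and pick the argmax; the paper explores several relatedness measures and scoring functions in this role, for both English and Hungarian.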
Verb-Second Effect on Quantifier Scope Interpretation
A. Sayeed, Matthias Lindemann, Vera Demberg
{"title":"Verb-Second Effect on Quantifier Scope Interpretation","authors":"A. Sayeed, Matthias Lindemann, Vera Demberg","doi":"10.18653/v1/W19-2915","DOIUrl":"https://doi.org/10.18653/v1/W19-2915","url":null,"abstract":"Sentences like “Every child climbed a tree” have at least two interpretations depending on the precedence order of the universal quantifier and the indefinite. Previous experimental work explores the role that different mechanisms such as semantic reanalysis and world knowledge may have in enabling each interpretation. This paper discusses a web-based task that uses the verb-second characteristic of German main clauses to estimate the influence of word order variation over world knowledge.","PeriodicalId":428409,"journal":{"name":"Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116322673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2