Latest articles in J. Lang. Model.

On regular copying languages
J. Lang. Model. Pub Date: 2023-07-21 DOI: 10.15398/jlm.v11i1.342
Yang Wang, Tim Hunter
Abstract: This paper proposes a formal model of regular languages enriched with unbounded copying. We augment finite-state machinery with the ability to recognize copied strings by adding an unbounded memory buffer with a restricted form of first-in-first-out storage. The newly introduced computational device, finite-state buffered machines (FS-BMs), characterizes the class of regular languages and languages derived from them through a primitive copying operation. We name this language class regular copying languages (RCLs). We prove a pumping lemma and examine the closure properties of this language class. As suggested by previous literature (Gazdar and Pullum 1985, p. 278), regular copying languages should approach the correct characterization of natural language word sets.
Citations: 0
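The FIFO-buffer idea behind FS-BMs is easy to see in miniature. The sketch below is a toy Python recognizer for the copy language {ww : w ∈ {a, b}*}, not the paper's formal construction: it simulates the machine's nondeterministic switch from a buffering phase to a matching phase by trying every split point.

```python
from collections import deque

def accepts_copy(s: str) -> bool:
    """Toy recognizer for the copy language {ww : w in {a, b}*}.

    Mimics the FS-BM idea: in a buffering phase, consumed symbols are
    appended to a FIFO queue; in a matching phase, each remaining input
    symbol must equal (and remove) the symbol at the front of the queue.
    The machine's nondeterministic choice of when to switch phases is
    simulated here by trying every split point.
    """
    for mid in range(len(s) + 1):
        buffer = deque(s[:mid])      # buffering phase fills the queue
        rest = s[mid:]
        if len(rest) == len(buffer) and all(
                c == buffer.popleft() for c in rest):  # matching phase
            return True
    return False

assert accepts_copy("abab")       # w = "ab"
assert accepts_copy("")           # w = "" (empty copy)
assert not accepts_copy("abba")   # a palindrome, not a copy
assert not accepts_copy("aba")    # odd length can never be ww
```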
Evaluating syntactic proposals using minimalist grammars and minimum description length
J. Lang. Model. Pub Date: 2023-07-21 DOI: 10.15398/jlm.v11i1.334
Marina Ermolaeva
Abstract: Many patterns found in natural language syntax have multiple possible explanations or structural descriptions. Even within the currently dominant Minimalist framework (Chomsky 1995, 2000), it is not uncommon to encounter multiple types of analyses for the same phenomenon proposed in the literature. A natural question, then, is whether one could evaluate and compare syntactic proposals from a quantitative point of view. In this paper, we show how an evaluation measure inspired by the minimum description length principle (Rissanen 1978) can be used to compare accounts of syntactic phenomena implemented as minimalist grammars (Stabler 1997), and how arguments for and against this kind of analysis translate into quantitative differences.
Citations: 0
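To make the evaluation measure concrete: under the minimum description length principle, the preferred grammar minimizes the bits needed to encode the grammar itself plus the bits needed to encode the data given the grammar. The sketch below is a hypothetical toy with made-up numbers, not Ermolaeva's actual encoding scheme for minimalist grammars.

```python
import math

def mdl_score(grammar_bits: float, derivation_probs: list[float]) -> float:
    """Two-part MDL cost: bits to encode the grammar, plus bits to
    encode each sentence's derivation under that grammar (-log2 of
    the derivation's probability). Lower is better."""
    data_bits = sum(-math.log2(p) for p in derivation_probs)
    return grammar_bits + data_bits

# Hypothetical comparison: a compact grammar that assigns the corpus
# lower probability vs. a larger grammar with tighter derivations.
small_grammar = mdl_score(grammar_bits=120.0,
                          derivation_probs=[0.01, 0.02, 0.01])
large_grammar = mdl_score(grammar_bits=400.0,
                          derivation_probs=[0.10, 0.15, 0.08])
print(f"small: {small_grammar:.1f} bits, large: {large_grammar:.1f} bits")
```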
Simplicity and learning to distinguish arguments from modifiers
J. Lang. Model. Pub Date: 2023-04-12 DOI: 10.15398/jlm.v10i2.263
Leon Bergen, E. Gibson, T. O'Donnell
Abstract: We present a learnability analysis of the argument-modifier distinction, asking whether there is information in the distribution of English constituents that could allow learners to identify which constituents are arguments and which are modifiers. We first develop a general description of some of the ways in which arguments and modifiers differ in distribution. We then identify two models from the literature that can capture these differences, which we call the argument-only model and the argument-modifier model. We employ these models using a common learning framework based on two simplicity biases which trade off against one another. The first bias favors a small lexicon with highly reusable lexical items; the second, opposing, bias favors simple derivations of individual forms, i.e., those using small numbers of lexical items.

Our first empirical study shows that the argument-modifier model is able to recover the argument-modifier status of many individual constituents when evaluated against a gold standard. This provides evidence in favor of our general account of the distributional differences between arguments and modifiers. It also suggests a kind of lower bound on the amount of information that a suitably equipped learner could use to identify which phrases are arguments or modifiers.

We then present a series of analyses investigating how and why the argument-modifier model is able to recover the argument-modifier status of some constituents. In particular, we show that the argument-modifier model is able to provide a simpler description of the input corpus than the argument-only model, both in terms of lexicon size and in terms of the complexity of individual derivations. Intuitively, the argument-modifier model is able to do this because it can ignore spurious modifier structure when learning the lexicon. These analyses further support our general account of the differences between arguments and modifiers, as well as our simplicity-based approach to learning.
Citations: 0
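The trade-off between the two simplicity biases can be illustrated with a toy cost function. Everything below (the cost definitions, the mini-corpus, the weight alpha) is a hypothetical simplification, not the paper's model; it only shows why reusable lexical items can beat whole-form memorization.

```python
def lexicon_cost(lexicon: set[str]) -> int:
    """Bias 1: prefer a small lexicon (here, total symbols stored)."""
    return sum(len(item) for item in lexicon)

def derivation_cost(derivations: list[list[str]]) -> int:
    """Bias 2: prefer simple derivations (few lexical items per form)."""
    return sum(len(d) for d in derivations)

def total_cost(lexicon, derivations, alpha=1.0):
    # The biases oppose each other: memorizing whole forms shrinks
    # derivations but bloats the lexicon, and vice versa.
    return lexicon_cost(lexicon) + alpha * derivation_cost(derivations)

# Hypothetical mini-corpus: "ate quickly", "ate", "ran quickly".
memorize = ({"atequickly", "ate", "ranquickly"},
            [["atequickly"], ["ate"], ["ranquickly"]])
compose  = ({"ate", "ran", "quickly"},
            [["ate", "quickly"], ["ate"], ["ran", "quickly"]])
print(total_cost(*memorize), total_cost(*compose))
# The compositional lexicon wins: reuse of "quickly" pays off.
```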
Idiosyncratic frequency as a measure of derivation vs. inflection
J. Lang. Model. Pub Date: 2023-03-07 DOI: 10.15398/jlm.v10i2.301
Maria Copot, Timothee Mickus, Olivier Bonami
Abstract: There is ongoing discussion about how to conceptualize the nature of the distinction between inflection and derivation. A common approach relies on qualitative differences in the semantic relationship between inflectionally versus derivationally related words: inflection yields ways to discuss the same concept in different syntactic contexts, while derivation gives rise to words for related concepts. This differential can be expected to manifest in the predictability of word frequency between words that are related derivationally or inflectionally: predicting the token frequency of a word based on information about its base form or about related words should be easier when the two words are in an inflectional relationship, rather than a derivational one. We compare prediction error magnitude for statistical models of token frequency based on distributional and frequency information of inflectionally or derivationally related words in French. The results conform to expectations: it is easier to predict the frequency of a word from properties of an inflectionally related word than from those of a derivationally related word. Prediction error provides a quantitative, continuous method to explore differences between individual processes and differences yielded by employing different predicting information, which in turn can be used to draw conclusions about the nature and manifestation of the inflection–derivation distinction.
Citations: 2
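The paper's logic can be mimicked with a back-of-the-envelope regression: fit the log token frequency of a related form from the log frequency of its base, and compare prediction errors across relation types. The sketch below uses invented counts and plain least squares, not the authors' French data or their actual statistical models.

```python
import numpy as np

def prediction_error(base_freqs, target_freqs):
    """Fit log(target frequency) from log(base frequency) by least
    squares and return the RMSE of the residuals: a proxy for how
    idiosyncratic the targets' frequencies are given their bases."""
    x = np.log(np.asarray(base_freqs, dtype=float))
    y = np.log(np.asarray(target_freqs, dtype=float))
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical counts: inflected forms track their base closely,
# derived forms scatter more, so their prediction error is larger.
inflection_err = prediction_error([100, 500, 2000], [40, 210, 790])
derivation_err = prediction_error([100, 500, 2000], [3, 900, 15])
print(inflection_err, derivation_err)
```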
Implementing Natural Language Inference for comparatives
J. Lang. Model. Pub Date: 2023-01-05 DOI: 10.15398/jlm.v10i1.294
Izumi Haruta, K. Mineshima, D. Bekki
Abstract: This paper presents a computational framework for Natural Language Inference (NLI) using logic-based semantic representations and theorem-proving. We focus on logical inferences with comparatives and other related constructions in English, which are known for their structural complexity and difficulty in performing efficient reasoning. Using the so-called A-not-A analysis of comparatives, we implement a fully automated system to map various comparative constructions to semantic representations in typed first-order logic via Combinatory Categorial Grammar parsers and to prove entailment relations via a theorem prover. We evaluate the system on a variety of NLI benchmarks that contain challenging inferences, in comparison with other recent logic-based systems and neural NLI models.
Citations: 0
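The A-not-A analysis treats "x is taller than y" as: there is some degree d such that x is tall to degree d and y is not. Over a finite model those truth conditions can be checked directly. The sketch below is a toy model checker with made-up heights, illustrating only the truth conditions; the paper itself maps sentences to typed first-order logic via CCG parsers and proves entailments with a theorem prover.

```python
# Degree semantics: tall(x, d) holds iff x's height is at least d.
heights = {"ann": 180, "bob": 170, "cal": 160}  # hypothetical model

def tall(x: str, d: float) -> bool:
    return heights[x] >= d

def taller(x: str, y: str) -> bool:
    """A-not-A truth conditions: some degree d with tall(x, d) and
    not tall(y, d). In this finite model it suffices to check the
    attested heights as candidate degrees."""
    degrees = set(heights.values())
    return any(tall(x, d) and not tall(y, d) for d in degrees)

assert taller("ann", "bob")
assert taller("bob", "cal")
assert taller("ann", "cal")      # transitivity falls out of the semantics
assert not taller("cal", "ann")
```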
Neural heuristics for scaling constructional language processing
J. Lang. Model. Pub Date: 2022-12-28 DOI: 10.15398/jlm.v10i2.318
Paul Van Eecke, Jens Nevens, Katrien Beuls
Abstract: Constructionist approaches to language make use of form-meaning pairings, called constructions, to capture all linguistic knowledge that is necessary for comprehending and producing natural language expressions. Language processing then consists in combining the constructions of a grammar in such a way that they solve a given language comprehension or production problem. Finding such an adequate sequence of constructions constitutes a search problem that is combinatorial in nature and becomes intractable as grammars increase in size. In this paper, we introduce a neural methodology for learning heuristics that substantially optimise the search processes involved in constructional language processing. We validate the methodology in a case study for the CLEVR benchmark dataset. We show that our novel methodology outperforms state-of-the-art techniques in terms of size of the search space and time of computation, most markedly in the production direction. The results reported in this paper have the potential to overcome the major efficiency obstacle that hinders current efforts in learning large-scale construction grammars, thereby contributing to the development of scalable constructional language processing systems.
Citations: 2
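The search problem these heuristics address can be pictured as best-first search whose priority function is a trained neural scorer. The sketch below is a generic illustration under that assumption: `score`, `expand`, and the toy demo are inventions for exposition, not the authors' system.

```python
import heapq
from itertools import count
from typing import Callable

def heuristic_search(initial, expand: Callable, is_goal: Callable,
                     score: Callable[[object], float], budget: int = 10_000):
    """Best-first search over construction-application states.

    `score` stands in for a trained neural model estimating how
    promising a partial analysis is (lower = more promising);
    `expand` yields the states reachable by applying one more
    construction. A good learned score prunes the combinatorial
    search space that makes naive constructional processing
    intractable for large grammars.
    """
    tie = count()  # tie-breaker so heapq never has to compare states
    frontier = [(score(initial), next(tie), initial)]
    for _ in range(budget):
        if not frontier:
            return None
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in expand(state):
            heapq.heappush(frontier, (score(nxt), next(tie), nxt))
    return None

# Toy demo: assemble the string "abc" one symbol at a time, scoring
# states by how much is still missing (a stand-in for the neural model).
goal = "abc"
result = heuristic_search(
    initial="",
    expand=lambda s: [s + c for c in "abc"] if len(s) < len(goal) else [],
    is_goal=lambda s: s == goal,
    score=lambda s: len(goal) - len(s) if goal.startswith(s) else float("inf"),
)
print(result)  # "abc"
```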
20 years of the Grammar Matrix: cross-linguistic hypothesis testing of increasingly complex interactions
J. Lang. Model. Pub Date: 2022-10-20 DOI: 10.15398/jlm.v10i1.292
Olga Zamaraeva, T. Trimble, Kristen Howell, Michael Wayne Goodman, Antske Fokkens, Guy Edward Toh Emerson, Christian M. Curtis, Emily M. Bender
Abstract: The Grammar Matrix project is a meta-grammar engineering framework expressed in Head-driven Phrase Structure Grammar (HPSG) and Minimal Recursion Semantics (MRS). It automates grammar implementation and is thus a tool and a resource for linguistic hypothesis testing at scale. In this paper, we summarize how the Grammar Matrix grew in the last decade and describe how new additions to the system have made it possible to study interactions between analyses, both monolingually and cross-linguistically, at new levels of complexity.
Citations: 3
Introduction to the special section on the interaction between formal and computational linguistics
J. Lang. Model. Pub Date: 2022-10-20 DOI: 10.15398/jlm.v10i1.325
Timothée Bernard, G. Winterstein
Abstract: While computational linguistics is historically rooted in formal linguistics, it might seem that the distance between the two fields has only grown as each has evolved. Still, whether or not this impression is correct, not all links have been cut, and new ones have appeared. Indeed, while we are currently witnessing a growing interest within formal linguistics in both explaining the remarkable successes of neural language models and uncovering their limitations, one should not forget the contribution to theoretical linguistics made, for example, by the computational implementation of grammatical formalisms. And while neural methods have recently received the lion's share of public attention, interpretable models based on symbolic methods are still relevant and widely used in the natural language processing industry. The links between formal and computational linguistics have long been a subject of discussion. At the 2009 European Meeting of the Association for Computational Linguistics, a workshop entitled "Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?" was organised. This workshop led, a couple of years later, to the publication of the sixth volume of Linguistic Issues in Language Technology (Baldwin and Kordoni 2011). At the centre of this publication were discussions about …
Citations: 0
Learning Reduplication with a Neural Network that Lacks Explicit Variables
J. Lang. Model. Pub Date: 2022-03-31 DOI: 10.15398/jlm.v10i1.274
B. Prickett, Aaron Traylor, Joe Pater
Abstract: Reduplicative linguistic patterns have been used as evidence for explicit algebraic variables in models of cognition. Here, we show that a variable-free neural network can model these patterns in a way that predicts observed human behavior. Specifically, we successfully simulate the three experiments presented by Marcus et al. (1999), as well as Endress et al.'s (2007) partial replication of one of those experiments. We then explore the model's ability to generalize reduplicative mappings to different kinds of novel inputs. Using Berent's (2013) scopes of generalization as a metric, we claim that the model matches the scope of generalization that has been observed in humans. We argue that these results challenge past claims about the necessity of symbolic variables in models of cognition.
Citations: 2
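As a deliberately minimal stand-in for the paper's network (which it is not), the sketch below fits a single linear layer by least squares on an ABB pattern over random syllable vectors and tests it on novel syllables. It only illustrates the flavor of the claim: a learner with no built-in copy variable can still generalize an identity-based pattern to unseen items. All names, dimensions, and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Syllables as random vectors (no symbolic identity for the model to use).
def syllable(dim=16):
    return rng.normal(size=dim)

# Training data follow an ABB pattern: given syllables (A, B), the
# continuation is B. Inputs concatenate A and B.
train = [(syllable(), syllable()) for _ in range(200)]
X = np.array([np.concatenate([a, b]) for a, b in train])
Y = np.array([b for _, b in train])

# One linear layer fit by least squares: a minimal "network" with
# no explicit variables or copy operation.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Test on entirely novel syllables: does the learned map generalize
# the reduplicative pattern? (Cosine near 1.0 means it predicts B.)
a_new, b_new = syllable(), syllable()
pred = np.concatenate([a_new, b_new]) @ W
print("cosine to correct continuation (B):",
      pred @ b_new / (np.linalg.norm(pred) * np.linalg.norm(b_new)))
```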
Extraposed relative clauses in Role and Reference Grammar. An analysis using Tree Wrapping Grammars
J. Lang. Model. Pub Date: 2022-02-17 DOI: 10.15398/jlm.v9i2.255
Laura Kallmeyer
Abstract: This paper proposes an analysis of extraposed relative clauses in the framework of Role and Reference Grammar (RRG), adopting its formalization as a tree rewriting grammar, specifically as a Tree Wrapping Grammar (TWG). Extraposed relative clauses are a puzzle: the link to the antecedent noun can be rather non-local, yet it seems appropriate to model it as a syntactic dependency rather than a purely anaphoric relation. Moreover, certain types of determiners require their NP to be modified by a (possibly extraposed) relative clause, and any comprehensive framework should account for this. We show that the tree wrapping operation of TWG, which is conventionally used to fill argument slots out of which some elements have been extracted, can be used to model extraposed relative clauses. The analysis accounts for the non-locality of the phenomenon while capturing the link to the antecedent NP in a local way (i.e., within a single elementary tree).
Citations: 0