Strong Generative Capacity, Weak Generative Capacity, and Modern Linguistic Theories

R. Berwick
Computational Linguistics, July 1984
DOI: 10.5555/970170.970175
Citations: 22

Abstract

What makes a language a natural language? A longstanding tradition in generative grammar holds that a language is natural just in case it is learnable under a constellation of auxiliary assumptions about the input evidence available to children. Yet another approach seeks some key mathematical property that distinguishes the natural languages from all possible symbol systems. With some exceptions (for example, Chomsky's demonstration that a complete characterization of our grammatical knowledge lies beyond the power of finite-state languages), the mathematical approach has not provided clear-cut results. For example, for a variety of reasons we cannot say that the predicate "is context-free" characterizes all and only the natural languages. Still another use of mathematical analysis in linguistics has been to diagnose a proposed grammatical formalism as too powerful (allowing too many grammars or languages) rather than as too weak. Such a diagnosis was supposed by some to follow from Peters and Ritchie's demonstration that the theory of transformational grammar described in Chomsky's Aspects of the Theory of Syntax could specify grammars to generate any recursively enumerable set. For some, this demonstration marked a watershed in the formal analysis of transformational grammar. One general reaction (not prompted by the Peters and Ritchie result alone) was to turn to other theories of grammar designed explicitly to avoid the problems of a theory that could specify an arbitrary Turing machine computation. The proposals for generalized phrase structure grammar (GPSG) and lexical-functional grammar (LFG) have explicitly emphasized this point. GPSG aims for grammars that generate context-free languages (though there is some recent wavering on this point; see Pullum 1984); LFG, for languages that are at worst context-sensitive. Whatever the merits of the arguments for this restriction in terms of weak generative capacity (and they are far from obvious, as discussed at length in Berwick and Weinberg 1983), one point remains: the switch was prompted by criticism of the nearly two-decades-old Aspects theory. Much has changed in transformational grammar in twenty years. Modern transformational grammars no longer contain swarms of individual rules such as Passive, Raising, or Dative. The modern government-binding (GB) theory does not reconstruct a "deep structure", does not contain powerful deletion rules, and has introduced a whole host of new constraints. Given these sweeping changes, it would seem appropriate, then, to re-examine the Peters and Ritchie result and compare the power of the newer GB-style theories to these other current linguistic theories. That is the aim of this paper. The basic points to be made are these:
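To make the abstract's central distinction concrete, the sketch below (purely illustrative; the grammars G1 and G2 and the helpers derive and strings_upto are hypothetical names, not anything from the paper) contrasts weak and strong generative capacity: two context-free grammars that generate exactly the same string set (weak equivalence) while assigning those strings different structural descriptions (strong inequivalence).

```python
# Illustrative sketch, not from Berwick's paper. Two context-free grammars
# are *weakly* equivalent if they generate the same string set; they are
# *strongly* equivalent only if they also assign the same structural
# descriptions. G1 and G2 both generate {a^n b^n : n >= 1}, so they are
# weakly equivalent, yet they assign different trees to the same strings.

G1 = {"S": [["a", "S", "b"], ["a", "b"]]}             # center-embedding
G2 = {"S": [["a", "T"]], "T": [["S", "b"], ["b"]]}    # different branching

def derive(grammar, symbol, depth):
    """Yield (string, tree) pairs derivable from `symbol` in <= `depth` expansions."""
    if symbol not in grammar:          # terminal symbol: yields itself
        yield symbol, symbol
        return
    if depth == 0:                     # nonterminal with no expansion budget left
        return
    for rhs in grammar[symbol]:
        partial = [("", ())]           # (string so far, subtrees so far)
        for sym in rhs:
            partial = [(s + s2, subs + (t2,))
                       for s, subs in partial
                       for s2, t2 in derive(grammar, sym, depth - 1)]
        for s, subs in partial:
            yield s, (symbol, *subs)

def strings_upto(grammar, depth, maxlen):
    return {s for s, _ in derive(grammar, "S", depth) if len(s) <= maxlen}

# Weak equivalence (checked up to string length 6): identical string sets.
print(strings_upto(G1, 8, 6) == strings_upto(G2, 8, 6))   # True

# Strong inequivalence: different trees for the same string "aabb".
print({t for s, t in derive(G1, "S", 8) if s == "aabb"})
print({t for s, t in derive(G2, "S", 8) if s == "aabb"})
```

The same string set also underlies Chomsky's finite-state argument mentioned above: no finite-state device generates exactly {a^n b^n}, since recognizing it requires unbounded counting.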