Evaluating the Cognitively-Related Productivity of a Universal Dependency Parser

Sagar Indurkhya, R. Berwick
DOI: 10.1109/ICCICC53683.2021.9811322
Published in: 2021 IEEE 20th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)
Publication date: 2021-10-29
Citations: 1

Abstract

A key goal of cognitive computing is to correctly model human language. Recently, much has been made of the ability of deep neural nets trained on huge datasets to precisely parse sentences. But do these systems truly incorporate human knowledge of language? In this paper we apply a standard linguistic methodology, transformational analysis, to determine whether this claim is accurate. On this view, if a deep net parser operates properly on one kind of sentence, it should also work correctly on its transformed counterpart. Applying this to a standard set of statement-question transformed sentence pairs, we find that a state-of-the-art neural network system does not replicate human behavior and makes numerous errors. We suggest that this kind of test is more relevant for highlighting what deep neural networks can and cannot do with respect to human language.
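The consistency test described in the abstract can be sketched in a few lines: a parser that assigns the right dependency relations to a declarative sentence should assign the same core relations to its question-transformed counterpart. The snippet below is a minimal illustration of that check, not the authors' evaluation code: the `parse` function is a hypothetical stand-in that returns hand-written (head, relation, dependent) triples where a real system would run a neural UD parser, and the example sentences are invented.

```python
# Illustrative sketch of the statement/question transformational-analysis test.
# `parse` is a mock standing in for a real dependency parser: it returns the
# set of (head, relation, dependent) triples for a hand-coded example pair.

def parse(sentence):
    # Hypothetical parser output for one statement-question pair.
    gold = {
        "the man is tall": {("tall", "nsubj", "man"),
                            ("tall", "cop", "is"),
                            ("man", "det", "the")},
        "is the man tall": {("tall", "nsubj", "man"),
                            ("tall", "cop", "is"),
                            ("man", "det", "the")},
    }
    return gold[sentence]

def consistent_under_transformation(statement, question):
    """The parser passes the test on this pair if the core dependency
    relations (which ignore surface word order) are identical across the
    statement and its question-transformed counterpart."""
    return parse(statement) == parse(question)

print(consistent_under_transformation("the man is tall", "is the man tall"))
```

A real evaluation would replace `parse` with an actual UD parser and run it over a standard set of transformed pairs, counting pairs where the relation sets diverge; the paper reports that a state-of-the-art system fails this check on many pairs.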