Tracking Naturalistic Linguistic Predictions with Deep Neural Language Models

Micha Heilbron, Benedikt V. Ehinger, P. Hagoort, F. P. Lange
{"title":"用深度神经语言模型跟踪自然语言预测","authors":"Micha Heilbron, Benedikt V. Ehinger, P. Hagoort, F. P. Lange","doi":"10.32470/CCN.2019.1096-0","DOIUrl":null,"url":null,"abstract":"Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being `prediction encouraging', potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g. continuous narrative). However, these studies so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word's predictability. Here, we extend this approach using a state-of-the-art neural language model that can take roughly 500 times longer linguistic contexts into account. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400. These results show that predictability effects in language are not a side-effect of simple designs, and demonstrate the practical use of recent advances in AI for the cognitive neuroscience of language.","PeriodicalId":281121,"journal":{"name":"2019 Conference on Cognitive Computational Neuroscience","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"Tracking Naturalistic Linguistic Predictions with Deep Neural Language Models\",\"authors\":\"Micha Heilbron, Benedikt V. Ehinger, P. Hagoort, F. P. Lange\",\"doi\":\"10.32470/CCN.2019.1096-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being `prediction encouraging', potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g. continuous narrative). However, these studies so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word's predictability. Here, we extend this approach using a state-of-the-art neural language model that can take roughly 500 times longer linguistic contexts into account. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400. 
These results show that predictability effects in language are not a side-effect of simple designs, and demonstrate the practical use of recent advances in AI for the cognitive neuroscience of language.\",\"PeriodicalId\":281121,\"journal\":{\"name\":\"2019 Conference on Cognitive Computational Neuroscience\",\"volume\":\"7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 Conference on Cognitive Computational Neuroscience\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.32470/CCN.2019.1096-0\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Conference on Cognitive Computational Neuroscience","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32470/CCN.2019.1096-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14

Abstract

Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being `prediction encouraging', potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g. continuous narrative). However, these studies so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word's predictability. Here, we extend this approach using a state-of-the-art neural language model that can take roughly 500 times longer linguistic contexts into account. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400. These results show that predictability effects in language are not a side-effect of simple designs, and demonstrate the practical use of recent advances in AI for the cognitive neuroscience of language.
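The abstract describes comparing word-by-word predictability estimates from a large-context neural language model against EEG responses to continuous narrative. As a rough illustration (not the authors' code), the sketch below computes per-word surprisal, -log2 p(word | context), from a pretrained autoregressive language model; the specific model (GPT-2 via Hugging Face transformers) and the whitespace word segmentation are assumptions made here for illustration, not details taken from the paper.

```python
# Minimal sketch: per-word surprisal from a pretrained autoregressive LM.
# Assumes GPT-2 via Hugging Face transformers; the paper only specifies a
# "state-of-the-art neural language model", so the model choice is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

LN2 = torch.log(torch.tensor(2.0)).item()  # convert nats to bits


def word_surprisals(text: str):
    """Return (word, surprisal-in-bits) pairs for a whitespace-split text."""
    words = text.split()
    surprisals = []
    context_ids = [tokenizer.bos_token_id]  # condition the first word on BOS only
    for word in words:
        # Leading space so GPT-2's BPE treats this as a new word, not a continuation.
        word_ids = tokenizer.encode(" " + word)
        total_bits = 0.0
        for wid in word_ids:
            with torch.no_grad():
                logits = model(torch.tensor([context_ids])).logits
            log_probs = torch.log_softmax(logits[0, -1], dim=-1)
            # Word surprisal = sum of sub-token surprisals (chain rule).
            total_bits += -log_probs[wid].item() / LN2
            context_ids.append(wid)
        surprisals.append((word, total_bits))
    return surprisals


print(word_surprisals("The children went outside to play"))
```

The resulting surprisal time series (one value per word, aligned to word onsets in the audio) can then be regressed against the EEG signal, which is the general model-based approach the abstract contrasts with categorical expected/unexpected designs.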