Neural Language Taskonomy: Which NLP Tasks are the most Predictive of fMRI Brain Activity?

S. Oota, Jashn Arora, Veeral Agarwal, Mounika Marreddy, Manish Gupta, Raju Surampudi Bapi
{"title":"Neural Language Taskonomy: Which NLP Tasks are the most Predictive of fMRI Brain Activity?","authors":"S. Oota, Jashn Arora, Veeral Agarwal, Mounika Marreddy, Manish Gupta, Raju Surampudi Bapi","doi":"10.18653/v1/2022.naacl-main.235","DOIUrl":null,"url":null,"abstract":"Several popular Transformer based language models have been found to be successful for text-driven brain encoding. However, existing literature leverages only pretrained text Transformer models and has not explored the efficacy of task-specific learned Transformer representations. In this work, we explore transfer learning from representations learned for ten popular natural language processing tasks (two syntactic and eight semantic) for predicting brain responses from two diverse datasets: Pereira (subjects reading sentences from paragraphs) and Narratives (subjects listening to the spoken stories). Encoding models based on task features are used to predict activity in different regions across the whole brain. Features from coreference resolution, NER, and shallow syntax parsing explain greater variance for the reading activity. On the other hand, for the listening activity, tasks such as paraphrase generation, summarization, and natural language inference show better encoding performance. Experiments across all 10 task representations provide the following cognitive insights: (i) language left hemisphere has higher predictive brain activity versus language right hemisphere, (ii) posterior medial cortex, temporo-parieto-occipital junction, dorsal frontal lobe have higher correlation versus early auditory and auditory association cortex, (iii) syntactic and semantic tasks display a good predictive performance across brain regions for reading and listening stimuli resp.","PeriodicalId":382084,"journal":{"name":"North American Chapter of the Association for Computational Linguistics","volume":"106 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"North American Chapter of the Association for Computational Linguistics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2022.naacl-main.235","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 21

Abstract

Several popular Transformer-based language models have been found to be successful for text-driven brain encoding. However, the existing literature leverages only pretrained text Transformer models and has not explored the efficacy of task-specific learned Transformer representations. In this work, we explore transfer learning from representations learned for ten popular natural language processing tasks (two syntactic and eight semantic) for predicting brain responses on two diverse datasets: Pereira (subjects reading sentences from paragraphs) and Narratives (subjects listening to spoken stories). Encoding models based on task features are used to predict activity in different regions across the whole brain. Features from coreference resolution, NER, and shallow syntax parsing explain greater variance for the reading activity. On the other hand, for the listening activity, tasks such as paraphrase generation, summarization, and natural language inference show better encoding performance. Experiments across all ten task representations provide the following cognitive insights: (i) the left-hemisphere language network shows higher predictive brain activity than the right-hemisphere language network, (ii) the posterior medial cortex, temporo-parieto-occipital junction, and dorsal frontal lobe show higher correlation than the early auditory and auditory association cortex, and (iii) syntactic and semantic tasks display good predictive performance across brain regions for reading and listening stimuli, respectively.
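The encoding-model setup the abstract describes is standard enough to sketch. The snippet below is a minimal illustration, under assumptions not stated in the abstract itself: cross-validated ridge regression mapping stimulus features to voxel responses, scored by per-voxel Pearson correlation between predicted and observed activity. The random X and Y matrices stand in for task-specific Transformer representations and fMRI recordings; all names and dimensions are illustrative, not the authors' actual configuration.

```python
# Minimal sketch of a task-feature brain-encoding pipeline.
# Assumption: ridge-regression encoding models scored by voxelwise
# Pearson correlation, as is common in this literature; the paper's
# exact setup may differ. Data here are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 768, 500  # hypothetical sizes

# X: one row per stimulus. In practice these would be pooled hidden
# states from a Transformer fine-tuned on a task such as NER or NLI.
X = rng.standard_normal((n_stimuli, n_features))
# Y: fMRI responses, one column per voxel.
Y = rng.standard_normal((n_stimuli, n_voxels))

kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = np.zeros(n_voxels)
for train_idx, test_idx in kf.split(X):
    # RidgeCV picks the regularization strength per fold and handles
    # multi-output targets (all voxels fit jointly).
    model = RidgeCV(alphas=np.logspace(-1, 4, 10))
    model.fit(X[train_idx], Y[train_idx])
    Y_hat = model.predict(X[test_idx])
    # Per-voxel correlation between predicted and observed activity,
    # averaged over folds.
    for v in range(n_voxels):
        scores[v] += pearsonr(Y_hat[:, v], Y[test_idx, v])[0] / kf.get_n_splits()

print(f"mean voxelwise correlation: {scores.mean():.3f}")
```

Ridge regularization is the usual choice here because the feature dimensionality (hundreds of Transformer dimensions) is large relative to the number of stimuli, so an unregularized fit would overfit badly.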