Images of the unseen: extrapolating visual representations for abstract and concrete words in a data-driven computational model.

Fritz Günther, Marco Alessandro Petilli, Alessandra Vergallito, Marco Marelli
Psychological Research, 2022-11-01, pages 2512–2532
DOI: 10.1007/s00426-020-01429-7 (https://doi.org/10.1007/s00426-020-01429-7)
Citations: 12

Abstract

Theories of grounded cognition assume that conceptual representations are grounded in sensorimotor experience. However, abstract concepts such as jealousy or childhood have no directly associated referents with which such sensorimotor experience can be made; therefore, the grounding of abstract concepts has long been a topic of debate. Here, we propose (a) that systematic relations exist between semantic representations learned from language on the one hand and perceptual experience on the other hand, (b) that these relations can be learned in a bottom-up fashion, and (c) that it is possible to extrapolate from this learning experience to predict expected perceptual representations for words even where direct experience is missing. To test this, we implement a data-driven computational model that is trained to map language-based representations (obtained from text corpora, representing language experience) onto vision-based representations (obtained from an image database, representing perceptual experience), and apply its mapping function onto language-based representations for abstract and concrete words outside the training set. In three experiments, we present participants with these words, accompanied by two images: the image predicted by the model and a random control image. Results show that participants' judgements were in line with model predictions even for the most abstract words. This preference was stronger for more concrete items and decreased for the more abstract ones. Taken together, our findings have substantial implications in support of the grounding of abstract words, suggesting that we can tap into our previous experience to create possible visual representation we don't have.
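The core of the model described above is a mapping function trained to project language-based embeddings onto vision-based embeddings, which can then be applied to words outside the training set. As a minimal illustration only — assuming a simple linear (ridge-regularised) map between two embedding spaces, with randomly generated stand-in vectors; the dimensionalities, the toy data, and the choice of a linear map are all assumptions, not the paper's actual implementation — such a mapping could be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two representation spaces:
# 300-d text embeddings (language experience) and 128-d image
# embeddings (perceptual experience) for 500 training words.
n_words, d_text, d_vision = 500, 300, 128
X_text = rng.normal(size=(n_words, d_text))
Y_vision = X_text @ (rng.normal(size=(d_text, d_vision)) * 0.1)  # toy targets

def fit_linear_map(X, Y, alpha=1.0):
    """Ridge-regularised least-squares map from text space to vision space."""
    d = X.shape[1]
    # Solve (X'X + alpha*I) W = X'Y for the mapping matrix W.
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

W = fit_linear_map(X_text, Y_vision)

# Extrapolation step: project a word *outside* the training set
# (e.g. an abstract word's text vector) into the visual space.
x_new = rng.normal(size=(1, d_text))
y_pred = x_new @ W
print(y_pred.shape)  # (1, 128)
```

The predicted vector `y_pred` would then be compared against candidate image representations (e.g. by cosine similarity) to select the model-predicted image shown to participants.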
