ViSpa (Vision Spaces): A computer-vision-based representation system for individual images and concept prototypes, with large-scale evaluation.

IF 5.1 · Zone 1 (Psychology) · JCR Q1 (Psychology)
Fritz Günther, Marco Marelli, Sam Tureski, Marco Alessandro Petilli
{"title":"ViSpa (Vision Spaces): A computer-vision-based representation system for individual images and concept prototypes, with large-scale evaluation.","authors":"Fritz Günther,&nbsp;Marco Marelli,&nbsp;Sam Tureski,&nbsp;Marco Alessandro Petilli","doi":"10.1037/rev0000392","DOIUrl":null,"url":null,"abstract":"<p><p>Quantitative, data-driven models for mental representations have long enjoyed popularity and success in psychology (e.g., distributional semantic models in the language domain), but have largely been missing for the visual domain. To overcome this, we present ViSpa (Vision Spaces), high-dimensional vector spaces that include vision-based representation for naturalistic images as well as concept prototypes. These vectors are derived directly from visual stimuli through a deep convolutional neural network trained to classify images and allow us to compute vision-based similarity scores between any pair of images and/or concept prototypes. We successfully evaluate these similarities against human behavioral data in a series of large-scale studies, including off-line judgments-visual similarity judgments for the referents of word pairs (Study 1) and for image pairs (Study 2), and typicality judgments for images given a label (Study 3)-as well as online processing times and error rates in a discrimination (Study 4) and priming task (Study 5) with naturalistic image material. <i>ViSpa</i> similarities predict behavioral data across all tasks, which renders <i>ViSpa</i> a theoretically appealing model for vision-based representations and a valuable research tool for data analysis and the construction of experimental material: <i>ViSpa</i> allows for precise control over experimental material consisting of images and/or words denoting imageable concepts and introduces a specifically vision-based similarity for word pairs. To make <i>ViSpa</i> available to a wide audience, this article (a) includes (video) tutorials on how to use <i>ViSpa</i> in R and (b) presents a user-friendly web interface at http://vispa.fritzguenther.de. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":21016,"journal":{"name":"Psychological review","volume":"130 4","pages":"896-934"},"PeriodicalIF":5.1000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychological review","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/rev0000392","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Quantitative, data-driven models of mental representations have long enjoyed popularity and success in psychology (e.g., distributional semantic models in the language domain), but have largely been missing for the visual domain. To overcome this, we present ViSpa (Vision Spaces): high-dimensional vector spaces that include vision-based representations for naturalistic images as well as for concept prototypes. These vectors are derived directly from visual stimuli through a deep convolutional neural network trained to classify images, and they allow us to compute vision-based similarity scores between any pair of images and/or concept prototypes. We successfully evaluate these similarities against human behavioral data in a series of large-scale studies. These cover offline judgments: visual similarity judgments for the referents of word pairs (Study 1) and for image pairs (Study 2), and typicality judgments for images given a label (Study 3). They also cover online processing times and error rates in a discrimination task (Study 4) and a priming task (Study 5) with naturalistic image material. ViSpa similarities predict behavioral data across all tasks, which renders ViSpa a theoretically appealing model of vision-based representations and a valuable research tool for data analysis and the construction of experimental material: ViSpa allows for precise control over experimental material consisting of images and/or words denoting imageable concepts, and it introduces a specifically vision-based similarity measure for word pairs. To make ViSpa available to a wide audience, this article (a) includes (video) tutorials on how to use ViSpa in R and (b) presents a user-friendly web interface at http://vispa.fritzguenther.de. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
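The general technique the abstract describes can be sketched compactly: image vectors are read off a deep convolutional image classifier, a concept prototype is the average of the vectors of many exemplar images of that concept, and similarity between any two vectors is their cosine. The sketch below illustrates this idea in Python; it is not ViSpa's actual pipeline. The network (ResNet-18), the penultimate-layer choice, the preprocessing, and the file names are all assumptions made for illustration only.

```python
# Minimal illustrative sketch of DCNN-based image vectors, concept
# prototypes, and cosine similarity. NOT ViSpa's actual network or
# preprocessing; model and file names below are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Pretrained image classifier with its classification head removed,
# so the forward pass returns penultimate-layer activations (512-d).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def image_vector(path: str) -> torch.Tensor:
    """Return a high-dimensional vision-based vector for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)

def prototype_vector(paths: list[str]) -> torch.Tensor:
    """Average the vectors of many exemplar images into a concept prototype."""
    return torch.stack([image_vector(p) for p in paths]).mean(dim=0)

def vision_similarity(a: torch.Tensor, b: torch.Tensor) -> float:
    """Cosine similarity between two vectors (images and/or prototypes)."""
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

# Hypothetical file names, for illustration only:
# sim = vision_similarity(image_vector("dog1.jpg"),
#                         prototype_vector(["cat1.jpg", "cat2.jpg"]))
```

ViSpa itself ships precomputed vectors and similarity tools through the R tutorials and the web interface at http://vispa.fritzguenther.de, so no local network is needed to use it; the sketch above only shows the underlying idea.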

Source Journal

Psychological Review (Medicine - Psychology)

CiteScore: 9.70
Self-citation rate: 5.60%
Articles published: 97

About the journal: Psychological Review publishes articles that make important theoretical contributions to any area of scientific psychology, including systematic evaluation of alternative theories.