Representation of locomotive action affordances in human behavior, brains, and deep neural networks.

IF 9.1 · Zone 1 · Multidisciplinary journal · Q1 MULTIDISCIPLINARY SCIENCES
Clemens G Bartnik, Christina Sartzetaki, Abel Puigseslloses Sanchez, Elijah Molenkamp, Steven Bommer, Nikolina Vukšić, Iris I A Groen
Journal: Proceedings of the National Academy of Sciences of the United States of America, 122(24), e2414005122
DOI: 10.1073/pnas.2414005122
Published: 2025-06-17 (Epub 2025-06-12)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12184334/pdf/
Citations: 0

Abstract


To decide how to move around the world, we must determine which locomotive actions (e.g., walking, swimming, or climbing) are afforded by the immediate visual environment. The neural basis of our ability to recognize locomotive affordances is unknown. Here, we compare human behavioral annotations, functional MRI (fMRI) measurements, and deep neural network (DNN) activations to both indoor and outdoor real-world images to demonstrate that the human visual cortex represents locomotive action affordances in complex visual scenes. Hierarchical clustering of behavioral annotations of six possible locomotive actions shows that humans group environments into distinct affordance clusters using at least three separate dimensions. Representational similarity analysis of multivoxel fMRI responses in the scene-selective visual cortex shows that perceived locomotive affordances are represented independently of other scene properties such as objects, surface materials, scene category, or global properties, and independently of the task performed in the scanner. Visual feature activations from DNNs trained on object or scene classification, as well as on a range of other visual understanding tasks, correlate less strongly with behavioral and neural representations of locomotive affordances than with object representations. Training DNNs directly on affordance labels or using affordance-centered language embeddings increases alignment with human behavior, but none of the tested models fully captures locomotive action affordance perception. These results uncover a type of representation in the human brain that reflects locomotive action affordances.
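The hierarchical clustering step described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the scene count, annotation values, and distance metric are invented assumptions; only the six-action setup comes from the abstract.

```python
# Hypothetical sketch: hierarchical clustering of behavioral affordance
# annotations. Rows = scene images, columns = six locomotive actions,
# values = (synthetic) proportion of annotators selecting each action.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
actions = ["walk", "run", "cycle", "drive", "swim", "climb"]
annotations = rng.random((12, len(actions)))  # 12 synthetic scenes

# Pairwise distances between scenes in affordance space
dists = pdist(annotations, metric="correlation")

# Average-linkage agglomerative clustering over the condensed distances
Z = linkage(dists, method="average")

# Cut the dendrogram into (at most) three affordance clusters
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # one cluster assignment per scene
```

Inspecting the dendrogram (e.g., with `scipy.cluster.hierarchy.dendrogram`) rather than a fixed cut would be closer to how clustering structure is usually assessed in such analyses.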

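The representational similarity analysis (RSA) mentioned in the abstract can likewise be sketched in a few lines. This is an assumption about the general technique, not the authors' exact analysis; the pattern dimensions and data are synthetic.

```python
# Minimal RSA sketch: compare a neural representational dissimilarity
# matrix (RDM) from multivoxel fMRI patterns with a model RDM from
# behavioral affordance annotations, via rank correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_scenes = 12
voxel_patterns = rng.standard_normal((n_scenes, 200))  # synthetic fMRI patterns
affordance_feats = rng.random((n_scenes, 6))           # synthetic annotations

# RDMs as condensed distance vectors (upper triangle only)
neural_rdm = pdist(voxel_patterns, metric="correlation")
model_rdm = pdist(affordance_feats, metric="correlation")

# Spearman correlation between RDMs quantifies representational alignment
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")
```

The same comparison applies to DNN activations: extract layer features per image, build a feature RDM, and correlate it with the behavioral or neural RDM to measure model-brain alignment.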
Source journal
CiteScore: 19.00
Self-citation rate: 0.90%
Articles published: 3575
Review time: 2.5 months
Journal description: The Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed journal of the National Academy of Sciences (NAS), serves as an authoritative source for high-impact, original research across the biological, physical, and social sciences. With a global scope, the journal welcomes submissions from researchers worldwide, making it an inclusive platform for advancing scientific knowledge.