Visual representations supporting category-specific information about visual objects in the brain

Simon Faghel-Soubeyrand, Arjen Alink, E. Bamps, F. Gosselin, I. Charest
{"title":"Visual representations supporting category-specific information about visual objects in the brain","authors":"Simon Faghel-Soubeyrand, Arjen Alink, E. Bamps, F. Gosselin, I. Charest","doi":"10.32470/ccn.2019.1404-0","DOIUrl":null,"url":null,"abstract":"Over recent years, multivariate pattern analysis (“decoding”) approaches have become increasingly used to investigate “when” and “where” our brains conduct meaningful processes about their visual environments. Studies using time-resolved decoding of M/EEG patterns have described numerous processes such as object/face familiarity and the emergence of basic-to-abstract category information. Surprisingly, no study has, to our knowledge, revealed “what” (i.e. the actual visual information that) our brain uses while these computations are examined by decoding algorithms. Here, we revealed the time course at which our brain extracts realistic category-specific information about visual objects (i.e. emotion-type & gender information from faces) with time-resolved decoding of high-density EEG patterns, as well as carefully controlled tasks and visual stimulation. Then, we derived temporal generalization matrices and showed that category-specific information is 1) first diffused across brain areas (250 to 350 ms) and 2) encoded under a stable neural pattern that suggests evidence accumulation (350 to 650 ms after face onset). Finally, we bridged time-resolved decoding with psychophysics and revealed the specific visual information (spatial frequency, feature position & orientation information) that support these brain computations. Doing so, we uncovered interconnected dynamics between visual features, and the accumulation and diffusion of category-specific information in the brain.","PeriodicalId":281121,"journal":{"name":"2019 Conference on Cognitive Computational Neuroscience","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Conference on Cognitive Computational Neuroscience","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32470/ccn.2019.1404-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In recent years, multivariate pattern analysis ("decoding") approaches have been used increasingly to investigate "when" and "where" our brains carry out meaningful computations about the visual environment. Studies using time-resolved decoding of M/EEG patterns have characterized numerous processes, such as object/face familiarity and the emergence of basic-to-abstract category information. Surprisingly, to our knowledge, no study has revealed "what" (i.e., the actual visual information) the brain uses during the computations that decoding algorithms examine. Here, using time-resolved decoding of high-density EEG patterns together with carefully controlled tasks and visual stimulation, we revealed the time course over which the brain extracts realistic category-specific information about visual objects (i.e., emotion-type and gender information from faces). We then derived temporal generalization matrices and showed that category-specific information is (1) first diffused across brain areas (250 to 350 ms) and (2) then encoded in a stable neural pattern suggestive of evidence accumulation (350 to 650 ms after face onset). Finally, we bridged time-resolved decoding with psychophysics and revealed the specific visual information (spatial frequency, feature position, and orientation) that supports these brain computations. In doing so, we uncovered interconnected dynamics between visual features and the accumulation and diffusion of category-specific information in the brain.
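For readers unfamiliar with the approach, below is a minimal sketch of time-resolved EEG decoding as commonly implemented with scikit-learn. This is not the authors' code: the array layout, the choice of a linear discriminant classifier, and the function name are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def time_resolved_decoding(X, y, cv=5):
    """Decode a category label (e.g., face gender) at each time point.

    X : (n_trials, n_channels, n_times) array of EEG epochs
    y : (n_trials,) array of category labels
    Returns one cross-validated accuracy per time point.
    """
    scores = np.empty(X.shape[2])
    for t in range(X.shape[2]):
        # Classify the channel topography observed at time t.
        scores[t] = cross_val_score(LinearDiscriminantAnalysis(),
                                    X[:, :, t], y, cv=cv).mean()
    return scores
```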
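Temporal generalization extends this by training a classifier at one time point and testing it at every other: a thin diagonal of above-chance scores indicates a rapidly changing code, whereas a square region of sustained off-diagonal decoding indicates a stable neural pattern, the signature the abstract interprets as evidence accumulation. A hedged sketch follows, with the same assumed data layout; `mne.decoding.GeneralizingEstimator` provides a vectorized implementation of the same idea.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

def temporal_generalization(X, y, cv=5):
    """Train at each time point, test at every time point.

    Returns an (n_times, n_times) matrix of cross-validated accuracies;
    entry [t_train, t_test] measures how well the code learned at t_train
    transfers to t_test.
    """
    n_times = X.shape[2]
    gen = np.zeros((n_times, n_times))
    for train_idx, test_idx in StratifiedKFold(n_splits=cv).split(X[:, :, 0], y):
        for t_train in range(n_times):
            clf = LinearDiscriminantAnalysis().fit(X[train_idx, :, t_train],
                                                   y[train_idx])
            for t_test in range(n_times):
                gen[t_train, t_test] += clf.score(X[test_idx, :, t_test],
                                                  y[test_idx])
    return gen / cv
```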
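The final step, bridging decoding with psychophysics, is in the spirit of reverse correlation (e.g., Bubbles-style sampling of spatial frequencies, positions, and orientations). The sketch below is only a plausible reconstruction of such an analysis, not the authors' method: it correlates trial-wise classifier evidence with the visual information sampled on each trial, yielding a map of the features that predict the decodable category signal. The function name and input conventions are hypothetical.

```python
def information_map(decision_values, masks):
    """Reverse-correlation sketch (hypothetical analysis).

    decision_values : (n_trials,) signed classifier evidence at one time point
    masks           : (n_trials, n_features) per-trial sampling masks, e.g.
                      over a spatial-frequency x position space
    Returns a (n_features,) per-feature correlation with classifier evidence.
    """
    dv = (decision_values - decision_values.mean()) / decision_values.std()
    m = (masks - masks.mean(axis=0)) / (masks.std(axis=0) + 1e-12)
    return m.T @ dv / len(dv)  # Pearson correlation, feature by feature
```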