Raj Magesh Gauthaman, Brice Ménard, Michael F. Bonner
Title: Universal scale-free representations in human visual cortex
Journal: arXiv - QuanBio - Quantitative Methods
Published: 2024-09-10
DOI: https://doi.org/arxiv-2409.06843
Citations: 0
Abstract
How does the human visual cortex encode sensory information? To address this
question, we explore the covariance structure of neural representations. We
perform a cross-decomposition analysis of fMRI responses to natural images in
multiple individuals from the Natural Scenes Dataset and find that neural
representations systematically exhibit a power-law covariance spectrum over
four orders of magnitude in ranks. This scale-free structure is found in
multiple regions along the visual hierarchy, pointing to the existence of a
generic encoding strategy in visual cortex. We also show that, up to a
rotation, a large ensemble of principal axes of these population codes are
shared across subjects, showing the existence of a universal high-dimensional
representation. This suggests a high level of convergence in how the human
brain learns to represent natural scenes despite individual differences in
neuroanatomy and experience. We further demonstrate that a spectral approach is
critical for characterizing population codes in their full extent, and in doing
so, we reveal a vast space of uncharted dimensions that have been out of reach
for conventional variance-weighted methods. A global view of neural
representations thus requires embracing their high-dimensional nature and
understanding them statistically rather than through visual or semantic
interpretation of individual dimensions.
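The cross-decomposition analysis described above can be sketched in simplified form: estimate the cross-covariance between two response matrices (here, two simulated "subjects" viewing the same stimuli), take its singular value spectrum, and measure the power-law decay exponent from the log-log slope. This is a minimal illustrative sketch on synthetic data, not the paper's actual pipeline; all variable names, the noise level, and the fit range are assumptions.

```python
import numpy as np

# Hypothetical sketch of a cross-decomposition spectral analysis.
# Two simulated "subjects" share a signal whose covariance eigenspectrum
# decays as a power law; subject-specific noise is added on top.
rng = np.random.default_rng(0)
n_stim, n_vox = 1000, 500  # assumed sizes, not from the paper

ranks = np.arange(1, n_vox + 1)
spectrum = ranks ** -1.0                      # target power-law eigenvalues
basis = np.linalg.qr(rng.standard_normal((n_vox, n_vox)))[0]
latent = rng.standard_normal((n_stim, n_vox)) * np.sqrt(spectrum)
shared = latent @ basis.T                     # shared signal component
X = shared + 0.1 * rng.standard_normal((n_stim, n_vox))  # "subject 1"
Y = shared + 0.1 * rng.standard_normal((n_stim, n_vox))  # "subject 2"

# Cross-covariance between subjects isolates the shared (reliable)
# dimensions, suppressing subject-specific noise.
C = (X - X.mean(0)).T @ (Y - Y.mean(0)) / (n_stim - 1)
sv = np.linalg.svd(C, compute_uv=False)       # cross-spectrum, descending

# Estimate the spectral decay exponent from the log-log slope
# over an assumed fit range of ranks.
lo, hi = 1, 100
slope = np.polyfit(np.log(ranks[lo:hi]), np.log(sv[lo:hi]), 1)[0]
print(f"estimated spectral decay exponent: {slope:.2f}")
```

With finite data, sampling noise flattens the tail of the estimated spectrum, so the recovered exponent is biased toward zero at high ranks; this is one reason a cross-decomposition (rather than an ordinary PCA on a single subject) is useful for separating reliable spectral structure from noise.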