Neurons, behavior, data analysis and theory: Latest Articles

Modelling Spontaneous Firing Activity of the Motor Cortex in a Spiking Neural Network with Random and Local Connectivity
Neurons, behavior, data analysis and theory | Pub Date: 2023-06-26 | DOI: 10.51628/001c.82127
Lysea Haggie, Thor Besier, Angus JC McMorland
{"title":"Modelling Spontaneous Firing Activity of the Motor Cortex in a Spiking Neural Network with Random and Local Connectivity","authors":"Lysea Haggie, Thor Besier, Angus JC McMorland","doi":"10.51628/001c.82127","DOIUrl":"https://doi.org/10.51628/001c.82127","url":null,"abstract":"Computational models of cortical activity can provide insight into the mechanisms of higher-order processing in the human brain including planning, perception and the control of movement. Activity in the cortex is ongoing even in the absence of sensory input or discernable movements and is thought to be linked to the topology of the underlying cortical circuitry. However, the connectivity and its functional role in the generation of spatio-temporal firing patterns and cortical computations are still vastly unknown. Movement of the body is a key function of the brain, with the motor cortex the main cortical area implicated in the generation of movement. We built a spiking neural network model of the motor cortex which incorporates a laminar structure and circuitry based on a previous cortical model by Potjans & Diesmann (2014). A local connectivity scheme was implemented to introduce more physiological plausbility to the cortex model, and the effect on the rates, distributions and irregularity of neuronal firing, was compared to the original random connectivity method and experimental data. Local connectivity increased the distribution of and overall rate of neuronal firing. It also resulted in the irregularity of firing being more similar to those observed in experimental measurements, and a reduction in the variability in power spectrum measures. The larger variability in dynamical behaviour of the local connectivity model suggests that the topological structure of the connections in neuronal population plays a significant role in firing patterns during spontaneous activity. This model aims to take steps towards replicating the macroscopic network of the motor cortex, replicating realistic firing in order to shed light on information coding in the cortex. Large scale computational models such as this one can capture how structure and function relate to observable neuronal firing behaviour, and investigates the underlying computational mechanisms of the brain.","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134933699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
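As a rough illustration of the two connectivity schemes contrasted in the abstract above, the sketch below draws a uniform random adjacency matrix and a distance-dependent ("local") one, assuming a Gaussian fall-off of connection probability on a 2-D sheet. The neuron count, probabilities and length constant are illustrative placeholders, not the model's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_connectivity(n, p):
    """Uniform random connectivity: every ordered pair of neurons
    connects with the same probability p."""
    return rng.random((n, n)) < p

def local_connectivity(positions, p_max, sigma):
    """Distance-dependent ("local") connectivity: connection probability
    falls off as a Gaussian of the distance between neuron positions."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    p = p_max * np.exp(-(d ** 2) / (2 * sigma ** 2))
    return rng.random(p.shape) < p

# Illustrative placeholder parameters (not the paper's values).
n = 500
positions = rng.uniform(0.0, 1.0, size=(n, 2))     # neurons scattered on a unit sheet
A_random = random_connectivity(n, p=0.1)
A_local = local_connectivity(positions, p_max=0.3, sigma=0.15)

print("mean connection probability, random:", A_random.mean())
print("mean connection probability, local: ", A_local.mean())
```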
Expressive architectures enhance interpretability of dynamics-based neural population models
Neurons, behavior, data analysis and theory | Pub Date: 2023-03-28 | DOI: 10.51628/001c.73987
Andrew R. Sedler, Christopher Versteeg, Chethan Pandarinath
{"title":"Expressive architectures enhance interpretability of dynamics-based neural population models","authors":"Andrew R. Sedler, Christopher Versteeg, Chethan Pandarinath","doi":"10.51628/001c.73987","DOIUrl":"https://doi.org/10.51628/001c.73987","url":null,"abstract":"Artificial neural networks that can recover latent dynamics from recorded neural activity may provide a powerful avenue for identifying and interpreting the dynamical motifs underlying biological computation. Given that neural variance alone does not uniquely determine a latent dynamical system, interpretable architectures should prioritize accurate and low-dimensional latent dynamics. In this work, we evaluated the performance of sequential autoencoders (SAEs) in recovering latent chaotic attractors from simulated neural datasets. We found that SAEs with widely-used recurrent neural network (RNN)-based dynamics were unable to infer accurate firing rates at the true latent state dimensionality, and that larger RNNs relied upon dynamical features not present in the data. On the other hand, SAEs with neural ordinary differential equation (NODE)-based dynamics inferred accurate rates at the true latent state dimensionality, while also recovering latent trajectories and fixed point structure. Ablations reveal that this is mainly because NODEs (1) allow use of higher-capacity multi-layer perceptrons (MLPs) to model the vector field and (2) predict the derivative rather than the next state. Decoupling the capacity of the dynamics model from its latent dimensionality enables NODEs to learn the requisite low-D dynamics where RNN cells fail. Additionally, the fact that the NODE predicts derivatives imposes a useful autoregressive prior on the latent states. The suboptimal interpretability of widely-used RNN based dynamics may motivate substitution for alternative architectures, such as NODE, that enable learning of accurate dynamics in low-dimensional latent spaces.","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135676451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
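The two ablation findings in the abstract above (a higher-capacity MLP vector field, and predicting the derivative rather than the next state) can be made concrete with a minimal, hedged PyTorch sketch; this is not the authors' SAE implementation, and the dimensions, step size and cell choice below are placeholders.

```python
import torch
import torch.nn as nn

class RNNDynamics(nn.Module):
    """RNN-style dynamics: the cell maps the current latent state directly
    to the next latent state, so its capacity is tied to the latent size."""
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.GRUCell(input_size=1, hidden_size=dim)

    def forward(self, z0, n_steps):
        z, traj = z0, [z0]
        dummy = torch.zeros(z0.shape[0], 1)          # autonomous dynamics: no external input
        for _ in range(n_steps):
            z = self.cell(dummy, z)
            traj.append(z)
        return torch.stack(traj, dim=1)

class NODEDynamics(nn.Module):
    """NODE-style dynamics: a high-capacity MLP models the vector field
    dz/dt, and the trajectory is obtained by integration (Euler here)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.vector_field = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, z0, n_steps, dt=0.1):
        z, traj = z0, [z0]
        for _ in range(n_steps):
            z = z + dt * self.vector_field(z)        # predicts the derivative, not the next state
            traj.append(z)
        return torch.stack(traj, dim=1)

# A 3-D latent space (e.g., Lorenz-like), batch of 8 initial conditions.
z0 = torch.randn(8, 3)
print(RNNDynamics(3)(z0, n_steps=50).shape)          # torch.Size([8, 51, 3])
print(NODEDynamics(3)(z0, n_steps=50).shape)
```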
Probabilistic representations as building blocks for higher-level vision
Neurons, behavior, data analysis and theory | Pub Date: 2023-01-31 | DOI: 10.51628/001c.55730
Andrey Chetverikov, Arni Kristjansson
{"title":"Probabilistic representations as building blocks for higher-level vision","authors":"Andrey Chetverikov, Arni Kristjansson","doi":"10.51628/001c.55730","DOIUrl":"https://doi.org/10.51628/001c.55730","url":null,"abstract":"Current theories of perception suggest that the brain represents features of the world as probability distributions, but can such uncertain foundations provide the basis for everyday vision? Perceiving objects and scenes requires knowing not just how features (e.g., colors) are distributed but also where they are and which other features they are combined with. Using a Bayesian computational model, we recovered probabilistic representations used by human observers to search for odd stimuli among distractors. Importantly, we found that the brain integrates information between feature dimensions and spatial locations, leading to more precise representations compared to when information integration is not possible. We also uncovered representational asymmetries and biases, showing their spatial organization and explain how this structure argues against “summary statistics” accounts of visual representations. Our results confirm that probabilistically encoded visual features are bound with other features and to particular locations, providing a powerful demonstration of how probabilistic representations can be a foundation for higher-level vision.","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135256460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
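The abstract's claim that integrating information across feature dimensions and locations yields more precise representations follows the textbook Gaussian cue-combination result, shown below for reference; this is a generic illustration, not the observer model actually fitted in the paper.

```latex
% Combining two independent Gaussian estimates of the same quantity
% (e.g., from two feature dimensions or two locations) gives a posterior
% whose precision is the sum of the individual precisions:
\begin{aligned}
\hat{\mu} &= \frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\,\mu_1
            + \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\,\mu_2, \\
\frac{1}{\sigma_{\text{comb}}^2} &= \frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}
\quad\Rightarrow\quad \sigma_{\text{comb}}^2 \le \min(\sigma_1^2,\sigma_2^2).
\end{aligned}
```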
Deep Direct Discriminative Decoders for High-dimensional Time-series Data Analysis
Neurons, behavior, data analysis and theory | Pub Date: 2022-05-22 | DOI: 10.51628/001c.85131
Mohammadreza Rezaei, Milos Popovic, M. Lankarany, A. Yousefi
{"title":"Deep Direct Discriminative Decoders for High-dimensional Time-series Data Analysis","authors":"Mohammadreza Rezaei, Milos Popovic, M. Lankarany, A. Yousefi","doi":"10.51628/001c.85131","DOIUrl":"https://doi.org/10.51628/001c.85131","url":null,"abstract":"The state-space models (SSMs) are widely utilized in the analysis of time-series data. SSMs rely on an explicit definition of the state and observation processes. Characterizing these processes is not always easy and becomes a modeling challenge when the dimension of observed data grows or the observed data distribution deviates from the normal distribution. Here, we propose a new formulation of SSM for high-dimensional observation processes with a heavy-tailed distribution. We call this solution the deep direct discriminative process (D4). The D4 brings deep neural networks’ expressiveness and scalability to the SSM formulation letting us build a novel solution that efficiently estimates the underlying state processes through high-dimensional observation signal.We demonstrate the D4 solutions in simulated and real data such as Lorenz attractors, Langevin dynamics, random walk dynamics, and rat hippocampus spiking neural data and show that the D4’s performance precedes traditional SSMs and RNNs. The D4 can be applied to a broader class of time-series data where the connection between high-dimensional observation and the underlying latent process is hard to characterize.","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"3 2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90498947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
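For reference, the generic state-space model the abstract starts from can be written as below; the D4, as its name suggests, instead models the state given the observations discriminatively with a deep network, details of which are in the paper rather than in this generic sketch.

```latex
% Generic state-space model and the filtering recursion it implies:
\begin{aligned}
x_t &\sim p(x_t \mid x_{t-1}) && \text{(state process)} \\
y_t &\sim p(y_t \mid x_t)     && \text{(observation process)} \\
p(x_t \mid y_{1:t}) &\propto p(y_t \mid x_t)\int p(x_t \mid x_{t-1})\,
  p(x_{t-1} \mid y_{1:t-1})\,\mathrm{d}x_{t-1} && \text{(filtering)}
\end{aligned}
```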
Frontal effective connectivity increases with task demands and time on task: a Dynamic Causal Model of electrocorticogram in macaque monkeys
Neurons, behavior, data analysis and theory | Pub Date: 2022-02-21 | DOI: 10.51628/001c.68433
K. Wegner, C. R. Wilson, E. Procyk, K. Friston, Frederik Van de Steen, D. Pinotsis, Daniele Marinazzo
{"title":"Frontal effective connectivity increases with task demands and time on task: a Dynamic Causal Model of electrocorticogram in macaque monkeys","authors":"K. Wegner, C. R. Wilson, E. Procyk, K. Friston, Frederik Van de Steen, D. Pinotsis, Daniele Marinazzo","doi":"10.51628/001c.68433","DOIUrl":"https://doi.org/10.51628/001c.68433","url":null,"abstract":"We apply Dynamic Causal Models to electrocorticogram recordings from two macaque monkeys performing a problem-solving task that engages working memory, and induces time-on-task effects. We thus provide a computational account of changes in effective connectivity within two regions of the fronto-parietal network, the dorsolateral prefrontal cortex and the pre-supplementary motor area. We find that forward connections between the two regions increased in strength when task demands increased, and as the experimental session progressed. Similarities in the effects of task demands and time on task allow us to interpret changes in frontal connectivity in terms of increased attentional effort allocation that compensates cognitive fatigue.","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82241163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
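For orientation, the canonical deterministic DCM neuronal state equation is reproduced below to show what "effective connectivity" parameters look like; DCM for electrophysiological recordings such as ECoG is in practice based on neural mass models and spectral data features, so this is only a schematic of the general idea, not the model fitted in the paper.

```latex
% Bilinear DCM neuronal state equation: A is baseline effective connectivity,
% B^{(j)} its modulation by experimental input u_j, and C the driving inputs.
\dot{z} = \Bigl(A + \sum_j u_j B^{(j)}\Bigr) z + C u
```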
Golden rhythms as a theoretical framework for cross-frequency organization.
Neurons, behavior, data analysis and theory | Pub Date: 2022-01-01 | DOI: 10.51628/001c.38960
Mark A Kramer
{"title":"Golden rhythms as a theoretical framework for cross-frequency organization.","authors":"Mark A Kramer","doi":"10.51628/001c.38960","DOIUrl":"https://doi.org/10.51628/001c.38960","url":null,"abstract":"<p><p>While brain rhythms appear fundamental to brain function, why brain rhythms consistently organize into the small set of discrete frequency bands observed remains unknown. Here we propose that rhythms separated by factors of the golden ratio <math><mrow><mrow><mrow><mo>(</mo><mi>ϕ</mi><mo>=</mo><mo>(</mo><mn>1</mn><mo>+</mo><msqrt><mn>5</mn></msqrt><mo>)</mo><mo>/</mo><mn>2</mn><mo>)</mo></mrow><mo>)</mo></mrow></mrow></math> optimally support segregation and cross-frequency integration of information transmission in the brain. Organized by the golden ratio, pairs of transient rhythms support multiplexing by reducing interference between separate communication channels, and triplets of transient rhythms support integration of signals to establish a hierarchy of cross-frequency interactions. We illustrate this framework in simulation and apply this framework to propose four hypotheses.</p>","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10181851/pdf/nihms-1844698.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9529868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
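The golden-ratio relations the abstract relies on can be written out explicitly; the equations below only unpack the stated premise (why golden-ratio-spaced rhythms minimize harmonic overlap and how triplets relate), not the simulations reported in the paper.

```latex
% The golden ratio and the frequency relations behind "golden rhythms":
\begin{aligned}
\phi &= \frac{1+\sqrt{5}}{2} \approx 1.618, \qquad \phi^2 = \phi + 1, \\
\frac{f_2}{f_1} &= \phi:\ \text{a ratio that is poorly approximated by small
integer ratios, minimizing harmonic (resonant) interference,} \\
f_3 &= \phi f_2 = \phi^2 f_1 = f_1 + f_2:\ \text{in a golden triplet the fastest
rhythm equals the sum of the two slower ones } (f_1 = f_3 - f_2).
\end{aligned}
```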
Explaining the effectiveness of fear extinction through latent-cause inference
Neurons, behavior, data analysis and theory | Pub Date: 2021-10-04 | DOI: 10.31234/osf.io/2fhr7
Mingyu Song, Carolyn E. Jones, M. Monfils, Y. Niv
{"title":"Explaining the effectiveness of fear extinction through latent-cause inference","authors":"Mingyu Song, Carolyn E. Jones, M. Monfils, Y. Niv","doi":"10.31234/osf.io/2fhr7","DOIUrl":"https://doi.org/10.31234/osf.io/2fhr7","url":null,"abstract":"Acquiring fear responses to predictors of aversive outcomes is crucial for survival. At the same time, it is important to be able to modify such associations when they are maladaptive, for instance in treating anxiety and trauma-related disorders. Standard extinction procedures can reduce fear temporarily, but with sufficient delay or with reminders of the aversive experience, fear often returns. The latent-cause inference framework explains the return of fear by presuming that animals learn a rich model of the environment, in which the standard extinction procedure triggers the inference of a new latent cause, preventing the extinguishing of the original aversive associations. This computational framework had previously inspired an alternative extinction paradigm -- gradual extinction -- which indeed was shown to be more effective in reducing fear. However, the original framework was not sufficient to explain the pattern of results seen in the experiments. Here, we propose a formal model to explain the effectiveness of gradual extinction, in contrast to the ineffectiveness of standard extinction and a gradual reverse control procedure. We demonstrate through quantitative simulation that our model can explain qualitative behavioral differences across different extinction procedures as seen in the empirical study. We verify the necessity of several key assumptions added to the latent-cause framework, which suggest potential general principles of animal learning and provide novel predictions for future experiments.","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90833830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
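For context, latent-cause models of conditioning typically place a Chinese-restaurant-process prior over which latent cause generated each trial, so that abrupt changes (as in standard extinction) favor inferring a new cause while gradual changes do not; the prior is shown below, while the additional assumptions the authors introduce to capture gradual extinction are described in the paper itself.

```latex
% Chinese-restaurant-process prior over latent causes: trial t joins an
% existing cause k in proportion to its past frequency N_k, or a new
% cause with concentration parameter alpha.
P(c_t = k \mid c_{1:t-1}) =
\begin{cases}
\dfrac{N_k}{t-1+\alpha}, & k \text{ an existing cause } (N_k > 0),\\[1.5ex]
\dfrac{\alpha}{t-1+\alpha}, & k \text{ a new cause.}
\end{cases}
```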
How do we generalize?
Neurons, behavior, data analysis and theory | Pub Date: 2021-08-30 | DOI: 10.51628/001c.27687
Jessica Elizabeth Taylor, Aurelio Cortese, Helen C Barron, Xiaochuan Pan, Masamichi Sakagami, Dagmar Zeithamova
{"title":"How do we generalize?","authors":"Jessica Elizabeth Taylor, Aurelio Cortese, Helen C Barron, Xiaochuan Pan, Masamichi Sakagami, Dagmar Zeithamova","doi":"10.51628/001c.27687","DOIUrl":"10.51628/001c.27687","url":null,"abstract":"<p><p>Humans and animals are able to generalize or transfer information from previous experience so that they can behave appropriately in novel situations. What mechanisms-computations, representations, and neural systems-give rise to this remarkable ability? The members of this Generative Adversarial Collaboration (GAC) come from a range of academic backgrounds but are all interested in uncovering the mechanisms of generalization. We started out this GAC with the aim of arbitrating between two alternative conceptual accounts: (1) generalization stems from integration of multiple experiences into summary representations that reflect generalized knowledge, and (2) generalization is computed on-the-fly using separately stored individual memories. Across the course of this collaboration, we found that-despite using different terminology and techniques, and although some of our specific papers may provide evidence one way or the other-we in fact largely agree that both of these broad accounts (as well as several others) are likely valid. We believe that future research and theoretical synthesis across multiple lines of research is necessary to help determine the degree to which different candidate generalization mechanisms may operate simultaneously, operate on different scales, or be employed under distinct conditions. Here, as the first step, we introduce some of these candidate mechanisms and we discuss the issues currently hindering better synthesis of generalization research. Finally, we introduce some of our own research questions that have arisen over the course of this GAC, that we believe would benefit from future collaborative efforts.</p>","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7613724/pdf/EMS144088.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40680586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A roadmap to reverse engineering real-world generalization by combining naturalistic paradigms, deep sampling, and predictive computational models
Neurons, behavior, data analysis and theory | Pub Date: 2021-08-23 | DOI: 10.51628/001c.67879
P. Herholz, Eddy Fortier, Mariya Toneva, Nicolas Farrugia, Leila Wehbe, V. Borghesani
{"title":"A roadmap to reverse engineering real-world generalization by combining naturalistic paradigms, deep sampling, and predictive computational models","authors":"P. Herholz, Eddy Fortier, Mariya Toneva, Nicolas Farrugia, Leila Wehbe, V. Borghesani","doi":"10.51628/001c.67879","DOIUrl":"https://doi.org/10.51628/001c.67879","url":null,"abstract":"Real-world generalization, e.g., deciding to approach a never-seen-before animal, relies on contextual information as well as previous experiences. Such a seemingly easy behavioral choice requires the interplay of multiple neural mechanisms, from integrative encoding to category-based inference, weighted differently according to the circumstances. Here, we argue that a comprehensive theory of the neuro-cognitive substrates of real-world generalization will greatly benefit from empirical research with three key elements. First, the ecological validity provided by multimodal, naturalistic paradigms. Second, the model stability afforded by deep sampling. Finally, the statistical rigor granted by predictive modeling and computational controls.","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91048635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Estimating smooth and sparse neural receptive fields with a flexible spline basis
Neurons, behavior, data analysis and theory | Pub Date: 2021-08-18 | DOI: 10.51628/001c.27578
Ziwei Huang, Yanli Ran, Jonathan Oesterle, Thomas Euler, Philipp Berens
{"title":"Estimating smooth and sparse neural receptive fields with a flexible spline basis","authors":"Ziwei Huang, Yanli Ran, Jonathan Oesterle, Thomas Euler, Philipp Berens","doi":"10.51628/001c.27578","DOIUrl":"https://doi.org/10.51628/001c.27578","url":null,"abstract":"Spatio-temporal receptive field (STRF) models are frequently used to approximate the computation implemented by a sensory neuron. Typically, such STRFs are assumed to be smooth and sparse. Current state-of-the-art approaches for estimating STRFs based empirical Bayes estimation encode such prior knowledge into a prior covariance matrix, whose hyperparameters are learned from the data, and thus provide STRF estimates with the desired properties even with little or noisy data. However, empirical Bayes methods are often not computationally efficient in high-dimensional settings, as encountered in sensory neuroscience. Here we pursued an alternative approach and encode prior knowledge for estimation of STRFs by choosing a set of basis function with the desired properties: a natural cubic spline basis. Our method is computationally efficient, and can be easily applied to Linear-Gaussian and Linear-Nonlinear-Poisson models as well as more complicated Linear-Nonlinear-Linear-Nonlinear cascade model or spike-triggered clustering methods. We compared the performance of spline-based methods to no-spline ones on simulated and experimental data, showing that spline-based methods consistently outperformed the no-spline versions. We provide a Python toolbox for all suggested methods (https://github.com/berenslab/RFEst/).","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80946148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
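As a rough, hedged illustration of the basis-function idea (not the RFEst API), the sketch below builds a B-spline temporal basis with SciPy and fits a linear-Gaussian receptive field by least squares on the basis coefficients. The paper uses a natural cubic spline basis and supports richer models; the basis type, sizes, and noise level here are placeholder choices.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(n_points, n_bases, degree=3):
    """Columns are uniform B-spline basis functions evaluated on [0, 1]
    (a simple stand-in for the natural cubic spline basis in the paper)."""
    # Knots extend past [0, 1] so every grid point lies inside the support
    # of some basis functions and no boundary handling is needed.
    knots = np.linspace(-0.3, 1.3, n_bases + degree + 1)
    x = np.linspace(0.0, 1.0, n_points)
    cols = [np.nan_to_num(
                BSpline.basis_element(knots[i:i + degree + 2],
                                      extrapolate=False)(x))
            for i in range(n_bases)]
    return np.stack(cols, axis=1)                      # (n_points, n_bases)

# Simulated example: a smooth 25-lag temporal receptive field, Gaussian noise.
rng = np.random.default_rng(1)
n_lags, n_samples = 25, 5000
true_rf = np.exp(-0.5 * ((np.arange(n_lags) - 8) / 3.0) ** 2)   # smooth bump
stim = rng.standard_normal((n_samples, n_lags))                 # lagged stimulus design matrix
resp = stim @ true_rf + 0.5 * rng.standard_normal(n_samples)

# Fit in the low-dimensional spline coefficient space, then map back.
S = bspline_basis(n_lags, n_bases=7)                            # (25, 7)
coef, *_ = np.linalg.lstsq(stim @ S, resp, rcond=None)
rf_hat = S @ coef

print("correlation with true RF:", np.corrcoef(rf_hat, true_rf)[0, 1])
```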