{"title":"Phase-amplitude coupling in neuronal oscillator networks","authors":"Yuzhen Qin, Tommaso Menara, D. Bassett, F. Pasqualetti","doi":"10.1103/PhysRevResearch.3.023218","DOIUrl":"https://doi.org/10.1103/PhysRevResearch.3.023218","url":null,"abstract":"Phase-amplitude coupling (PAC) describes the phenomenon where the power of a high-frequency oscillation evolves with the phase of a low-frequency one. We propose a model that explains the emergence of PAC in two commonly-accepted architectures in the brain, namely, a high-frequency neural oscillation driven by an external low-frequency input and two interacting local oscillations with distinct, locally-generated frequencies. We further propose an interconnection structure for brain regions and demonstrate that low-frequency phase synchrony can integrate high-frequency activities regulated by local PAC and control the direction of information flow across distant regions.","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"499 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116194505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quality of internal representation shapes learning performance in feedback neural networks","authors":"Lee Susman, F. Mastrogiuseppe, N. Brenner, O. Barak","doi":"10.1103/PHYSREVRESEARCH.3.013176","DOIUrl":"https://doi.org/10.1103/PHYSREVRESEARCH.3.013176","url":null,"abstract":"A fundamental feature of complex biological systems is the ability to form feedback interactions with their environment. A prominent model for studying such interactions is reservoir computing, where learning acts on low-dimensional bottlenecks. Despite the simplicity of this learning scheme, the factors contributing to or hindering the success of training in reservoir networks are in general not well understood. In this work, we study non-linear feedback networks trained to generate a sinusoidal signal, and analyze how learning performance is shaped by the interplay between internal network dynamics and target properties. By performing exact mathematical analysis of linearized networks, we predict that learning performance is maximized when the target is characterized by an optimal, intermediate frequency which monotonically decreases with the strength of the internal reservoir connectivity. At the optimal frequency, the reservoir representation of the target signal is high-dimensional, de-synchronized, and thus maximally robust to noise. We show that our predictions successfully capture the qualitative behaviour of performance in non-linear networks. Moreover, we find that the relationship between internal representations and performance can be further exploited in trained non-linear networks to explain behaviours which do not have a linear counterpart. 
Our results indicate that a major determinant of learning success is the quality of the internal representation of the target, which in turn is shaped by an interplay between parameters controlling the internal network and those defining the task.","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128372168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalisation of neuronal excitability allows for the identification of an excitability change parameter that links to an experimentally measurable value","authors":"J. Broek, Guillaume Drion","doi":"10.5281/zenodo.4159691","DOIUrl":"https://doi.org/10.5281/zenodo.4159691","url":null,"abstract":"Neuronal excitability is the phenomenon of action potential generation in response to a stimulus input. Commonly, neuronal excitability is divided into two classes, Type I and Type II, which have different properties that affect information processing, such as thresholding and gain scaling. These properties can be studied mathematically using generalised phenomenological models, such as the FitzHugh-Nagumo (FHN) model and the mirrored FHN (mFHN) model. The FHN model shows that each excitability type corresponds to one specific type of bifurcation in the phase plane: Type I underlies a saddle-node on invariant circle (SNIC) bifurcation, and Type II a Hopf bifurcation. The difficulty in modelling Type I excitability is that it is not only represented by its underlying bifurcation, but must also be able to generate firing at arbitrarily low frequencies while maintaining a small depolarising current. Using the mFHN model, we show that this is possible without modifying the phase portrait, thanks to the incorporation of a slow regenerative variable. We show that in the singular limit of the mFHN model, the time-scale separation can be chosen such that a configuration of the classical phase portrait allows for a SNIC bifurcation, zero-frequency onset and a depolarising current, as observed in Type I excitability. Using the definition of the slow conductance, g_s, we show that these mathematical findings on excitability change carry over to reduced conductance-based models and relate to an experimentally measurable quantity. This not only provides a measure of excitability change, but also relates the mathematical parameters that indicate physiological Type I excitability to parameters that can be tuned during experiments.","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127711076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Short term memory by transient oscillatory dynamics in recurrent neural networks","authors":"K. Ichikawa, K. Kaneko","doi":"10.1103/PhysRevResearch.3.033193","DOIUrl":"https://doi.org/10.1103/PhysRevResearch.3.033193","url":null,"abstract":"Despite the importance of short-term memory in cognitive function, how input information is encoded and sustained in neural activity dynamics remains elusive. Here, by training recurrent neural networks on short-term memory tasks and analyzing their dynamics, we characterize a short-term memory mechanism in which the input information is encoded in the amplitude of a transient oscillation rather than in stationary neural activities. The transient orbit is attracted to a slow manifold, which allows irrelevant information to be discarded. Strong contraction onto the manifold makes the transient orbit, and hence the memory, robust to noise. The generality of the result and its relevance to neural information processing are discussed.","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133966073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting brain evoked response to external stimuli from temporal correlations of spontaneous activity","authors":"Alessandro Sarracino, O. Arviv, O. Shriki, L. Arcangelis","doi":"10.1103/PhysRevResearch.2.033355","DOIUrl":"https://doi.org/10.1103/PhysRevResearch.2.033355","url":null,"abstract":"The relation between spontaneous and stimulated global brain activity is a fundamental problem in the understanding of brain function. This question is investigated both theoretically and experimentally within the context of nonequilibrium fluctuation-dissipation relations. We consider the stochastic coarse-grained Wilson-Cowan model in the linear noise approximation and compare analytical results to experimental data from magnetoencephalography (MEG) of the human brain. The short-time behavior of the autocorrelation function for spontaneous activity is characterized by a double-exponential decay, with two characteristic times differing by two orders of magnitude. Conversely, the response function exhibits a single exponential decay, in agreement with experimental data for evoked activity under visual stimulation. The results suggest that the brain response to weak external stimuli can be predicted from the observation of spontaneous activity, and pave the way to controlled experiments on the brain response under different external perturbations.","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127321775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memory systems of the brain","authors":"Alvaro Pastor","doi":"10.31219/OSF.IO/W6KN9","DOIUrl":"https://doi.org/10.31219/OSF.IO/W6KN9","url":null,"abstract":"Humans have long been fascinated by how memories are formed, how they can be damaged or lost, or how they can still seem vivid after many years. Thus the search for the locus and organization of memory has had a long history, in which the notion that it is composed of distinct systems developed during the second half of the 20th century. A fundamental dichotomy between conscious and unconscious memory processes was first drawn based on evidence from the study of amnesic subjects and from systematic experimental work with animals. The use of behavioral and neural measures together with imaging techniques has progressively led researchers to agree on the existence of a variety of neural architectures that support multiple memory systems. This article presents a historical lens with which to contextualize these ideas on memory systems, and provides a current account of the multiple memory systems model.","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"591 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121979184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feedback Gains modulate with Motor Memory Uncertainty","authors":"Sae Franklin, D. W. Franklin","doi":"10.51628/001C.22336","DOIUrl":"https://doi.org/10.51628/001C.22336","url":null,"abstract":"A sudden change in dynamics produces large errors leading to increases in muscle co-contraction and feedback gains during early adaptation. We previously proposed that internal model uncertainty drives these changes, whereby the sensorimotor system reacts to the change in dynamics by upregulating stiffness and feedback gains to reduce the effect of model errors. However, these feedback gain increases have also been suggested to represent part of the adaptation mechanism. Here, we investigate this by examining changes in visuomotor feedback gains during gradual or abrupt force field adaptation. Participants grasped a robotic manipulandum and reached while a curl force field was introduced gradually or abruptly. Abrupt introduction of dynamics elicited large initial increases in kinematic error, muscle co-contraction and visuomotor feedback gains, while gradual introduction showed little initial change in these measures despite evidence of adaptation. After adaptation had plateaued, there was a change in the co-contraction and visuomotor feedback gains relative to null field movements, but no differences (apart from the final muscle activation pattern) between the abrupt and gradual introduction of dynamics. This suggests that the initial increase in feedback gains is not part of the adaptation process, but instead an automatic reactive response to internal model uncertainty. In contrast, the final level of feedback gains is a predictive tuning of the feedback gains to the external dynamics as part of the internal model adaptation. 
Together, the reactive and predictive feedback gains explain the wide variety of previous experimental results of feedback changes during adaptation.","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115752676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simple RGC: ImageJ Plugins for Counting Retinal Ganglion Cells and Determining the Transduction Efficiency of Viral Vectors in Retinal Wholemounts","authors":"Tiger Cross, Rasika Navarange, Joon-ho Son, William Burr, Arjun Singh, Kelvin Zhang, M. Rusu, Konstantinos Gkoutzis, A. Osborne, Bart Nieuwenhuis Department of Computing, I. -. London, John van Geest Centre for Brain Repair, Department of Clinical Neurosciences, U. Cambridge, L. Systems, Netherlands Institute for Neuroscience, R. Arts, Sciences","doi":"10.5334/jors.342","DOIUrl":"https://doi.org/10.5334/jors.342","url":null,"abstract":"Simple RGC consists of a collection of ImageJ plugins to assist researchers investigating retinal ganglion cell (RGC) injury models in addition to helping assess the effectiveness of treatments. The first plugin named RGC Counter accurately calculates the total number of RGCs from retinal wholemount images. The second plugin named RGC Transduction measures the co-localisation between two channels making it possible to determine the transduction efficiencies of viral vectors and transgene expression levels. The third plugin named RGC Batch is a batch image processor to deliver fast analysis of large groups of microscope images. These ImageJ plugins make analysis of RGCs in retinal wholemounts easy, quick, consistent, and less prone to unconscious bias by the investigator. 
The plugins are freely available from the ImageJ update site this https URL.","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129912021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poincaré Return Maps in Neural Dynamics: Three Examples","authors":"M. Kolomiets, A. Shilnikov","doi":"10.1007/978-3-030-60107-2_3","DOIUrl":"https://doi.org/10.1007/978-3-030-60107-2_3","url":null,"abstract":"","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127654430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synchronization malleability in neural networks under a distance-dependent coupling","authors":"R. Budzinski, K. L. Rossi, B. Boaretto, T. L. Prado, S. R. Lopes","doi":"10.1103/physrevresearch.2.043309","DOIUrl":"https://doi.org/10.1103/physrevresearch.2.043309","url":null,"abstract":"We investigate the synchronization features of a network of spiking neurons under a distance-dependent coupling following a power-law model. The interplay between topology and coupling strength leads to the existence of different spatiotemporal patterns, corresponding to either non-synchronized or phase-synchronized states. Particularly interesting is what we call synchronization malleability, in which the system exhibits significantly different degrees of phase synchronization for the same parameters as a consequence of a different ordering of neural inputs. We analyze the functional connectivity of the network by calculating the mutual information between neuronal spike trains, allowing us to characterize the structures of synchronization in the network. We show that these structures depend on the ordering of the inputs in the parameter regions where the network presents synchronization malleability, and we suggest that this is due to a balance between local and global effects.","PeriodicalId":298664,"journal":{"name":"arXiv: Neurons and Cognition","volume":"1997 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128212714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}