{"title":"Narratives as Networks: Predicting Memory from the Structure of Naturalistic Events","authors":"Hongmi Lee, Janice Chen","doi":"10.32470/ccn.2019.1170-0","DOIUrl":"https://doi.org/10.32470/ccn.2019.1170-0","url":null,"abstract":"Human life consists of a multitude of diverse and interconnected events. However, extant research has focused on how humans segment and remember discrete events from continuous input, with far less attention given to how the structure of connections between events impacts memory. We conducted an fMRI study in which subjects watched and recalled a series of realistic audiovisual narratives. By transforming narratives into networks of events, we found that more central events—those with stronger semantic or causal connections to other events—were better remembered. During encoding, central events evoked larger hippocampal event boundary responses associated with memory consolidation. During recall, high centrality predicted stronger activation in cortical areas involved in episodic recollection, and more similar neural representations across individuals. Together, these results suggest that when humans encode and retrieve complex real-world experiences, the reliability and accessibility of memory representations is shaped by their location within a network of events.","PeriodicalId":281121,"journal":{"name":"2019 Conference on Cognitive Computational Neuroscience","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122013747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Do LSTMs know about Principle C?
Authors: Jeff Mitchell, N. Kazanina, Conor J. Houghton, J. Bowers
DOI: https://doi.org/10.32470/ccn.2019.1241-0
Abstract: We investigate whether a recurrent network trained on raw text can learn an important syntactic constraint on coreference. A Long Short-Term Memory (LSTM) network that is sensitive to some other syntactic constraints was tested on psycholinguistic materials from two published experiments on coreference. Whereas the participants were sensitive to the Principle C constraint on coreference, the LSTM network was not. Our results suggest that, whether as cognitive models of linguistic processes or as engineering solutions in practical applications, recurrent networks may need to be augmented with additional inductive biases to learn models and representations that fully capture the structures of language underlying comprehension.
Title: Subtractive gating improves generalization in working memory tasks
Authors: M. L. Montero, Gaurav Malhotra, J. Bowers, R. P. Costa
DOI: https://doi.org/10.32470/ccn.2019.1352-0
Abstract: It is largely unclear how the brain learns to generalize to new situations. Although deep learning models offer great promise as potential models of the brain, they break down when tested on novel conditions not present in their training datasets. Gated recurrent neural networks are among the most successful models in machine learning; because of their working-memory properties, we refer to them here as working memory networks (WMNs). We compare WMNs with a biologically motivated variant of these networks: in contrast to the multiplicative gating used by WMNs, the new variant operates via subtractive gating (subWMN). We tested the two models on a range of working memory tasks: orientation recall with distractors, orientation recall with update/addition and distractors, and a more challenging task, sequence recognition based on a standard machine-learning handwritten-digits dataset. We evaluated the generalization properties of the two networks by measuring how well they coped with three working memory demands: maintaining memories over time, making memories distractor-resistant, and updating memories. Across these tests, subWMNs perform better and more robustly than WMNs. These results suggest that the brain may rely on subtractive gating for improved generalization in working memory tasks.
{"title":"Adversarial Training of Neural Encoding Models on Population Spike Trains","authors":"Poornima Ramesh, Mohamad Atayi, J. Macke","doi":"10.32470/ccn.2019.1263-0","DOIUrl":"https://doi.org/10.32470/ccn.2019.1263-0","url":null,"abstract":"Neural population responses to sensory stimuli can exhibit both nonlinear stimulusdependence and richly structured shared variability. Here, we show how adversarial training can be used to optimize neural encoding models to capture both the deterministic and stochastic components of neural population data. To account for the discrete nature of neural spike trains, we use and compare gradient estimators for adversarial optimization of neural encoding models. We illustrate our approach on population recordings from primary visual cortex. We show that adding latent noise-sources to a convolutional neural network yields a model which captures both the stimulus-dependence and noise correlations of the population activity.","PeriodicalId":281121,"journal":{"name":"2019 Conference on Cognitive Computational Neuroscience","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128381924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Unfolding of multisensory inference in the brain and behavior
Authors: Yinan Cao (University of Oxford), Hame Park (Bielefeld University), Bruno L. Giordano* (CNRS and Aix-Marseille Université), Christoph Kayser* (Bielefeld University), Charles Spence* (University of Oxford), Christopher Summerfield* (University of Oxford); * equal contributions
DOI: https://doi.org/10.32470/ccn.2019.1219-0
Title: Evolving the Olfactory System
Authors: G. R. Yang, Peter Y. Wang, Yi Sun, Ashok Litwin-Kumar, R. Axel, L. Abbott
DOI: https://doi.org/10.32470/ccn.2019.1355-0
Abstract: Flies and mice are species separated by 600 million years of evolution, yet have evolved olfactory systems that share many similarities in their anatomic and functional organization. What functions do these shared anatomical and functional features serve, and are they optimal for odor sensing? In this study, we address the optimality of evolutionary design in olfactory circuits by studying artificial neural networks trained to sense odors. We found that artificial neural networks quantitatively recapitulate structures inherent in the olfactory system, including the formation of glomeruli onto a compression layer and sparse and random connectivity onto an expansion layer. Finally, we offer theoretical justifications for each result. Our work offers a framework to explain the evolutionary convergence of olfactory circuits, and gives insight and logic into the anatomic and functional structure of the olfactory system.
Title: How do people learn how to plan?
Authors: Y. Jain, Sanit Gupta, V. Rakesh, P. Dayan, Frederick Callaway, Falk Lieder
DOI: https://doi.org/10.32470/ccn.2019.1313-0
Abstract: How does the brain learn how to plan? We reverse-engineer people’s underlying learning mechanisms by combining rational process models of cognitive plasticity with recently developed empirical methods that allow us to trace the temporal evolution of people’s planning strategies. We find that our Learned Value of Computation (LVOC) model accurately captures people’s average learning curve. However, there were also substantial individual differences in metacognitive learning that are best understood in terms of multiple different learning mechanisms, including strategy-selection learning. Furthermore, we observed that LVOC could not fully capture people’s ability to adaptively decide when to stop planning. We successfully extended the LVOC model to address these discrepancies. Our models broadly capture people’s ability to improve their decision mechanisms and represent a significant step towards reverse-engineering how the brain learns increasingly effective cognitive strategies through its interaction with the environment.
Title: Modeling the development of decision making in volatile environments using strategies, reinforcement learning, and Bayesian inference
Authors: Maria K. Eckstein, Sarah L. Master, R. Dahl, L. Wilbrecht, A. Collins
DOI: https://doi.org/10.32470/ccn.2019.1409-0
Abstract: Continuously adjusting behavior in changing environments is a crucial skill for intelligent creatures, but we know little about how this ability develops in humans. Here, we investigate this question in a large sample using behavioral analyses and computational modeling. We assessed over 200 participants (ages 8-30) on a probabilistic, volatile reinforcement learning task, and measured pubertal development status and salivary testosterone. We used three classes of models to analyze behavior on the task: fixed strategies, incremental reinforcement learning, and Bayesian inference. All model classes provided converging evidence for a decrease in decision noise or exploration with age. Individual models also provided insight into unique aspects of decision making, such as changes in estimated reward probabilities and sex-specific changes in the sensitivity to positive versus negative outcomes. Our results show that the combination of models can provide detailed insight into the development of decision making, and into complex cognition more generally.
Title: Reading times and temporo-parietal BOLD activity encode the semantic hierarchy of language prediction
Authors: L. Schmitt, J. Erb, Sarah Tune, A. Rysop, G. Hartwigsen, J. Obleser
DOI: https://doi.org/10.32470/ccn.2019.1333-0
Abstract: When poor acoustics challenge speech comprehension, listeners are thought to increasingly draw on semantic context to predict upcoming speech. However, previous research focused mostly on speech material with short timescales of context (e.g., isolated sentences). In an fMRI experiment, 30 participants listened to a one-hour narrative incorporating a multitude of timescales while confronted with competing resynthesized natural sounds. We modeled semantic predictability at five timescales of increasing context length by computing the similarity between word embeddings. An encoding model revealed that short informative timescales are coupled to increased activity in the posterior portion of superior temporal gyrus, whereas long informative timescales are coupled to increased activity in parietal regions like the angular gyrus. In a second experiment, we probed the behavioral relevance of semantic timescales in language prediction: 11 participants performed a self-paced reading task on a text version of the narrative. Reading times sped up for the shortest informative timescale, but also tended to speed up for the longest informative timescales. Our results suggest that short-term dependencies as well as the gist of a story drive behavioral processing fluency and engage a temporo-parietal processing hierarchy.
Title: Tracking Naturalistic Linguistic Predictions with Deep Neural Language Models
Authors: Micha Heilbron, Benedikt V. Ehinger, P. Hagoort, F. P. Lange
DOI: https://doi.org/10.32470/CCN.2019.1096-0
Abstract: Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being 'prediction encouraging', potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g., continuous narrative). However, these studies have so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word's predictability. Here, we extend this approach using a state-of-the-art neural language model that can take into account linguistic contexts roughly 500 times longer. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400. These results show that predictability effects in language are not a side-effect of simple designs, and demonstrate the practical use of recent advances in AI for the cognitive neuroscience of language.