Reverse engineering neural networks from many partial recordings
E. Arani, Sofia Triantafillou, Konrad Paul Kording
arXiv: Neurons and Cognition, 2019-07-02. DOI: 10.32470/CCN.2018.1037-0
Abstract
Much of neuroscience aims at reverse engineering the brain, but we can only record a small number of neurons at a time. We do not currently know whether reverse engineering the brain requires us to record most neurons simultaneously, or whether multiple recordings from smaller subsets suffice. This question is made even more important by the development of novel techniques, e.g. optical methods, that allow recording from selected subsets of neurons. To get at this question, we analyze a neural network trained on the MNIST dataset using only partial recordings, and characterize how the quality of our reverse engineering depends on the number of simultaneously recorded "neurons". We find that reverse engineering the nonlinear neural network is meaningfully possible if a sufficiently large number of neurons is recorded simultaneously, but that this number can be considerably smaller than the total number of neurons in the network. Moreover, recording many times from small random subsets of neurons yields surprisingly good performance. Applied to neuroscience, this suggests that, to approximate the input/output function of an actual neural system, we need to record from a much larger number of neurons. The kind of scaling analysis we perform here can, and arguably should, be used to calibrate approaches that can dramatically scale up the size of recorded datasets in neuroscience.
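To make the partial-recording setup concrete, below is a minimal sketch (not the authors' code) of the general idea: simulate many "recording sessions", each observing a random subset of one hidden layer of a small ReLU network, and stitch per-session regression estimates into a reconstruction of the next layer's weights. The layer sizes, the number of sessions, and the use of synthetic Gaussian stimuli in place of MNIST are all illustrative assumptions.

```python
# Hypothetical sketch: recover one weight matrix of a random ReLU network
# from many partial "recordings" of its input layer. Sizes and stimuli are
# assumptions for illustration, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h1, n_h2 = 50, 100, 20                  # layer sizes (assumed)
W1 = rng.normal(size=(n_h1, n_in)) / np.sqrt(n_in)
W2 = rng.normal(size=(n_h2, n_h1)) / np.sqrt(n_h1)

def forward(x):
    h1 = np.maximum(0.0, x @ W1.T)              # hidden layer 1 activations
    h2 = np.maximum(0.0, h1 @ W2.T)             # hidden layer 2 activations
    return h1, h2

# Many sessions, each recording a small random subset of h1 units
# together with the h2 units (the partially recorded regime).
n_sessions, n_samples, subset = 200, 500, 20
est = np.zeros_like(W2)
counts = np.zeros(n_h1)
for _ in range(n_sessions):
    idx = rng.choice(n_h1, size=subset, replace=False)
    x = rng.normal(size=(n_samples, n_in))      # stimuli (MNIST stand-in)
    h1, h2 = forward(x)
    # Least-squares regression of each h2 unit on the recorded h1 subset;
    # unrecorded h1 units and the ReLU make this an approximate estimate.
    coef, *_ = np.linalg.lstsq(h1[:, idx], h2, rcond=None)
    est[:, idx] += coef.T
    counts[idx] += 1

est /= np.maximum(counts, 1)                    # average across sessions
r = np.corrcoef(W2.ravel(), est.ravel())[0, 1]
print(f"correlation between true and recovered weights: {r:.3f}")
```

Sweeping `subset` (the number of simultaneously recorded units) and `n_sessions` in a sketch like this is one way to reproduce the kind of scaling analysis the abstract describes: reconstruction quality as a function of how many neurons are recorded at once versus how many times one records.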