Improving Latent Factor Analysis via Self-supervised Signal Extracting
Y. Huang, Zhuliang Yu
2022 2nd International Conference on Bioinformatics and Intelligent Computing, January 21, 2022. DOI: 10.1145/3523286.3524586
The computational neuroscience community has found that neural population activity exhibits stable low-dimensional structure. Latent variable models based on statistical machine learning and deep neural networks have revealed informative low-dimensional representations with promising performance and efficiency. To address the identifiability and interpretability issues caused by noise in neural spike trains, recent work has drawn on progress in representation learning to better capture the universality and variability of neural spikes. However, an important but less studied solution to this issue is signal denoising, which may be simpler and more practical. In this work, we introduce a simple yet effective improvement that extracts the informative signal from noisy neural data by decomposing the latent space into one part relevant to the underlying neural patterns and one part irrelevant to them. We train our model in a self-supervised manner, and we show that it consistently improves the performance of the baseline model on a motor task dataset.
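The abstract does not include an implementation, but the core idea of splitting the latent space into a pattern-relevant part and a pattern-irrelevant part can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the linear encoder/decoder, the dimensions (50 neurons, a 10-dimensional latent split 6/4), and the mean-squared reconstruction loss all stand in for the deep networks and training objective of the actual paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 50 neurons, a
# 10-dimensional latent split into 6 signal dims and 4 noise dims.
n_neurons, d_signal, d_noise = 50, 6, 4
d_latent = d_signal + d_noise

# Random linear maps stand in for the learned encoder/decoder networks.
W_enc = rng.normal(0.0, 0.1, (d_latent, n_neurons))
W_dec = rng.normal(0.0, 0.1, (n_neurons, d_latent))

def encode(x):
    """Map a spike-count vector to the latent space, then split it into
    a pattern-relevant part and a pattern-irrelevant part."""
    z = W_enc @ x
    return z[:d_signal], z[d_signal:]

def reconstruct(z_signal, z_noise):
    """Self-supervised target: reconstruct the input from the full latent,
    so no behavioral labels are needed for training."""
    return W_dec @ np.concatenate([z_signal, z_noise])

x = rng.poisson(3.0, n_neurons).astype(float)  # synthetic spike counts
z_sig, z_irr = encode(x)
x_hat = reconstruct(z_sig, z_irr)

# The reconstruction error would drive training; downstream analysis of
# the neural patterns would then use only z_sig, discarding z_irr.
loss = np.mean((x - x_hat) ** 2)
```

In a trained model, gradient descent on the reconstruction loss (plus whatever regularizer separates the two subspaces) would shape `z_sig` to carry the informative signal while `z_irr` absorbs the noise; this sketch only shows the forward pass and the decomposition.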