{"title":"潜在嵌入:大脑与环境相互作用的基本表征","authors":"Yaning Han, Xiaoting Hou, Chuanliang Han","doi":"10.1002/brx2.40","DOIUrl":null,"url":null,"abstract":"<p>The brain governs the behaviors of natural species (including humans and animals), which serves as a central hub integrating incoming sensory signals from the constantly changing environment. Recent cutting-edge technologies in neuroscience from behavioral<span><sup>1</sup></span> and neural levels<span><sup>2</sup></span> have enabled precise and comprehensive measurements. However, the environment–brain–behavior dataset is difficult to interpret because of its high-dimensional nature. To address this challenge, latent embedding has emerged as a promising technique with the property of dimensionality reduction, which can facilitate the identification of common environment–brain–behavior patterns (Figure 1).</p><p>The main idea of extracting latent embeddings is to eliminate dataset redundancy. It requires an algorithm to transform the raw dataset to a new low-dimensional feature space with little information loss. Classically, principal component analysis has been used to linearly transform raw data to an orthogonal space. However, owing to the existence of non-linear structures in nature, the linear transform cannot avoid high information loss in low dimensions. Thus, several non-linear dimensionality reduction methods (t-distributed stochastic neighbor embedding [t-SNE]<span><sup>4</sup></span> and uniform manifold approximation and projection for dimension reduction [UMAP]<span><sup>5</sup></span>) have been developed. However, their non-linear features can reduce the interpretability. For instance, the hippocampus is responsible for representing spatial information and the direction of travel, but pure data-driven latent embeddings (t-SNE or UMAP) may confuse these two functions. These two functions are executed simultaneously, which requires interpretable hypotheses to separate them. 
Pure data-driven methods cannot introduce existing assumptions to refine latent embeddings. However, using a recent neural network encoder (CEBRA),<span><sup>3</sup></span> this problem can be fully solved. CEBRA addresses this issue by incorporating both supervised and self-supervised learning approaches. By providing supervision through space or direction labels, CEBRA can identify distinct coding patterns in the neural activities of the hippocampus across different latent dimensions, ensuring dimensional alignment with interpretable prior knowledge.</p><p>The main process of CEBRA uses contrastive learning, which was developed to obtain low-dimensional embeddings that are both interpretable and exhibit high performance across various applications.<span><sup>3</sup></span> The contrastive learning technique aims to discover common and distinguishable attributes by contrasting samples, and it optimizes joint latent embeddings from multiple sources, including sensory inputs, brain activities, and behaviors. CEBRA's non-linear encoder combines input data from multiple modalities and uses auxiliary labels to enhance the interpretability. As a result, CEBRA can be applied to both static and dynamic variables, making it a versatile tool for analyzing environment–brain–behavior data. These characteristics enable CEBRA to identify valuable differences across multiple subjects and generate consistent latent embeddings that accurately represent the intrinsic and generalizable information flow across various types of data. 
This alignment enables CEBRA to accurately predict the locomotion of animals, identify active or passive behavior in primates, and represent stable neural patterns across different recording technologies, subjects, and species using its latent embeddings.</p><p>One amazing result of CEBRA is the reconstruction of videos from mouse visual cortical areas.<span><sup>3</sup></span> Neural activities during natural videos could be encoded in latent embeddings and then decoded with great accuracy. The video has millions of pixel dimensions with temporal dynamics, which have been compressed into neural representations. In this case, CEBRA can further compress them into only three latent dimensions, containing sufficient information to restore the raw videos. These findings demonstrate CEBRA's ability to identify common latent embeddings from visual input to brain activities. Furthermore, they suggest that the brain can compress and process external information in extremely low dimensions. Latent embeddings contain the intricate interactions between the world and the brain without any loss of information.</p><p>Although CEBRA demonstrates the state-of-the-art (SOTA) decoding accuracy of natural videos, the decoding occurs not in the frame contents but in the indexes. Recovering the visual inputs from neural activities is still an issue. One problem is the limitation of the number of recording neurons in current recording technologies, which lose considerable amounts of information. The direction but also the challenge is recording more neuronal activities and their physiological connections. This could reduce the error in estimating neural activity correlations. Another issue is the unclear analytic expression of latent embeddings. This dimness is primarily due to insufficient labeled variables, such as changing edges and instance deformations in videos. 
In the future, the advancement of neuroethological measurement technologies is essential to further enhance the performance of latent embeddings. Powerful neural networks, such as transformers, may be employed to process increasingly large datasets. An analytical expression is crucial to understand the intricacies of latent embeddings. CEBRA attains interpretability by utilizing low-order variables, such as position and velocity. However, to achieve high-order interpretability, more complex symbolic regression mechanisms are necessary.</p><p>Latent embeddings serve as a crucial intermediary for the transmission of information from the external world to the brain. However, the applications of latent embeddings are not only restricted to the brain. Under the background of clinical big data, latent embeddings have considerable potential for revealing the inner mechanism of disease related to the brain, genes, and other physiological indicators. The complex interactions between drugs and individuals could also be simplified in latent embeddings. 
Nevertheless, much research is needed to fully understand the underlying meaning of latent embeddings.</p><p><b>Yaning Han</b>, <b>Xiaoting Hou</b>, and <b>Chuanliang Han</b>: Conceptualization; writing – original draft; writing – review & editing.</p><p>The authors declare no competing interests.</p>","PeriodicalId":94303,"journal":{"name":"Brain-X","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/brx2.40","citationCount":"0","resultStr":"{\"title\":\"Latent embeddings: An essential representation of brain–environment interactions\",\"authors\":\"Yaning Han, Xiaoting Hou, Chuanliang Han\",\"doi\":\"10.1002/brx2.40\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The brain governs the behaviors of natural species (including humans and animals), which serves as a central hub integrating incoming sensory signals from the constantly changing environment. Recent cutting-edge technologies in neuroscience from behavioral<span><sup>1</sup></span> and neural levels<span><sup>2</sup></span> have enabled precise and comprehensive measurements. However, the environment–brain–behavior dataset is difficult to interpret because of its high-dimensional nature. To address this challenge, latent embedding has emerged as a promising technique with the property of dimensionality reduction, which can facilitate the identification of common environment–brain–behavior patterns (Figure 1).</p><p>The main idea of extracting latent embeddings is to eliminate dataset redundancy. It requires an algorithm to transform the raw dataset to a new low-dimensional feature space with little information loss. Classically, principal component analysis has been used to linearly transform raw data to an orthogonal space. 
However, owing to the existence of non-linear structures in nature, the linear transform cannot avoid high information loss in low dimensions. Thus, several non-linear dimensionality reduction methods (t-distributed stochastic neighbor embedding [t-SNE]<span><sup>4</sup></span> and uniform manifold approximation and projection for dimension reduction [UMAP]<span><sup>5</sup></span>) have been developed. However, their non-linear features can reduce the interpretability. For instance, the hippocampus is responsible for representing spatial information and the direction of travel, but pure data-driven latent embeddings (t-SNE or UMAP) may confuse these two functions. These two functions are executed simultaneously, which requires interpretable hypotheses to separate them. Pure data-driven methods cannot introduce existing assumptions to refine latent embeddings. However, using a recent neural network encoder (CEBRA),<span><sup>3</sup></span> this problem can be fully solved. CEBRA addresses this issue by incorporating both supervised and self-supervised learning approaches. By providing supervision through space or direction labels, CEBRA can identify distinct coding patterns in the neural activities of the hippocampus across different latent dimensions, ensuring dimensional alignment with interpretable prior knowledge.</p><p>The main process of CEBRA uses contrastive learning, which was developed to obtain low-dimensional embeddings that are both interpretable and exhibit high performance across various applications.<span><sup>3</sup></span> The contrastive learning technique aims to discover common and distinguishable attributes by contrasting samples, and it optimizes joint latent embeddings from multiple sources, including sensory inputs, brain activities, and behaviors. CEBRA's non-linear encoder combines input data from multiple modalities and uses auxiliary labels to enhance the interpretability. 
As a result, CEBRA can be applied to both static and dynamic variables, making it a versatile tool for analyzing environment–brain–behavior data. These characteristics enable CEBRA to identify valuable differences across multiple subjects and generate consistent latent embeddings that accurately represent the intrinsic and generalizable information flow across various types of data. This alignment enables CEBRA to accurately predict the locomotion of animals, identify active or passive behavior in primates, and represent stable neural patterns across different recording technologies, subjects, and species using its latent embeddings.</p><p>One amazing result of CEBRA is the reconstruction of videos from mouse visual cortical areas.<span><sup>3</sup></span> Neural activities during natural videos could be encoded in latent embeddings and then decoded with great accuracy. The video has millions of pixel dimensions with temporal dynamics, which have been compressed into neural representations. In this case, CEBRA can further compress them into only three latent dimensions, containing sufficient information to restore the raw videos. These findings demonstrate CEBRA's ability to identify common latent embeddings from visual input to brain activities. Furthermore, they suggest that the brain can compress and process external information in extremely low dimensions. Latent embeddings contain the intricate interactions between the world and the brain without any loss of information.</p><p>Although CEBRA demonstrates the state-of-the-art (SOTA) decoding accuracy of natural videos, the decoding occurs not in the frame contents but in the indexes. Recovering the visual inputs from neural activities is still an issue. One problem is the limitation of the number of recording neurons in current recording technologies, which lose considerable amounts of information. The direction but also the challenge is recording more neuronal activities and their physiological connections. 
This could reduce the error in estimating neural activity correlations. Another issue is the unclear analytic expression of latent embeddings. This dimness is primarily due to insufficient labeled variables, such as changing edges and instance deformations in videos. In the future, the advancement of neuroethological measurement technologies is essential to further enhance the performance of latent embeddings. Powerful neural networks, such as transformers, may be employed to process increasingly large datasets. An analytical expression is crucial to understand the intricacies of latent embeddings. CEBRA attains interpretability by utilizing low-order variables, such as position and velocity. However, to achieve high-order interpretability, more complex symbolic regression mechanisms are necessary.</p><p>Latent embeddings serve as a crucial intermediary for the transmission of information from the external world to the brain. However, the applications of latent embeddings are not only restricted to the brain. Under the background of clinical big data, latent embeddings have considerable potential for revealing the inner mechanism of disease related to the brain, genes, and other physiological indicators. The complex interactions between drugs and individuals could also be simplified in latent embeddings. 
Nevertheless, much research is needed to fully understand the underlying meaning of latent embeddings.</p><p><b>Yaning Han</b>, <b>Xiaoting Hou</b>, and <b>Chuanliang Han</b>: Conceptualization; writing – original draft; writing – review & editing.</p><p>The authors declare no competing interests.</p>\",\"PeriodicalId\":94303,\"journal\":{\"name\":\"Brain-X\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-10-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/brx2.40\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Brain-X\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/brx2.40\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Brain-X","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/brx2.40","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Latent embeddings: An essential representation of brain–environment interactions
The brain governs the behavior of natural species (including humans and animals) and serves as a central hub that integrates incoming sensory signals from a constantly changing environment. Recent cutting-edge neuroscience technologies at the behavioral1 and neural2 levels have enabled precise and comprehensive measurements. However, the resulting environment–brain–behavior datasets are difficult to interpret because of their high dimensionality. To address this challenge, latent embedding has emerged as a promising dimensionality-reduction technique that can facilitate the identification of common environment–brain–behavior patterns (Figure 1).
The main idea of extracting latent embeddings is to eliminate dataset redundancy: an algorithm transforms the raw dataset into a new low-dimensional feature space with little information loss. Classically, principal component analysis (PCA) has been used to linearly transform raw data into an orthogonal space. However, because natural data contain non-linear structure, a linear transform cannot avoid substantial information loss at low dimensions. Several non-linear dimensionality reduction methods have therefore been developed, such as t-distributed stochastic neighbor embedding (t-SNE)4 and uniform manifold approximation and projection (UMAP)5. However, their non-linear features can reduce interpretability. For instance, the hippocampus represents both spatial position and direction of travel, but purely data-driven latent embeddings (t-SNE or UMAP) may conflate these two functions. Because the two functions are executed simultaneously, interpretable hypotheses are required to separate them. Purely data-driven methods cannot incorporate such prior assumptions to refine latent embeddings. A recent neural network encoder, CEBRA,3 largely solves this problem by combining supervised and self-supervised learning. By providing supervision through position or direction labels, CEBRA can identify distinct coding patterns in hippocampal neural activity across different latent dimensions, aligning each dimension with interpretable prior knowledge.
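The failure of a linear transform on non-linear structure can be illustrated with a minimal sketch. The snippet below implements PCA via SVD on synthetic data lying on a circle (an intrinsically one-dimensional but non-linear manifold); all variable names and the data-generation choices are hypothetical illustrations, not anything from the CEBRA paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 points on a circle, linearly mixed into a 10-D space plus small noise.
# The data are intrinsically 1-D, but the manifold is non-linear.
theta = rng.uniform(0, 2 * np.pi, 500)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # (500, 2)
mixing = rng.normal(size=(2, 10))
X = circle @ mixing + 0.01 * rng.normal(size=(500, 10))     # (500, 10)

def pca(X, k):
    """Project X onto its top-k principal components and reconstruct linearly."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                       # low-dimensional scores
    X_hat = Z @ Vt[:k] + X.mean(axis=0)     # linear reconstruction
    return Z, X_hat

def relative_error(X, X_hat):
    """Relative Frobenius-norm reconstruction error."""
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)

Z1, X1 = pca(X, 1)   # 1-D: a linear projection cannot capture the circle
Z2, X2 = pca(X, 2)   # 2-D: the circle lies in a 2-D linear subspace, so loss is small
print(relative_error(X, X1), relative_error(X, X2))
```

A non-linear method (or, here, simply the angle theta) could represent these data in a single dimension, which is exactly the information a 1-D linear projection discards.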
CEBRA's core mechanism is contrastive learning, which was developed to obtain low-dimensional embeddings that are both interpretable and high-performing across various applications.3 Contrastive learning discovers common and distinguishing attributes by contrasting samples, and CEBRA uses it to optimize joint latent embeddings from multiple sources, including sensory inputs, brain activity, and behavior. CEBRA's non-linear encoder combines input data from multiple modalities and uses auxiliary labels to enhance interpretability. As a result, CEBRA can be applied to both static and dynamic variables, making it a versatile tool for analyzing environment–brain–behavior data. These characteristics enable CEBRA to identify meaningful differences across subjects and to generate consistent latent embeddings that accurately capture the intrinsic, generalizable information flow across various types of data. This alignment enables CEBRA to accurately predict animal locomotion, identify active versus passive behavior in primates, and represent stable neural patterns across different recording technologies, subjects, and species.
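The contrastive objective underlying CEBRA-style training can be sketched with an InfoNCE-style loss: an anchor sample is pulled toward a "positive" sample (e.g., a nearby time point or one sharing a behavior label) and pushed away from "negatives". This is a toy sketch, not the actual CEBRA implementation; the linear encoder and all data here are hypothetical, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x, W):
    """Toy encoder: linear map to latent space followed by L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: -log softmax probability of the positive."""
    pos_sim = anchor @ positive / temperature      # scalar similarity
    neg_sim = negatives @ anchor / temperature     # (n_neg,) similarities
    logits = np.concatenate([[pos_sim], neg_sim])
    return -pos_sim + np.log(np.sum(np.exp(logits)))

# Hypothetical data: the positive is a slightly perturbed copy of the anchor
# (a nearby time point); negatives are unrelated samples.
W = rng.normal(size=(20, 3))                       # 20-D "neural" -> 3-D latent
x_anchor = rng.normal(size=20)
x_positive = x_anchor + 0.05 * rng.normal(size=20)
x_negatives = rng.normal(size=(10, 20))

z_a = encode(x_anchor, W)
z_p = encode(x_positive, W)
z_n = encode(x_negatives, W)
loss = info_nce(z_a, z_p, z_n)
print(loss)
```

Minimizing this loss over the encoder parameters is what shapes the latent space; supervision enters through how positives are chosen (time, position, or direction labels).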
One striking result of CEBRA is the reconstruction of videos from mouse visual cortical activity.3 Neural activity recorded during natural videos could be encoded in latent embeddings and then decoded with high accuracy. A video has millions of pixel dimensions with temporal dynamics, which the brain compresses into neural representations; CEBRA can compress these further into only three latent dimensions while retaining enough information to recover the presented video. These findings demonstrate CEBRA's ability to identify common latent embeddings linking visual input to brain activity. Furthermore, they suggest that the brain can compress and process external information in extremely low dimensions. Latent embeddings thus capture the intricate interactions between the world and the brain with minimal loss of information.
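Index-based video decoding of this kind can be sketched as nearest-neighbor matching in the latent space: each test embedding is matched to its closest training embedding, and the corresponding frame index is retrieved. The smooth 3-D latent trajectory below is synthetic and purely illustrative; in the actual study the embeddings are learned from neural recordings.

```python
import numpy as np

rng = np.random.default_rng(2)

n_frames = 200
t = np.linspace(0, 2 * np.pi, n_frames)
# Hypothetical 3-D latent trajectory varying smoothly with frame index
latents = np.stack([np.cos(t), np.sin(t), t / (2 * np.pi)], axis=1)

# Two "repeats" of the same video: train and test embeddings differ only by noise
train = latents + 0.005 * rng.normal(size=latents.shape)
test = latents + 0.005 * rng.normal(size=latents.shape)

def decode_frame_indices(test_emb, train_emb):
    """Nearest-neighbor decoding: return the training frame index closest to each test point."""
    d = np.linalg.norm(test_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

pred = decode_frame_indices(test, train)
median_err = np.median(np.abs(pred - np.arange(n_frames)))
print(median_err)
```

Because decoding retrieves indices of previously seen frames rather than synthesizing pixels, it cannot, by itself, recover genuinely novel visual input, which is the limitation discussed next.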
Although CEBRA achieves state-of-the-art (SOTA) decoding accuracy on natural videos, the decoding operates on frame indices rather than on frame contents. Recovering the visual input itself from neural activity remains an open problem. One limitation is the number of neurons that current recording technologies can capture, which discards considerable information. The way forward, and also the challenge, is to record more neuronal activity together with its physiological connections; this could reduce the error in estimating neural activity correlations. Another issue is the lack of a clear analytic expression for latent embeddings, which stems primarily from insufficient labeled variables, such as changing edges and instance deformations in videos. In the future, advances in neuroethological measurement technologies will be essential to further enhance the performance of latent embeddings. Powerful neural networks, such as transformers, may be employed to process increasingly large datasets. An analytical expression is crucial for understanding the intricacies of latent embeddings: CEBRA attains interpretability by using low-order variables, such as position and velocity, but high-order interpretability will require more complex symbolic regression mechanisms.
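What an "analytic expression for a latent dimension" might look like can be sketched with a lightweight stand-in for symbolic regression: fitting candidate low-order polynomials to a latent coordinate and keeping the simplest one that explains the data. True symbolic regression searches over expression trees; the latent coordinate and its dependence on position below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

position = np.linspace(-1, 1, 100)                    # labeled behavioral variable
latent = 0.5 * position**2 - 0.2 * position + 0.1     # hypothetical latent coordinate
latent += 0.001 * rng.normal(size=position.shape)     # measurement noise

def best_polynomial(x, y, max_degree=5, tol=1e-4):
    """Return the lowest-degree polynomial whose mean squared residual falls below tol."""
    for degree in range(max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        residual = np.mean((np.polyval(coeffs, x) - y) ** 2)
        if residual < tol:
            return degree, coeffs
    return max_degree, coeffs

degree, coeffs = best_polynomial(position, latent)
print(degree, coeffs)   # recovers a degree-2 expression with coefficients near [0.5, -0.2, 0.1]
```

Preferring the lowest adequate degree is a crude stand-in for the parsimony pressure that real symbolic regression applies when searching for interpretable expressions.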
Latent embeddings serve as a crucial intermediary for the transmission of information from the external world to the brain. However, their applications are not restricted to the brain. In the context of clinical big data, latent embeddings have considerable potential for revealing the inner mechanisms of diseases related to the brain, genes, and other physiological indicators. The complex interactions between drugs and individuals could also be simplified through latent embeddings. Nevertheless, much research is still needed to fully understand the underlying meaning of latent embeddings.
Yaning Han, Xiaoting Hou, and Chuanliang Han: Conceptualization; writing – original draft; writing – review & editing.