Authors: Harrison Espino, Jeffrey L Krichmar
DOI: 10.1007/978-3-031-71533-4_3
Journal: From animals to animats: proceedings of the ... International Conference on Simulation of Adaptive Behavior
Volume: 14993, Pages: 27-38
Publication date: 2025-01-01 (Epub 2024-09-07)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12031638/pdf/
Vector-Based Navigation Inspired by Directional Place Cells.
We introduce a navigation algorithm inspired by the directional sensitivity observed in CA1 place cells of the rat hippocampus. These cells exhibit directional polarization characterized by vector fields converging on specific locations in the environment, known as ConSinks [8]. By sampling from a population of such cells at varying orientations, an optimal vector of travel toward a goal can be determined. Our proposed algorithm aims to emulate this mechanism for learning goal-directed navigation tasks. We employ a novel learning rule that integrates environmental reward signals with an eligibility trace to determine when a cell's directional sensitivity is eligible for update. Compared to state-of-the-art reinforcement learning algorithms, our approach demonstrates superior performance and speed in learning to navigate toward goals in obstacle-filled environments. Additionally, our algorithm exhibits behavior analogous to experimental findings: the mean ConSink location dynamically shifts toward a new goal shortly after that goal is introduced.