Investigating visual navigation using spiking neural network models of the insect mushroom bodies

Oluwaseyi Oladipupo Jesusanmi, Amany Azevedo Amin, Norbert Domcsek, James C. Knight, Andrew O. Philippides, Thomas Nowotny, Paul Graham
{"title":"Investigating visual navigation using spiking neural network models of the insect mushroom bodies","authors":"Oluwaseyi Oladipupo Jesusanmi, Amany Azevedo Amin, Norbert Domcsek, James C. Knight, Andrew O. Philippides, Thomas Nowotny, Paul Graham","doi":"10.3389/fphys.2024.1379977","DOIUrl":null,"url":null,"abstract":"Ants are capable of learning long visually guided foraging routes with limited neural resources. The visual scene memory needed for this behaviour is mediated by the mushroom bodies; an insect brain region important for learning and memory. In a visual navigation context, the mushroom bodies are theorised to act as familiarity detectors, guiding ants to views that are similar to those previously learned when first travelling along a foraging route. Evidence from behavioural experiments, computational studies and brain lesions all support this idea. Here we further investigate the role of mushroom bodies in visual navigation with a spiking neural network model learning complex natural scenes. By implementing these networks in GeNN–a library for building GPU accelerated spiking neural networks–we were able to test these models offline on an image database representing navigation through a complex outdoor natural environment, and also online embodied on a robot. The mushroom body model successfully learnt a large series of visual scenes (400 scenes corresponding to a 27 m route) and used these memories to choose accurate heading directions during route recapitulation in both complex environments. Through analysing our model’s Kenyon cell (KC) activity, we were able to demonstrate that KC activity is directly related to the respective novelty of input images. Through conducting a parameter search we found that there is a non-linear dependence between optimal KC to visual projection neuron (VPN) connection sparsity and the length of time the model is presented with an image stimulus. The parameter search also showed training the model on lower proportions of a route generally produced better accuracy when testing on the entire route. We embodied the mushroom body model and comparator visual navigation algorithms on a Quanser Q-car robot with all processing running on an Nvidia Jetson TX2. On a 6.5 m route, the mushroom body model had a mean distance to training route (error) of 0.144 ± 0.088 m over 5 trials, which was performance comparable to standard visual-only navigation algorithms. Thus, we have demonstrated that a biologically plausible model of the ant mushroom body can navigate complex environments both in simulation and the real world. Understanding the neural basis of this behaviour will provide insight into how neural circuits are tuned to rapidly learn behaviourally relevant information from complex environments and provide inspiration for creating bio-mimetic computer/robotic systems that can learn rapidly with low energy requirements.","PeriodicalId":504973,"journal":{"name":"Frontiers in Physiology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Physiology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fphys.2024.1379977","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Ants are capable of learning long visually guided foraging routes with limited neural resources. The visual scene memory needed for this behaviour is mediated by the mushroom bodies, an insect brain region important for learning and memory. In a visual navigation context, the mushroom bodies are theorised to act as familiarity detectors, guiding ants to views that are similar to those learned when first travelling along a foraging route. Evidence from behavioural experiments, computational studies and brain lesions all supports this idea. Here we further investigate the role of mushroom bodies in visual navigation with a spiking neural network model learning complex natural scenes. By implementing these networks in GeNN, a library for building GPU-accelerated spiking neural networks, we were able to test these models offline on an image database representing navigation through a complex outdoor natural environment, and also online, embodied on a robot. The mushroom body model successfully learnt a large series of visual scenes (400 scenes corresponding to a 27 m route) and used these memories to choose accurate heading directions during route recapitulation in both complex environments. By analysing our model’s Kenyon cell (KC) activity, we demonstrate that KC activity is directly related to the novelty of the input images. A parameter search revealed a non-linear dependence between the optimal KC to visual projection neuron (VPN) connection sparsity and the length of time for which the model is presented with an image stimulus. The parameter search also showed that training the model on lower proportions of a route generally produced better accuracy when testing on the entire route. We embodied the mushroom body model and comparator visual navigation algorithms on a Quanser Q-car robot, with all processing running on an Nvidia Jetson TX2. On a 6.5 m route, the mushroom body model had a mean distance to the training route (error) of 0.144 ± 0.088 m over 5 trials, performance comparable to standard visual-only navigation algorithms. Thus, we have demonstrated that a biologically plausible model of the ant mushroom body can navigate complex environments both in simulation and in the real world. Understanding the neural basis of this behaviour will provide insight into how neural circuits are tuned to rapidly learn behaviourally relevant information from complex environments, and will provide inspiration for creating bio-mimetic computer/robotic systems that can learn rapidly with low energy requirements.
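To make the familiarity-detection idea concrete, the sketch below is a minimal rate-based NumPy illustration of the same principle, not the authors’ spiking GeNN model: a fixed sparse random projection from visual projection neurons (VPNs) to Kenyon cells (KCs), a winner-take-all step that keeps the KC code sparse, and anti-Hebbian depression of KC-to-output weights so that previously learned views evoke little output activity (low novelty). Population sizes, the per-KC fan-in (standing in for the KC to VPN connection sparsity explored in the parameter search) and the learning rate are assumed values chosen only for illustration.

```python
# Minimal rate-based sketch of the mushroom-body familiarity principle.
# NOT the authors' spiking GeNN implementation: population sizes, fan-in,
# sparsity and learning rate below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

N_VPN = 360            # visual projection neurons, e.g. one per input pixel (assumed)
N_KC = 20_000          # Kenyon cells (assumed)
FAN_IN = 10            # VPN inputs sampled per KC (assumed fan-in)
KC_ACTIVE_FRAC = 0.05  # fraction of KCs allowed to fire, giving a sparse code (assumed)

# Fixed, sparse, random VPN -> KC projection.
vpn_to_kc = np.zeros((N_KC, N_VPN))
for kc in range(N_KC):
    vpn_to_kc[kc, rng.choice(N_VPN, FAN_IN, replace=False)] = 1.0

# Plastic KC -> output ("familiarity") weights, initially all 1.
kc_to_out = np.ones(N_KC)

def kc_code(view):
    """Sparse KC representation of a flattened view: keep only the most driven KCs."""
    drive = vpn_to_kc @ view
    k = int(KC_ACTIVE_FRAC * N_KC)
    threshold = np.partition(drive, -k)[-k]
    return (drive >= threshold).astype(float)

def train(view, lr=1.0):
    """Anti-Hebbian update: depress output weights of KCs active for a learned view."""
    global kc_to_out
    kc_to_out = np.clip(kc_to_out - lr * kc_code(view), 0.0, 1.0)

def novelty(view):
    """Summed output drive: low for familiar (trained) views, high for novel ones."""
    return kc_to_out @ kc_code(view)

# Usage: train on route views, then prefer the heading whose view is least novel.
# Random vectors are only a stand-in for the paper's 400 natural route images.
route_views = rng.random((50, N_VPN))
for v in route_views:
    train(v)
print("on-route view novelty :", novelty(route_views[10]))
print("off-route view novelty:", novelty(rng.random(N_VPN)))
```

In use, a navigating agent would compare the novelty scores of views sampled at candidate headings and steer toward the most familiar one, which is how such a model chooses heading directions during route recapitulation.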