Behavioral Decision-Making of Mobile Robots Simulating the Functions of Cerebellum, Basal Ganglia, and Hippocampus

Dongshu Wang;Qi Liu;Yihai Duan
IEEE Transactions on Artificial Intelligence, vol. 6, no. 6, pp. 1639-1650
DOI: 10.1109/TAI.2025.3534150
Published: 2025-01-27
https://ieeexplore.ieee.org/document/10855684/

Abstract

In unknown environments, behavioral decision-making of mobile robots is a crucial research topic in the field of robotics applications. To address mobile robots' limited learning ability and the difficulty of learning in unknown environments, this work proposes a new learning model that integrates the supervised learning of the cerebellum, the reinforcement learning of the basal ganglia, and the memory consolidation of the hippocampus. First, to reduce the impact of input noise and enhance the network's efficiency, a multineuron winning strategy and a refined top-$k$ competition mechanism are adopted. Second, to increase the network's learning speed, a negative learning mechanism is designed that weakens the synaptic connections of error neurons, allowing the robot to avoid obstacles more quickly. Third, to enhance the decision ability of cerebellar supervised learning, the hippocampal memory consolidation mechanism is simulated: memory replay while the agent is offline enables autonomous learning in the absence of real-time interaction. Finally, to better balance the roles of cerebellar supervised learning and basal ganglia reinforcement learning in robot behavioral decision-making, a new similarity indicator is designed. Simulation and real-world experiments validate the effectiveness of the proposed model.
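The three learning mechanisms named above can be illustrated with a minimal sketch. This is not the paper's implementation; the array shapes, learning rates, and function names (`top_k_winners`, `update_weights`, `offline_replay`) are illustrative assumptions, showing only the general ideas of top-$k$ competition, negative learning on error neurons, and offline memory replay.

```python
import numpy as np

def top_k_winners(activations, k=3):
    """Multineuron winning strategy: instead of a single winner-take-all
    neuron, the k most active neurons share the update, which smooths
    out the effect of input noise."""
    return np.argsort(activations)[-k:]

def update_weights(weights, x, winners, lr=0.1, neg_lr=0.05, error=False):
    """Hebbian-style update on the winning neurons. If the action led to
    an error (e.g. a collision), the synaptic connections of the
    responsible neurons are weakened instead -- the negative-learning idea."""
    for j in winners:
        if error:
            weights[j] -= neg_lr * x                 # weaken error neurons
        else:
            weights[j] += lr * (x - weights[j])      # move winner toward input
    return weights

def offline_replay(weights, memory, epochs=5, k=3):
    """Hippocampus-style consolidation: replay stored (input, error)
    experiences while the agent is offline, so learning continues
    without real-time interaction."""
    for _ in range(epochs):
        for x, error in memory:
            acts = weights @ x
            winners = top_k_winners(acts, k)
            weights = update_weights(weights, x, winners, error=error)
    return weights
```

Under this sketch, rows of `weights` are neurons, `x` is a sensory input vector, and `memory` is a list of replayed experiences; the actual model additionally arbitrates between the cerebellar and basal ganglia pathways via the similarity indicator, which is not shown here.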