{"title":"Behavioral Decision-Making of Mobile Robots Simulating the Functions of Cerebellum, Basal Ganglia, and Hippocampus","authors":"Dongshu Wang;Qi Liu;Yihai Duan","doi":"10.1109/TAI.2025.3534150","DOIUrl":null,"url":null,"abstract":"In unknown environments, behavioral decision-making of mobile robots is a crucial research topic in the field of robotics applications. To address the low learning ability and the difficulty of learning from the unknown environments for mobile robots, this work proposes a new learning model that integrates the supervised learning of the cerebellum, reinforcement learning of the basal ganglia, and memory consolidation of the hippocampus. First, to reduce the impact of noise on inputs and enhance the network's efficiency, a multineuron winning strategy and the refinement of the top-<inline-formula><tex-math>$k$</tex-math></inline-formula> competition mechanism have been adopted. Second, to increase the network's learning speed, a negative learning mechanism has been designed, which allows the robot to avoid obstacles more quickly by weakening the synaptic connections between error neurons. Third, to enhance the decision ability of cerebellar supervised learning, simulating the hippocampal memory consolidation mechanism, memory replay during the agent's offline state enables autonomous learning in the absence of real-time interactions. Finally, to better adjust the roles of cerebellar supervised learning and basal ganglia reinforcement learning in robot behavioral decision-making, a new similarity indicator has been designed. Simulation experiments and real-world experiments validate the effectiveness of the proposed model in this work.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 6","pages":"1639-1650"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10855684/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Behavioral decision-making of mobile robots in unknown environments is a crucial research topic in robotics applications. To address mobile robots' low learning ability and the difficulty of learning in unknown environments, this work proposes a new learning model that integrates the supervised learning of the cerebellum, the reinforcement learning of the basal ganglia, and the memory consolidation of the hippocampus. First, to reduce the impact of input noise and enhance the network's efficiency, a multineuron winning strategy and a refinement of the top-$k$ competition mechanism are adopted. Second, to increase the network's learning speed, a negative learning mechanism is designed that allows the robot to avoid obstacles more quickly by weakening the synaptic connections between error neurons. Third, to enhance the decision ability of cerebellar supervised learning, the hippocampal memory consolidation mechanism is simulated: memory replay during the agent's offline state enables autonomous learning in the absence of real-time interaction. Finally, to better balance the roles of cerebellar supervised learning and basal ganglia reinforcement learning in robot behavioral decision-making, a new similarity indicator is designed. Simulation and real-world experiments validate the effectiveness of the proposed model.
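
The abstract names several mechanisms without giving their equations: a multineuron (top-$k$) winning strategy, a negative learning rule that weakens connections of error neurons, and offline memory replay in the style of hippocampal consolidation. The following is a minimal Python sketch of how such mechanisms are commonly realized; every function name, parameter, and update rule here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def top_k_winners(activations, k=3):
    """Select the k most active neurons (multi-neuron winning strategy).

    Keeping several winners instead of a single one is assumed to reduce
    the impact of input noise, as the abstract suggests.
    """
    return np.argsort(activations)[-k:]


def hebbian_update(weights, x, winners, lr=0.05):
    """Strengthen connections from the input to the winning neurons."""
    for j in winners:
        weights[:, j] += lr * (x - weights[:, j])
    return weights


def negative_learning(weights, x, error_neurons, lr=0.05):
    """Hypothetical anti-Hebbian step: weaken connections of neurons that
    produced a wrong action (e.g., a collision) so obstacles are avoided
    faster on later encounters."""
    for j in error_neurons:
        weights[:, j] -= lr * (x - weights[:, j])
    return weights


def offline_replay(weights, memory, epochs=5, k=3):
    """Replay stored (input, reward) pairs while the agent is offline,
    loosely mimicking hippocampal memory consolidation."""
    for _ in range(epochs):
        for x, reward in memory:
            activations = x @ weights
            winners = top_k_winners(activations, k)
            if reward >= 0:
                weights = hebbian_update(weights, x, winners)
            else:
                weights = negative_learning(weights, x, winners)
    return weights


# Toy usage: 8-dimensional sensor input, 16 output neurons, random replay buffer.
weights = rng.normal(scale=0.1, size=(8, 16))
memory = [(rng.normal(size=8), rng.choice([-1.0, 1.0])) for _ in range(20)]
weights = offline_replay(weights, memory)
```

The sketch only illustrates the flavor of the three update mechanisms; the paper's actual model additionally couples cerebellar supervised learning with basal ganglia reinforcement learning through a similarity indicator, which is not reproduced here.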