Title: Leveraging motion perceptibility and deep reinforcement learning for visual control of nonholonomic mobile robots
Authors: Takieddine Soualhi, Nathan Crombez, Alexandre Lombard, Yassine Ruichek, Stéphane Galland
DOI: 10.1016/j.robot.2025.104920
Journal: Robotics and Autonomous Systems, Volume 189, Article 104920
Publication date: 2025-02-13 (Journal Article)
Impact factor: 4.3 (JCR Q1, Automation & Control Systems; CAS Region 2, Computer Science)
URL: https://www.sciencedirect.com/science/article/pii/S0921889025000065
Citations: 0
Abstract
This paper introduces a novel deep reinforcement learning framework to tackle the problem of visual servoing of nonholonomic mobile robots. The visual control of nonholonomic mobile robots becomes particularly challenging within the classical paradigm of visual servoing, mainly due to motion and visibility constraints, which make it impossible, for certain configurations, to reach a given desired pose without losing essential visual information from the camera's field of view. Previous work has demonstrated the effectiveness of deep reinforcement learning in addressing various vision-based robotics tasks. In light of this, we propose a framework that integrates deep recurrent policies, intrinsic motivation, and a novel auxiliary task that leverages the interaction matrix, the core of classical visual servoing approaches, to address the problem of vision-based control of nonholonomic robotic systems. First, we analyze the influence of the nonholonomic constraints on control policy learning. Subsequently, we validate and evaluate our approach in both simulated and real-world environments. Our approach exhibits an emergent control behavior that enables the robot to accurately attain the desired pose while maintaining the desired visual content within the camera's field of view. The proposed method outperforms state-of-the-art approaches, demonstrating its effectiveness, robustness, and accuracy.
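For context on the interaction matrix the abstract refers to: in classical image-based visual servoing (IBVS), a 2×6 interaction matrix per point feature relates image-feature velocities to the camera's 6-DoF spatial velocity, and stacking these matrices yields the standard control law. The sketch below is not taken from the paper; it is a minimal, hedged illustration of this classical construction with hypothetical function names (`interaction_matrix`, `ibvs_velocity`) and an assumed unit feature depth.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix L for one point feature (x, y) in
    normalized image coordinates at depth Z, so that s_dot = L @ v,
    where v = (vx, vy, vz, wx, wy, wz) is the camera spatial velocity."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,        -(1.0 + x**2), y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y**2,   -x * y,        -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law v = -gain * pinv(L) @ (s - s*),
    stacking one 2x6 interaction matrix per point feature."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: four point features, current vs. desired image positions,
# all assumed at depth Z = 1 m.
current = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
desired = [(0.12, 0.12), (-0.12, 0.12), (-0.12, -0.12), (0.12, -0.12)]
v = ibvs_velocity(current, desired, depths=[1.0] * 4)
print(v.shape)  # 6-vector of camera velocities
```

For a nonholonomic mobile robot, only a subset of these six velocity components is actuated (e.g., forward and angular velocity), which is exactly why the classical law alone can fail to converge for some configurations, as the abstract notes.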
About the journal:
Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory based robot control and learning in the context of autonomous systems.
The journal also carries articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.