{"title":"A rapid iterative trajectory planning method for automated parking through differential flatness","authors":"Zhouheng Li , Lei Xie , Cheng Hu , Hongye Su","doi":"10.1016/j.robot.2024.104816","DOIUrl":"10.1016/j.robot.2024.104816","url":null,"abstract":"<div><div>As autonomous driving continues to advance, automated parking is becoming increasingly essential. However, significant challenges arise when implementing path velocity decomposition (PVD) trajectory planning for automated parking. The primary challenge is ensuring rapid and precise collision-free trajectory planning, which is often in conflict. The secondary challenge involves maintaining sufficient control feasibility of the planned trajectory, particularly at gear shifting points (GSP). This paper proposes a PVD-based rapid iterative trajectory planning (RITP) method to solve the above challenges. The proposed method effectively balances the necessity for time efficiency and precise collision avoidance through a novel collision avoidance framework. Moreover, it enhances the overall control feasibility of the planned trajectory by incorporating the vehicle kinematics model and including terminal smoothing constraints (TSC) at GSP during path planning. Specifically, the proposed method leverages differential flatness to ensure the planned path adheres to the vehicle kinematic model. Additionally, it utilizes TSC to maintain curvature continuity at GSP, thereby enhancing the control feasibility of the overall trajectory. The simulation results demonstrate superior time efficiency and tracking errors compared to model-integrated and other iteration-based trajectory planning methods. In the real-world experiment, the proposed method was implemented and validated on a ROS-based vehicle, demonstrating the applicability of the RITP method for real vehicles.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"182 ","pages":"Article 104816"},"PeriodicalIF":4.3,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Mapless navigation via Hierarchical Reinforcement Learning with memory-decaying novelty"
Yan Gao, Feiqiang Lin, Boliang Cai, Jing Wu, Changyun Wei, Raphael Grech, Ze Ji
Robotics and Autonomous Systems, vol. 182, Article 104815. DOI: 10.1016/j.robot.2024.104815. Published 2024-09-20.
Abstract: Hierarchical Reinforcement Learning (HRL) has shown superior performance on mapless navigation tasks. However, it remains limited in unstructured environments containing terrain such as long corridors and dead corners, which can lead to local minima, because most HRL-based mapless navigation methods employ a simplified reward setting and exploration strategy. In this work, we propose a novel reward function for training the high-level (HL) policy that contains two components: an extrinsic reward and an intrinsic reward. The extrinsic reward encourages the robot to move towards the target location, while the intrinsic reward is computed from novelty, episode memory, and memory decaying, enabling the agent to explore spontaneously. We also design a novel neural network structure that incorporates an LSTM network to give the agent memory and reasoning capabilities. We test our method in unknown environments and in specific scenarios prone to the local minimum problem, evaluating both navigation performance and the ability to escape local minima. The results show that our method significantly increases the success rate compared to advanced RL-based methods, achieving a maximum improvement of nearly 28%, and that it effectively addresses the local minimum issue, especially in cases where the baselines fail completely. Numerous ablation studies consistently confirm the effectiveness of the proposed reward function and network structure.
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S0921889024001994/pdfft?md5=d8582d1a21a5b1405a794b5c34b147fa&pid=1-s2.0-S0921889024001994-main.pdf
"Control barrier function based visual servoing for Mobile Manipulator Systems under functional limitations"
Shahab Heshmati-Alamdari, Maryam Sharifi, George C. Karras, George K. Fourlas
Robotics and Autonomous Systems, vol. 182, Article 104813. DOI: 10.1016/j.robot.2024.104813. Published 2024-09-19.
Abstract: This paper proposes a new control strategy for Mobile Manipulator Systems (MMSs) that integrates image-based visual servoing (IBVS) to address operational limitations and safety constraints. The approach, built on the concept of control barrier functions (CBFs), handles a range of operational challenges, including visibility constraints, manipulator joint limits, predefined system velocity bounds, and system dynamic uncertainties. The control strategy has a two-tiered structure. At the first level, a CBF-IBVS controller computes control commands while accounting for field-of-view (FoV) constraints; these commands are then mapped to the joint-level configuration of the MMS via null-space techniques, respecting the system's operational limits. At the second level, a CBF velocity controller for the entire MMS tracks the joint-level commands, ensuring compliance with the predefined velocity limits as well as the safety of the combined system dynamics. The strategy offers superior transient and steady-state responses and heightened resilience to disturbances and modeling uncertainties, and its low computational complexity allows real-time operation on an onboard computer. Simulation results show improved performance and system safety compared to conventional IBVS methods, indicating that the approach effectively addresses the operational limitations and safety constraints of mobile manipulator systems and is suitable for practical applications.
{"title":"A survey of demonstration learning","authors":"André Correia, Luís A. Alexandre","doi":"10.1016/j.robot.2024.104812","DOIUrl":"10.1016/j.robot.2024.104812","url":null,"abstract":"<div><p>With the fast improvement of machine learning, reinforcement learning (RL) has been used to automate human tasks in different areas. However, training such agents is difficult and restricted to expert users. Moreover, it is mostly limited to simulation environments due to the high cost and safety concerns of interactions in the real-world. Demonstration Learning is a paradigm in which an agent learns to perform a task by imitating the behavior of an expert shown in demonstrations. Learning from demonstration accelerates the learning process by improving sample efficiency, while also reducing the effort of the programmer. Because the task is learned without interacting with the environment, demonstration learning allows the automation of a wide range of real-world applications such as robotics and healthcare. This paper provides a survey of demonstration learning, where we formally introduce the demonstration problem along with its main challenges and provide a comprehensive overview of the process of learning from demonstrations from the creation of the demonstration data set, to learning methods from demonstrations, and optimization by combining demonstration learning with different machine learning methods. We also review the existing benchmarks and identify their strengths and limitations. Additionally, we discuss the advantages and disadvantages of the paradigm as well as its main applications. Lastly, we discuss the open problems and future research directions of the field.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"182 ","pages":"Article 104812"},"PeriodicalIF":4.3,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0921889024001969/pdfft?md5=ca55398684a5261baba6c83e357cba9b&pid=1-s2.0-S0921889024001969-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142229289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Model-less optimal visual control of tendon-driven continuum robots using recurrent neural network-based neurodynamic optimization"
Shuai He, Chaorong Zou, Zhen Deng, Weiwei Liu, Bingwei He, Jianwei Zhang
Robotics and Autonomous Systems, vol. 182, Article 104811. DOI: 10.1016/j.robot.2024.104811. Published 2024-09-10.
Abstract: Tendon-driven continuum robots (TDCRs) have infinite degrees of freedom and high flexibility, posing challenges for accurate modeling and autonomous control, especially in confined environments. This paper presents a model-less optimal visual control (MLOVC) method that uses neurodynamic optimization to enable autonomous target tracking of TDCRs in confined environments. The TDCR's kinematics are estimated online from sensory data, establishing a connection between the actuator input and the visual features. An optimal visual servoing method based on quadratic programming (QP) is developed to ensure precise target tracking without violating the robot's physical constraints, and an inverse-free recurrent neural network (RNN)-based neurodynamic optimization method is designed to solve the resulting QP problem. Comparative simulations and experiments demonstrate that the proposed method outperforms existing methods in target tracking accuracy and computational efficiency, and the RNN-based controller successfully achieves target tracking within constraints in confined environments.
{"title":"Bio-inspired classification and evolution of multirotor Micro Aerial Vehicles (MAVs): A comprehensive review","authors":"Syed Waqar Hameed , Nursultan Imanberdiyev , Efe Camci , Wei-Yun Yau , Mir Feroskhan","doi":"10.1016/j.robot.2024.104802","DOIUrl":"10.1016/j.robot.2024.104802","url":null,"abstract":"<div><div>Multirotor Micro Aerial Vehicles (MAVs) have become essential in many applications like surveillance, disaster management, and aerial inspection. The diverse demands of these applications have led to numerous design innovations, growing the MAV landscape substantially. However, such growth has made it challenging to understand the evolution and classification of MAV designs based on their functions and features. We address this challenge by introducing a novel, bio-inspired taxonomic classification framework for MAVs. Our framework spans six hierarchical ranks, each containing a diverse set of categories that classify MAVs from distinct design perspectives. It enables a proper comparison of the MAV designs in the literature, revealing their key similarities and differences. It also helps to trace the evolution of MAVs over time, identifying research trends and potential gaps. Lastly, it offers insights into future MAV design trajectories, providing a complete and clear understanding of the MAV design landscape.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"182 ","pages":"Article 104802"},"PeriodicalIF":4.3,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142315534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"GSC: A graph-based skill composition framework for robot learning"
Qiangxing Tian, Shanshan Zhang, Donglin Wang, Jinxin Liu, Shuyu Yang
Robotics and Autonomous Systems, vol. 182, Article 104787. DOI: 10.1016/j.robot.2024.104787. Published 2024-09-05.
Abstract: Humans excel at performing a wide range of sophisticated tasks by leveraging skills acquired from prior experience. This characteristic is especially important for robotics empowered by deep reinforcement learning, as learning every skill from scratch is time-consuming and not always feasible. With prior skills incorporated, skill composition aims to accelerate learning on new robotic tasks. Previous works have given insight into combining pre-trained, task-agnostic skills, but they transform skills into a fixed-order representation, which captures potentially complex skill relations poorly. In this paper, we propose GSC, a novel Graph-based framework for Skill Composition. To learn rich structural information, a carefully designed skill graph is constructed in which skill representations are the nodes and skill relations are the edges. Furthermore, to allow efficient training on large-scale skill sets, a transformer-style graph updating method is employed to achieve comprehensive information aggregation. Our simulation experiments indicate that GSC outperforms state-of-the-art methods on various challenging tasks, and we successfully apply the technique to a navigation task on a real quadruped robot. The project homepage can be found at Graph Skill Composition.
"DewROS2: A platform for informed Dew Robotics in ROS"
Giovanni Stanco, Alessio Botta, Luigi Gallo, Giorgio Ventre
Robotics and Autonomous Systems, vol. 182, Article 104800. DOI: 10.1016/j.robot.2024.104800. Published 2024-09-05.
Abstract: With the shift from Cloud to Fog and Dew Robotics, much of the research community's attention has been devoted to task offloading. Effective and efficient resource monitoring, however, is necessary for such offloading and is also fundamental for other important safety and security tasks. Despite this, robot monitoring has received little attention in general, and in particular for the Robot Operating System (ROS), the most widely used framework in robotics. This paper presents DewROS2, a platform for Dew Robotics that comprises entities for monitoring the system status and sharing it with interested applications. We present the design and implementation of the platform together with the monitoring entities created. DewROS2 has been deployed on different real devices, including an unmanned aerial vehicle and an industrial router, to move from theory to practice and to analyze the impact of monitoring on robot resources. It has also been tested in a search-and-rescue use case in which robots collect and transmit video to spot signs of humans in trouble. Results in controlled and uncontrolled conditions show that the monitoring nodes do not significantly affect performance while providing important, measurable benefits to the applications: accurate monitoring of robot resources, for example, allows the search-and-rescue application to almost double its network utilization and therefore to collect video at a much higher resolution.
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S0921889024001842/pdfft?md5=5c297fef383277cab8dfb1d34ab6da82&pid=1-s2.0-S0921889024001842-main.pdf
"Robust quadruped jumping via deep reinforcement learning"
Guillaume Bellegarda, Chuong Nguyen, Quan Nguyen
Robotics and Autonomous Systems, vol. 182, Article 104799. DOI: 10.1016/j.robot.2024.104799. Published 2024-09-04.
Abstract: In this paper, we consider a general task of jumping varying distances and heights for a quadrupedal robot in noisy environments, such as off of uneven terrain and with variable robot dynamics parameters. To accurately jump in such conditions, we propose a framework using deep reinforcement learning that leverages and augments the complex solution of nonlinear trajectory optimization for quadrupedal jumping. While the standalone optimization limits jumping to take-off from flat ground and requires accurate assumptions of robot dynamics, our proposed approach improves the robustness to allow jumping off of significantly uneven terrain with variable robot dynamical parameters and environmental conditions. Compared with walking and running, the realization of aggressive jumping on hardware necessitates accounting for the motors' torque-speed relationship as well as the robot's total power limits. By incorporating these constraints into our learning framework, we successfully deploy our policy sim-to-real without further tuning, fully exploiting the available onboard power supply and motors. We demonstrate robustness to environment noise of foot disturbances of up to 6 cm in height, or 33% of the robot's nominal standing height, while jumping 2x the body length in distance.
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S0921889024001830/pdfft?md5=120dc8f6d5e39f92c0fe9b96f499ae52&pid=1-s2.0-S0921889024001830-main.pdf
{"title":"Multi-objective QoS optimization in swarm robotics","authors":"Neda Mazloomi , Zohreh Zandinejad , Arash Zaretalab , Majid Gholipour","doi":"10.1016/j.robot.2024.104796","DOIUrl":"10.1016/j.robot.2024.104796","url":null,"abstract":"<div><p>The “Internet of Robotic Things” (IoRT) is a concept that connects sensors and robotic objects. One of the practical applications of IoRT is swarm robotics, where multiple robots collaborate in a shared workspace to accomplish assigned tasks that may be challenging or impossible for a single robot to conquer. Swarm robots are particularly useful in critical situations, such as post-earthquake scenarios, where they can locate survivors and provide assistance in areas inaccessible to humans. In these life-saving situations, reliable and prompt communication among swarm robots is of utmost importance. To address the need for highly dependable and low-latency communication in swarm robotics, this research introduces a novel hybrid approach called Multi-objective QoS optimization based on Support vector regression and Genetic algorithm (MQSG). The MQSG method consists of two main phases: Parameter Relationship Identification and Parameter Optimization. In the Parameter Relationship Identification phase, the relationship between network inputs (Packet inter-arrival time, Packet size, Transmission power, Distance between sender and receiver) and outputs (quality of service (QoS) parameters) is established using support vector regression. In the parameter optimization phase, a multi-objective function is created based on the obtained relationships from the Parameter Relationship Identification phase. By solving this multi-objective function, optimal values for each QoS parameter are determined, leading to enhanced network performance. Simulation results demonstrate that the MQSG method outperforms other similar algorithms in terms of transmission latency, packet delivery rate, and the number of retransmitted packets.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"182 ","pages":"Article 104796"},"PeriodicalIF":4.3,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142157947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}