{"title":"When and where to step: Terrain-aware real-time footstep location and timing optimization for bipedal robots","authors":"Ke Wang , Zhaoyang Jacopo Hu , Peter Tisnikar , Oskar Helander , Digby Chappell , Petar Kormushev","doi":"10.1016/j.robot.2024.104742","DOIUrl":"https://doi.org/10.1016/j.robot.2024.104742","url":null,"abstract":"<div><p>Online footstep planning is essential for bipedal walking robots, allowing them to walk in the presence of disturbances and sensory noise. Most of the literature on the topic has focused on optimizing footstep placement while keeping step timing constant. In this work, we introduce a footstep planner capable of optimizing footstep placement and step timing online. The proposed planner, consisting of an Interior Point Optimizer (IPOPT) and an optimizer based on the Augmented Lagrangian (AL) method with analytical gradient descent, solves the full dynamics of the Linear Inverted Pendulum (LIP) model in real time to optimize footstep location as well as step timing at a rate of 200 Hz. We show that such asynchronous real-time optimization with the AL method (ARTO-AL) provides the required robustness and speed for successful online footstep planning. Furthermore, ARTO-AL can be extended to plan footsteps in 3D, allowing terrain-aware footstep planning on uneven terrains. Compared to an algorithm with no footstep time adaptation, our proposed ARTO-AL demonstrates increased stability in simulated walking experiments, as it can resist pushes of up to 120 N on flat ground and up to 100 N on a <span><math><mrow><mn>10</mn><mo>°</mo></mrow></math></span> ramp. Videos<span><sup>2</sup></span> and open-source code<span><sup>3</sup></span> are released.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S092188902400126X/pdfft?md5=599882b40704d445bb509be303dd3163&pid=1-s2.0-S092188902400126X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141434213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
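The LIP model at the heart of ARTO-AL has a closed-form solution, which is what makes 200 Hz re-planning feasible. A minimal sketch of that analytical propagation and the associated capture point follows; the function names, the pendulum height `z_c`, and the use of the capture point as a placement heuristic are illustrative assumptions, not the paper's released code:

```python
import math

def lip_state(x0, v0, t, z_c=0.8, g=9.81):
    """Propagate the 1-D Linear Inverted Pendulum analytically.

    omega = sqrt(g / z_c) is the pendulum's natural frequency; the
    closed form avoids numerical integration inside the optimizer.
    """
    w = math.sqrt(g / z_c)
    x = x0 * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
    v = x0 * w * math.sinh(w * t) + v0 * math.cosh(w * t)
    return x, v

def capture_point(x, v, z_c=0.8, g=9.81):
    """Instantaneous capture point: the foot location at which the
    pendulum would come to rest, a common footstep-placement target."""
    w = math.sqrt(g / z_c)
    return x + v / w
```

Because both the state and the capture point are cheap closed-form expressions, a planner can evaluate gradients with respect to both step location and step time at every control tick.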
{"title":"UAV path planning algorithm based on Deep Q-Learning to search for a floating lost target in the ocean","authors":"Mehrez Boulares, Afef Fehri, Mohamed Jemni","doi":"10.1016/j.robot.2024.104730","DOIUrl":"10.1016/j.robot.2024.104730","url":null,"abstract":"<div><p>In real-world applications, search and rescue missions on the ocean surface remain a complex task due to the large-scale search area and the forces of the ocean currents, which spread lost targets and debris in unpredictable ways. In this work, we present a path planning approach to search for a lost target on the ocean surface using a swarm of UAVs. A combination of the GlobCurrent dataset and a Lagrangian simulator is used to determine where particles are moved by the ocean current forces, while a Deep Q-learning algorithm is applied to learn from their dynamics. The evaluation results of the trained models show that our search strategy is effective and efficient. Over a total search area of 453,422 km<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span> (Red Sea zone), we show that our strategy achieves a search success rate of 98.61%, a maximum search time to detection of 15 days, and an average search time to detection of almost 15 h.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141399395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
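The Deep Q-learning component learns a value for each state–action pair from the simulated particle dynamics. The tabular temporal-difference rule that a deep Q-network approximates with a neural network can be sketched as follows (a toy illustration of the learning rule, not the authors' model or state space):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One temporal-difference backup of the Q-learning rule:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    Q maps (state, action) pairs to values; unseen pairs default to 0.
    """
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q
```

In the paper's setting, the reward would plausibly encode target detection, and the state would encode the UAV's position over the drifting particle distribution; both are abstracted away here.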
{"title":"H-SLAM: Hybrid direct–indirect visual SLAM","authors":"Georges Younes , Douaa Khalil , John Zelek , Daniel Asmar","doi":"10.1016/j.robot.2024.104729","DOIUrl":"https://doi.org/10.1016/j.robot.2024.104729","url":null,"abstract":"<div><p>The recent success of hybrid methods in monocular odometry has led to many attempts to generalize the performance gains to hybrid monocular SLAM. However, most attempts fall short in several respects, with the most prominent issue being the need for two different map representations (local and global maps), with each requiring different, computationally expensive, and often redundant processes to maintain. Moreover, these maps tend to drift with respect to each other, resulting in contradicting pose and scene estimates, and leading to catastrophic failure. In this paper, we propose a novel approach that makes use of descriptor sharing to generate a single inverse depth scene representation. This representation can be used locally, queried globally to perform loop closure, and has the ability to re-activate previously observed map points after redundant points are marginalized from the local map, eliminating the need for separate map maintenance processes. The maps generated by our method exhibit no drift between each other, and can be computed at a fraction of the computational cost and memory footprint required by other monocular SLAM systems. Despite the reduced resource requirements, the proposed approach maintains its robustness and accuracy, delivering performance comparable to state-of-the-art SLAM methods (<em>e.g.</em>, LDSO, ORB-SLAM3) on the majority of sequences from well-known datasets like EuRoC, KITTI, and TUM VI. The source code is available at: <span>https://github.com/AUBVRL/fslam_ros_docker</span>.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141313653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
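An inverse depth scene representation stores each map point as its pixel location in a host frame plus a single inverse-depth value, which keeps uncertainty well-behaved for distant points. Back-projecting such a point is a one-liner; this sketch is illustrative (the intrinsics `K` and the function name are assumptions, not H-SLAM's API):

```python
import numpy as np

def backproject_inverse_depth(u, v, inv_depth, K):
    """Recover a 3-D point in the camera frame from pixel (u, v)
    and its inverse depth, given the camera intrinsics matrix K."""
    d = 1.0 / inv_depth                       # metric depth
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized ray
    return d * ray
```

Because the same stored descriptor can be matched either photometrically (direct, local tracking) or by descriptor lookup (indirect, loop closure), one such point list can serve both roles, which is the paper's key economy.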
{"title":"Corrigendum to “ICACIA: An Intelligent Context-Aware framework for COBOT in defense industry using ontological and deep learning models” [Robotics and Autonomous Systems Volume 157, November 2022, 104234]","authors":"Arodh Lal Karn , Sudhakar Sengan , Ketan Kotecha , Irina V Pustokhina , Denis A Pustokhin , V Subramaniyaswamy , Dharam Buddhi","doi":"10.1016/j.robot.2024.104726","DOIUrl":"https://doi.org/10.1016/j.robot.2024.104726","url":null,"abstract":"","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0921889024001106/pdfft?md5=2ac8751b4f87547e5795f759d0dd0b6b&pid=1-s2.0-S0921889024001106-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141244851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning-based methods for adaptive informative path planning","authors":"Marija Popović , Joshua Ott , Julius Rückin , Mykel J. Kochenderfer","doi":"10.1016/j.robot.2024.104727","DOIUrl":"10.1016/j.robot.2024.104727","url":null,"abstract":"<div><p>Adaptive informative path planning (AIPP) is important to many robotics applications, enabling mobile robots to efficiently collect useful data about initially unknown environments. In addition, learning-based methods are increasingly used in robotics to enhance adaptability, versatility, and robustness across diverse and complex tasks. Our survey explores research on applying robotic learning to AIPP, bridging the gap between these two research fields. We begin by providing a unified mathematical problem definition for general AIPP problems. Next, we establish two complementary taxonomies of current work from the perspectives of (i) learning algorithms and (ii) robotic applications. We explore synergies, recent trends, and highlight the benefits of learning-based methods in AIPP frameworks. Finally, we discuss key challenges and promising future directions to enable more generally applicable and robust robotic data-gathering systems through learning. We provide a comprehensive catalog of papers reviewed in our survey, including publicly available repositories, to facilitate future studies in the field.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0921889024001118/pdfft?md5=28de4de4b6cc186bd0057d379cd895ba&pid=1-s2.0-S0921889024001118-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141279580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
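The AIPP objective the survey formalizes — maximize gathered information subject to a travel budget — admits a simple myopic baseline that learning-based planners are often compared against. A sketch under assumed `info_gain` and `cost` callables (this greedy baseline is illustrative; it is not a method from the survey's taxonomy):

```python
def greedy_aipp(start, candidates, info_gain, cost, budget):
    """Myopic baseline for the generic AIPP objective: repeatedly
    visit the candidate with the best information gain per unit
    travel cost until the travel budget is exhausted."""
    path, pos, remaining = [start], start, budget
    pool = list(candidates)
    while pool:
        best = max(pool, key=lambda c: info_gain(c) / max(cost(pos, c), 1e-9))
        if cost(pos, best) > remaining:
            break
        remaining -= cost(pos, best)
        pos = best
        path.append(best)
        pool.remove(best)
    return path
```

The "adaptive" part of AIPP is exactly what this baseline lacks: `info_gain` should be re-estimated from each new measurement, which is where learned policies and models come in.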
{"title":"Suppressing violent sloshing flow in food serving robots","authors":"Jinsuk Choi , Wookyong Kwon , Kwanwoong Yoon , Seongwon Yoon , Young Sam Lee , Soo Jeon , Soohee Han","doi":"10.1016/j.robot.2024.104728","DOIUrl":"https://doi.org/10.1016/j.robot.2024.104728","url":null,"abstract":"<div><p>This article presents the self-balancing slosh-free control (SBSFC) scheme, a notable advancement for stable navigation in food-serving robots. The uniqueness of SBSFC is that it does not require direct modeling of slosh dynamics. Utilizing just two inertial measurement units (IMUs), the proposed scheme offers an online solution, obviating the need for complex dynamics or high-cost supplementary systems. Central to this work is the design of a control strategy favorable for sloshing suppression, achieved through feedforward reference shaping and disturbance compensation. This means the SBSFC indirectly alleviates and compensates for sloshing effects, rather than directly controlling them as a state variable by relying on pixel-based measurements of sloshing. Key contributions include rapid slosh damping via reference shaping, robust posture stabilization through optimal control, and enhanced disturbance handling with a disturbance observer. These strategies synergistically ensure immediate vibration reduction and long-term stability under real-world conditions. This study is expected to lead to a significant leap forward in commercial food-serving robotics.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141286342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
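Feedforward reference shaping for slosh suppression is commonly realized with an input shaper; the standard zero-vibration (ZV) shaper below is one such scheme, shown purely for illustration — the paper's exact shaping law may differ:

```python
import math

def zv_shaper(wn, zeta):
    """Zero-Vibration input shaper: two impulses whose convolution
    with any reference cancels residual oscillation of a mode with
    natural frequency wn (rad/s) and damping ratio zeta.

    Returns [(amplitude, time), ...]; amplitudes sum to 1.
    """
    wd = wn * math.sqrt(1.0 - zeta**2)                      # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    A1, A2 = 1.0 / (1.0 + K), K / (1.0 + K)                  # impulse amplitudes
    t2 = math.pi / wd                                        # half damped period
    return [(A1, 0.0), (A2, t2)]
```

For a liquid in a cup, `wn` would come from the dominant sloshing mode; since SBSFC avoids modeling slosh dynamics directly, it presumably obtains the equivalent effect from its IMU-based feedback rather than from an identified `wn`.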
{"title":"Enhancing human–robot collaborative transportation through obstacle-aware vibrotactile warning and virtual fixtures","authors":"Doganay Sirintuna , Theodora Kastritsi , Idil Ozdamar , Juan M. Gandarias , Arash Ajoudani","doi":"10.1016/j.robot.2024.104725","DOIUrl":"10.1016/j.robot.2024.104725","url":null,"abstract":"<div><p>Transporting large and heavy objects can benefit from Human–Robot Collaboration (HRC), increasing the contribution of robots to our daily tasks and addressing challenges arising from labor shortages. This strategy typically positions the human collaborator as the leader, with the robot assuming the follower role. However, when transporting large objects, the operator's situational awareness can be compromised as the objects may occlude different parts of the environment, weakening the human leader's decision-making capacity and leading to failure due to collision. This paper proposes a situational awareness framework for collaborative transportation to face this challenge. The framework integrates a multi-modal haptic-based Obstacle Feedback Module with two units. The first unit consists of a warning module that alerts the operator through a haptic belt with four vibrotactile devices that provide feedback about the location and proximity of the obstacles. The second unit implements virtual fixtures as hard constraints for mobility. The warning feedback and the virtual fixtures act online based on the information given by two Lidars mounted on a mobile manipulator to detect the obstacles in the surroundings. By enhancing the operator's awareness of the environment, the proposed module improves the safety of the human–robot team in collaborative transportation scenarios by preventing collisions. Experiments with 16 non-expert subjects across four feedback modalities and four scenarios provide an objective evaluation based on quantitative metrics, together with subjective evaluations based on user experience. The results reveal the strengths and weaknesses of the implemented feedback modalities while providing solid evidence of the operator's increased situational awareness when the two haptic units are employed.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0921889024001088/pdfft?md5=e83bacf7a309029949012e8f8a6e240a&pid=1-s2.0-S0921889024001088-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141140168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
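A vibrotactile warning belt like the one described typically maps obstacle proximity, per direction, to a vibration strength. A minimal sketch of such a mapping with assumed warning and minimum radii (`d_warn`, `d_min` are illustrative defaults, not the paper's values):

```python
def vibration_intensity(distance, d_warn=2.0, d_min=0.3):
    """Map an obstacle distance (m) to a normalized vibration
    intensity: 0 beyond the warning radius, ramping linearly to 1
    at (or inside) the minimum safe distance."""
    if distance >= d_warn:
        return 0.0
    if distance <= d_min:
        return 1.0
    return (d_warn - distance) / (d_warn - d_min)
```

With four belt-mounted motors, the same function would be evaluated on the nearest Lidar return in each of four angular sectors, driving the motor for that sector.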
{"title":"Long-term navigation for autonomous robots based on spatio-temporal map prediction","authors":"Yanbo Wang, Yaxian Fan, Jingchuan Wang, Weidong Chen","doi":"10.1016/j.robot.2024.104724","DOIUrl":"10.1016/j.robot.2024.104724","url":null,"abstract":"<div><p>The robotics community has witnessed a growing demand for long-term navigation of autonomous robots in diverse environments, including factories, homes, offices, and public places. The core challenge in long-term navigation for autonomous robots lies in effectively adapting to varying degrees of dynamism in the environment. In this paper, we propose a long-term navigation method for autonomous robots based on spatio-temporal map prediction. A time series model is introduced to learn the changing patterns of different environmental structures or objects on multiple time scales from historical maps and to forecast future maps for long-term navigation. Then, an improved global path planning algorithm is performed based on the time-variant predicted cost maps. During navigation, the current observations are fused with the predicted map through a modified Bayesian filter to reduce the impact of prediction errors, and the updated map is stored for future predictions. We run simulations and conduct several weeks of experiments in multiple scenarios. The results show that our algorithm is effective and robust for long-term navigation in dynamic environments.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141142091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
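Fusing a current observation with a predicted occupancy map via a Bayesian filter is, per map cell, usually done in log-odds form. The standard update below is a sketch of that idea only — the paper uses a modified filter whose details are not given in the abstract:

```python
import math

def fuse_cell(pred_p, obs_p):
    """Fuse a predicted occupancy probability for one map cell with a
    new observation via the standard log-odds Bayesian update.

    A 0.5 prior contributes zero log-odds, so an uninformative
    prediction leaves the observation unchanged.
    """
    lo = math.log(pred_p / (1.0 - pred_p)) + math.log(obs_p / (1.0 - obs_p))
    return 1.0 / (1.0 + math.exp(-lo))
```

Working in log-odds makes repeated fusion a simple addition per cell, which is why occupancy-grid pipelines favor it.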
{"title":"“Reinforcement learning particle swarm optimization based trajectory planning of autonomous ground vehicle using 2D LiDAR point cloud”","authors":"Ambuj, Harsh Nagar, Ayan Paul, Rajendra Machavaram, Peeyush Soni","doi":"10.1016/j.robot.2024.104723","DOIUrl":"10.1016/j.robot.2024.104723","url":null,"abstract":"<div><p>The advent of autonomous mobile robots has spurred research into efficient trajectory planning methods, particularly in dynamic environments with varied obstacles. This study focuses on optimizing trajectory planning for an Autonomous Ground Vehicle (AGV) using a novel Reinforcement Learning Particle Swarm Optimization (RLPSO) algorithm. Real-time mobile robot localization and map generation are achieved through the Hector-SLAM algorithm within the Robot Operating System (ROS) platform, resulting in the creation of a binary occupancy grid. The present research thoroughly investigates the performance of the RLPSO algorithm, juxtaposed against five established Particle Swarm Optimization (PSO) variants, within the context of four distinct physical environments. The experimental design is tailored to emulate real-world scenarios, encompassing a spectrum of challenges posed by static and dynamic obstacles. The AGV, equipped with LiDAR sensors, navigates through diverse environments characterized by obstacles of different geometries. The RLPSO algorithm dynamically adapts its strategies based on feedback, enabling adaptable trajectory planning while effectively avoiding obstacles. Numerical results obtained from extensive experimentation highlight the algorithm's efficacy. The navigational model's validation is achieved within a MATLAB 2D virtual environment, employing 2D Lidar mapping point data. Transitioning to physical experiments with an AGV, RLPSO continues to demonstrate superior performance, showcasing its potential for real-world applications in autonomous navigation. On average, RLPSO achieves a 10–15% reduction in path distances and traversal times compared to the next best-performing PSO variant across diverse scenarios. The adaptive nature of RLPSO, informed by feedback from the environment, distinguishes it as a promising solution for autonomous navigation in dynamic settings, with implications for practical implementation in real-world scenarios.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141143407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
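RLPSO builds on the canonical PSO velocity update, with the reinforcement-learning component adapting its coefficients online. The base rule can be sketched as follows (the RL adaptation of `w`, `c1`, `c2` is omitted, and the coefficient values are illustrative, not the paper's):

```python
import random

def pso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO velocity update for one particle:
    inertia term + cognitive pull toward the particle's own best
    position + social pull toward the swarm's best position."""
    r1, r2 = random.random(), random.random()
    return [w * vi + c1 * r1 * (p - xi) + c2 * r2 * (g - xi)
            for vi, xi, p, g in zip(v, x, pbest, gbest)]
```

In trajectory planning, each particle encodes a candidate path (e.g., a sequence of waypoints over the occupancy grid), and the fitness combines path length with obstacle clearance.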
{"title":"A distributed multi-robot task allocation method for time-constrained dynamic collective transport","authors":"Xiaotao Shan, Yichao Jin, Marius Jurt, Peizheng Li","doi":"10.1016/j.robot.2024.104722","DOIUrl":"10.1016/j.robot.2024.104722","url":null,"abstract":"<div><p>Recent studies in warehouse logistics have highlighted the importance of multi-robot collaboration in collective transport scenarios, where multiple robots work together to lift and transport bulky and heavy items. However, limited attention has been given to task allocation in such scenarios, particularly when dealing with continuously arriving tasks and time constraints. In this paper, we propose a decentralized auction-based method to address this challenge. Our approach involves robots predicting the task choices of their peers, estimating the values and partnerships associated with multi-robot tasks, and ultimately determining their task choices and collaboration partners through an auction process. A unique “suggestion” mechanism is introduced to the auction process to mitigate the decision bias caused by the leader–follower mode inherent in typical auction-based methods. Additionally, an available time update mechanism is designed to prevent the accumulation of schedule deviations during the robots' operation process. Through extensive simulations, we demonstrate the superior performance and computational efficiency of the proposed algorithm compared to both the Agent-Based Sequential Greedy Algorithm and the Consensus-Based Time Table Algorithm, in both dynamic and static scenarios.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141142104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
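The core of a decentralized auction is that every robot bids on every open task and the highest bid wins. One round can be sketched as follows — illustrative only; the paper's "suggestion" mechanism, multi-robot partnerships, and available-time updates are not modeled here:

```python
def auction_round(robots, tasks, value):
    """One round of a simple task auction: each robot bids its
    value(robot, task) for every open task; the highest bidder
    wins each task. Returns {task: winning_robot}."""
    bids = {}
    for r in robots:
        for t in tasks:
            b = value(r, t)
            if t not in bids or b > bids[t][0]:
                bids[t] = (b, r)
    return {t: r for t, (b, r) in bids.items()}
```

Note that this naive round can assign several tasks to the same robot and cannot form multi-robot teams for a single heavy item; addressing exactly those gaps, under time constraints, is what the paper's method is designed for.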