Autonomous Robots | Pub Date: 2023-09-23 | DOI: 10.1007/s10514-023-10137-1
Tara Boroushaki, Laura Dodds, Nazish Naeem, Fadel Adib
Title: FuseBot: mechanical search of rigid and deformable objects via multi-modal perception
Abstract: Mechanical search is a robotic problem in which a robot must retrieve a target item that is partially or fully occluded from its camera. State-of-the-art approaches to mechanical search either require an expensive search process to find the target item, or require the item to carry a radio-frequency identification (RFID) tag, which benefits only tagged items in the environment. We present FuseBot, the first robotic system for RF-visual mechanical search, enabling efficient retrieval of both RF-tagged and untagged items from a pile. Rather than requiring every target item in a pile to be RF-tagged, FuseBot leverages the mere existence of an RF-tagged item in the pile to benefit both tagged and untagged items. Our design introduces two key innovations. The first is RF-Visual Mapping, a technique that identifies and locates RF-tagged items in a pile and uses this information to construct an RF-visual occupancy distribution map. The second is RF-Visual Extraction, a policy formulated as an optimization problem that minimizes the number of actions required to extract the target object by accounting for the probabilistic occupancy distribution, the expected grasp quality, and the expected information gain from future actions. We built a real-time end-to-end prototype of our system on a UR5e robotic arm with in-hand vision and RF perception modules. We conducted over 200 real-world experimental trials to evaluate FuseBot and compare its performance to a state-of-the-art vision-based system named X-Ray (Danielczuk et al., IROS 2020). Our experimental results demonstrate that FuseBot outperforms X-Ray's efficiency by more than 40% in terms of the number of actions required for successful mechanical search. Furthermore, compared to X-Ray's success rate of 84%, FuseBot achieves a success rate of 95% in retrieving untagged items, demonstrating for the first time that the benefits of RF perception extend beyond tagged objects in the mechanical search problem.
Autonomous Robots 47(8): 1137–1154. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10137-1.pdf
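The RF-Visual Extraction policy described above can be illustrated with a minimal sketch: each candidate grasp is scored by the probability that it contains the target, its expected grasp quality, and its expected information gain, and the best-scoring grasp is executed. The field names and weights below are hypothetical, not the paper's formulation.

```python
# Hypothetical sketch of one RF-Visual extraction step. Weights and the
# candidate representation are illustrative only.

def score_grasp(p_target, grasp_quality, info_gain, w=(1.0, 0.5, 0.25)):
    """Higher is better: likely target, easy grasp, informative outcome."""
    return w[0] * p_target + w[1] * grasp_quality + w[2] * info_gain

def select_grasp(candidates):
    """candidates: list of dicts with keys p_target, quality, info_gain."""
    return max(candidates,
               key=lambda c: score_grasp(c["p_target"], c["quality"],
                                         c["info_gain"]))

candidates = [
    {"id": "A", "p_target": 0.10, "quality": 0.9, "info_gain": 0.2},
    {"id": "B", "p_target": 0.60, "quality": 0.7, "info_gain": 0.1},
]
best = select_grasp(candidates)  # picks "B": high target probability dominates
```

In the paper the terms come from the RF-visual occupancy distribution map rather than being given as constants, and the optimization reasons over sequences of actions, not a single step.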
Autonomous Robots | Pub Date: 2023-09-22 | DOI: 10.1007/s10514-023-10138-0
Marco Leonardi, Annette Stahl, Edmund Førland Brekke, Martin Ludvigsen
Title: UVS: underwater visual SLAM—a robust monocular visual SLAM system for lifelong underwater operations
Abstract: In this paper, a visual simultaneous localization and mapping (visual SLAM) system called underwater visual SLAM (UVS) is presented, tailored specifically for camera-only navigation in natural underwater environments. The UVS system is optimized for precision and robustness, as well as for lifelong operation. We build upon ORB-SLAM (Oriented FAST and Rotated BRIEF SLAM) and improve accuracy by performing an exact search in descriptor space during triangulation, and robustness by using a unified initialization method and a motion model. In addition, we present a scale-agnostic station-keeping detection, which optimizes the map and poses during station-keeping, and a pruning strategy that accounts for a point's age and its distance to the active keyframe. An exhaustive evaluation using a total of 38 in-air and underwater sequences is presented.
Autonomous Robots 47(8): 1367–1385. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10138-0.pdf
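The "exact search in descriptor space" mentioned above contrasts with the approximate (e.g., vocabulary-tree) lookups common in ORB-SLAM pipelines. A minimal illustration, not the authors' code: ORB-style binary descriptors are compared by Hamming distance, and the exact nearest neighbour is found by brute force.

```python
# Illustrative sketch: exact nearest-neighbour search over binary descriptors
# (packed here as Python ints) using Hamming distance.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as integers."""
    return bin(a ^ b).count("1")

def exact_match(query: int, descriptors):
    """Return (index, distance) of the exact nearest descriptor to `query`."""
    dists = [hamming(query, d) for d in descriptors]
    i = min(range(len(dists)), key=dists.__getitem__)
    return i, dists[i]

idx, dist = exact_match(0b0110, [0b1111, 0b1000, 0b0111])  # -> (2, 1)
```

Real ORB descriptors are 256-bit; the exhaustive scan trades speed for guaranteed-best matches, which matters during triangulation where a wrong match corrupts the map.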
Autonomous Robots | Pub Date: 2023-09-13 | DOI: 10.1007/s10514-023-10126-4
Christopher Heintz, Sean C. C. Bailey, Jesse B. Hoagg
Title: Formation control for autonomous fixed-wing air vehicles with strict speed constraints
Abstract: We present a formation-control algorithm for autonomous fixed-wing air vehicles. The desired inter-vehicle positions are time-varying, and we assume that at least one vehicle has access to a measurement of its position relative to the leader, which can be a physical or virtual member of the formation. Each vehicle is modeled with extended unicycle dynamics that include orientation kinematics on SO(3), speed dynamics, and strict constraints on speed (i.e., ground speed). The analytic result shows that the vehicles converge exponentially to the desired relative positions with respect to one another and the leader. We also show that each vehicle's speed satisfies the speed constraints. The formation algorithm is demonstrated in software-in-the-loop (SITL) simulations and in experiments with fixed-wing air vehicles. To implement the formation-control algorithm, each vehicle has middle-loop controllers that determine roll, pitch, and throttle commands from the outer-loop formation control. We present SITL simulations with four fixed-wing air vehicles that demonstrate formation control under different communication structures. Finally, we present formation-control experiments with up to three fixed-wing air vehicles.
Autonomous Robots 47(8): 1299–1323.
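The strict speed constraint above reflects a physical fact about fixed-wing aircraft: they can neither hover (below stall speed) nor exceed a maximum speed. A crude illustration of the constraint, assuming illustrative numbers and not the paper's constructive constraint-satisfying control law, is a saturation of the commanded ground speed:

```python
# Hedged sketch: clamp a formation controller's commanded ground speed into
# the feasible interval [v_min, v_max]. The paper proves constraint
# satisfaction by construction rather than by a hard clamp like this.

def saturate_speed(v_cmd: float, v_min: float, v_max: float) -> float:
    """Return the closest feasible speed to the commanded one."""
    return min(max(v_cmd, v_min), v_max)

# Example with hypothetical limits: 12 m/s stall margin, 25 m/s maximum.
v = saturate_speed(30.0, 12.0, 25.0)  # -> 25.0
```

A naive clamp like this can break convergence proofs, which is precisely why the paper treats the speed constraint inside the analysis rather than as a post-hoc saturation.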
Autonomous Robots | Pub Date: 2023-09-08 | DOI: 10.1007/s10514-023-10130-8
Charles Schaff, Audrey Sedal, Shiyao Ni, Matthew R. Walter
Title: Sim-to-real transfer of co-optimized soft robot crawlers
Abstract: This work provides a complete framework for the simulation, co-optimization, and sim-to-real transfer of the design and control of soft legged robots. Soft robots have "mechanical intelligence": the ability to passively exhibit behaviors that would otherwise be difficult to program. Exploiting this capacity requires considering the coupling between design and control, and co-optimization provides a way to reason over this coupling. Yet it is difficult to achieve simulations that are both accurate enough for sim-to-real transfer and fast enough for contemporary co-optimization algorithms. We describe a modularized model-order-reduction algorithm that improves simulation efficiency while preserving the accuracy required to learn effective soft robot designs and controllers. We propose a reinforcement-learning-based co-optimization framework that identifies several soft crawling robots that outperform an expert baseline with zero-shot sim-to-real transfer. We study the framework's generalization to new terrains and the efficacy of domain randomization as a means to improve sim-to-real transfer.
Autonomous Robots 47(8): 1195–1211.
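The design-control coupling noted above can be made concrete with a toy sketch (entirely hypothetical, not the paper's RL framework): because the best controller depends on the design, the outer loop must evaluate each candidate design under its own optimized controller rather than under a fixed one.

```python
# Toy co-optimization: an outer loop over designs, an inner loop over
# controllers, keeping the best (design, controller) pair. The coupled
# reward function is a stand-in for an expensive soft-body simulation.
import random

def reward(design, control):
    # Toy coupling: the best "gait" parameter depends on the design parameter.
    return -(design - 2.0) ** 2 - (control - design / 2.0) ** 2

def co_optimize(n_designs=50, n_controls=50, seed=0):
    rng = random.Random(seed)
    best = (float("-inf"), None, None)
    for _ in range(n_designs):
        d = rng.uniform(0.0, 4.0)
        for _ in range(n_controls):
            c = rng.uniform(0.0, 4.0)
            r = reward(d, c)
            if r > best[0]:
                best = (r, d, c)
    return best  # (best reward, design, control)
```

Random search stands in here for the paper's reinforcement-learning inner loop; the point is only the nested structure, which is what makes fast reduced-order simulation necessary.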
Autonomous Robots | Pub Date: 2023-09-01 | DOI: 10.1007/s10514-023-10128-2
Grzegorz Malczyk, Maximilian Brunner, Eugenio Cuniato, Marco Tognon, Roland Siegwart
Title: Multi-directional interaction force control with an aerial manipulator under external disturbances
Abstract: To improve the accuracy and robustness of interactive aerial robots, knowledge of the forces acting on the platform is of utmost importance. The robot should distinguish interaction forces from external disturbances, so that it can comply with the former and reject the latter. This is challenging because disturbances may be of different natures (physical contact, aerodynamics, modeling errors) and may be applied at different points of the robot. This work presents a new extended Kalman filter (EKF)-based estimator of both external disturbances and interaction forces. The estimator fuses information from the system's dynamic model and its state with wrench measurements from a force-torque sensor. This allows robust interaction control at the tool's tip even in the presence of external disturbance wrenches acting on the platform. We employ the filter estimates in a novel hybrid force/motion controller that tracks forces not only along the tool direction but from any platform orientation, without losing the stability of the pose controller. The proposed framework is extensively tested on an omnidirectional aerial manipulator (AM) performing push-and-slide operations and transitioning between different interaction surfaces while subject to external disturbances. The experiments are performed with the AM carrying two different tools, a rigid interaction stick and an actuated delta manipulator, showing the generality of the approach. Moreover, the estimation results are compared to a state-of-the-art momentum-based estimator, clearly showing the superiority of the EKF approach.
Autonomous Robots 47(8): 1325–1343. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10128-2.pdf
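The core filtering idea can be sketched in one dimension (a deliberate simplification; the paper's EKF is multivariate and uses the full platform dynamics): model the unknown external force as a slowly drifting random-walk state and fuse noisy force-torque measurements with the standard predict/update cycle.

```python
# Simplified 1-D Kalman filter for an external-force state. Process noise q
# sets how fast the force may drift; r is the F/T sensor noise. All values
# are illustrative.

class ForceKF:
    def __init__(self, q=0.01, r=0.5):
        self.x = 0.0   # estimated external force
        self.p = 1.0   # estimate variance
        self.q = q     # process noise variance
        self.r = r     # measurement noise variance

    def step(self, z):
        # Predict: random-walk model keeps x, inflates uncertainty.
        self.p += self.q
        # Update: fuse measurement z via the Kalman gain k.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = ForceKF()
for _ in range(100):
    kf.step(2.0)   # estimate converges toward the true 2.0 N force
```

In the actual system the same machinery is applied to full 6-D wrenches, and the disturbance and interaction components are separated by where and how they enter the dynamic model.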
Autonomous Robots | Pub Date: 2023-08-30 | DOI: 10.1007/s10514-023-10129-1
Yifan Zhou, Shubham Sonawani, Mariano Phielipp, Heni Ben Amor, Simon Stepputtis
Title: Learning modular language-conditioned robot policies through attention
Abstract: Training language-conditioned policies is typically time-consuming and resource-intensive. Additionally, the resulting controllers are tailored to the specific robot they were trained on, making it difficult to transfer them to other robots with different dynamics. To address these challenges, we propose a new approach called Hierarchical Modularity, which enables more efficient training and subsequent transfer of such policies across different types of robots. The approach incorporates Supervised Attention, which bridges the gap between modular and end-to-end learning by enabling the reuse of functional building blocks. In this contribution, we build upon our previous work, showcasing extended utility and improved performance by expanding the hierarchy to include new tasks and introducing an automated pipeline for synthesizing a large number of novel objects. We demonstrate the effectiveness of this approach through extensive simulated and real-world robot manipulation experiments.
Autonomous Robots 47(8): 1013–1033. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10129-1.pdf
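One way to read "Supervised Attention" (a hedged interpretation, not the paper's implementation) is that attention weights are not left to emerge end-to-end but are pushed, via an auxiliary loss, toward a target mask stating which inputs a sub-module should attend to. A pure-Python sketch of such a loss:

```python
# Illustrative supervision signal for attention: cross-entropy between the
# softmaxed attention scores and a normalized target mask.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_loss(scores, target_mask):
    """Cross-entropy pulling attention probabilities toward target_mask."""
    probs = softmax(scores)
    tgt_sum = sum(target_mask)
    return -sum(t / tgt_sum * math.log(p)
                for t, p in zip(target_mask, probs) if t > 0)

# Attending to the supervised input yields a much lower loss than not.
aligned = attention_loss([5.0, 0.0, 0.0], [1, 0, 0])
misaligned = attention_loss([0.0, 5.0, 0.0], [1, 0, 0])
```

Supervising attention this way is what keeps modules functionally identifiable, and identifiable modules are what make reuse across robots plausible.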
Autonomous Robots | Pub Date: 2023-08-29 | DOI: 10.1007/s10514-023-10133-5
Yan Ding, Xiaohan Zhang, Saeid Amiri, Nieqing Cao, Hao Yang, Andy Kaminski, Chad Esselink, Shiqi Zhang
Title: Integrating action knowledge and LLMs for task planning and situation handling in open worlds
Abstract: Task planning systems have been developed to help robots use human knowledge (about actions) to complete long-horizon tasks. Most of them were developed for "closed worlds," assuming the robot is provided with complete world knowledge. However, the real world is generally open, and robots frequently encounter unforeseen situations that can potentially break the planner's completeness. Could we leverage recent advances in pre-trained large language models (LLMs) to enable classical planning systems to deal with novel situations? This paper introduces a novel framework, called COWP, for open-world task planning and situation handling. COWP dynamically augments the robot's action knowledge, including the preconditions and effects of actions, with task-oriented commonsense knowledge. COWP embraces the openness of LLMs and is grounded to specific domains via action knowledge. For systematic evaluation, we collected a dataset of 1085 execution-time situations. Each situation corresponds to a state instance in which a robot is potentially unable to complete a task using a solution that normally works. Experimental results show that our approach outperforms competitive baselines from the literature in the success rate of service tasks. Additionally, we have demonstrated COWP on a mobile manipulator. Supplementary materials are available at https://cowplanning.github.io/
Autonomous Robots 47(8): 981–997.
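The augmentation loop described above can be sketched as follows. Everything here is hypothetical: the `llm` function is a stub standing in for a real LLM query, and the knowledge format is illustrative, not COWP's actual representation.

```python
# Hypothetical sketch of the COWP idea: when a plan step fails at execution
# time, query a language model (stubbed here) for a commonsense precondition
# and merge it into the robot's action knowledge.

def llm(prompt: str) -> str:
    # Stub standing in for a pre-trained LLM; returns a canned response.
    canned = {"pour(cup): failure 'cup is cracked'":
              "precondition: not cracked(cup)"}
    return canned.get(prompt, "")

def handle_situation(action_knowledge, action, failure):
    """Augment the failed action's preconditions with LLM commonsense."""
    suggestion = llm(f"{action}: failure {failure!r}")
    if suggestion.startswith("precondition: "):
        action_knowledge.setdefault(action, []).append(
            suggestion[len("precondition: "):])
    return action_knowledge

kb = handle_situation({}, "pour(cup)", "cup is cracked")
```

The key design point, grounding via action knowledge, shows up in the guard: only suggestions that parse into the domain's precondition format are admitted, which keeps the LLM's open-ended output from polluting the planner.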
Autonomous Robots | Pub Date: 2023-08-28 | DOI: 10.1007/s10514-023-10135-3
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg
Title: ProgPrompt: program generation for situated robot task planning using large language models
Abstract: Task planning can require defining myriad domain knowledge about the world in which a robot needs to act. To reduce that effort, large language models (LLMs) can be used to score potential next actions during task planning, or even to generate action sequences directly, given a natural-language instruction and no additional domain information. However, such methods either require enumerating all possible next steps for scoring, or they generate free-form text that may contain actions not executable on a given robot in its current context. We present a programmatic LLM prompt structure that enables plan generation across situated environments, robot capabilities, and tasks. Our key insight is to prompt the LLM with program-like specifications of the available actions and objects in an environment, as well as with example programs that can be executed. We make concrete recommendations about prompt structure and generation constraints through ablation experiments, demonstrate state-of-the-art success rates on VirtualHome household tasks, and deploy our method on a physical robot arm for tabletop tasks. Website and code: progprompt.github.io
Autonomous Robots 47(8): 999–1012. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10135-3.pdf
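The "program-like specification" idea can be illustrated by assembling such a prompt. The exact prompt text below is hypothetical; it only mirrors the described structure: available actions as imports, objects as a list, example task programs, and an open function header for the LLM to complete.

```python
# Illustrative ProgPrompt-style prompt builder (structure per the abstract;
# wording is hypothetical).

def build_prompt(actions, objects, examples, task):
    header = "\n".join(f"from actions import {a}" for a in actions)
    objs = f"objects = {objects!r}"
    demos = "\n\n".join(examples)
    # The LLM's completion of the final `def` line becomes the plan.
    return f"{header}\n{objs}\n\n{demos}\n\ndef {task}():"

prompt = build_prompt(
    actions=["grab", "putin"],
    objects=["salmon", "fridge"],
    examples=["def put_salmon_in_fridge():\n"
              "    grab('salmon')\n"
              "    putin('salmon', 'fridge')"],
    task="throw_away_lime",
)
```

Because the prompt enumerates only the actions and objects that actually exist in the environment, a completion that calls anything else is immediately detectable as invalid, which is the mechanism the abstract credits for avoiding infeasible free-form plans.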
Autonomous Robots | Pub Date: 2023-08-19 | DOI: 10.1007/s10514-023-10127-3
Álvaro Serra-Gómez, Hai Zhu, Bruno Brito, Wendelin Böhmer, Javier Alonso-Mora
Title: Learning scalable and efficient communication policies for multi-robot collision avoidance
Abstract: Decentralized multi-robot systems typically perform coordinated motion planning by constantly broadcasting their intentions to avoid collisions. However, the risk of collision between robots varies as they move, and communication may not always be needed. This paper presents an efficient communication method that addresses the problem of "when" and "with whom" to communicate in multi-robot collision avoidance scenarios. In this approach, each robot learns to reason about other robots' states and considers the risk of future collisions before asking for the trajectory plans of other robots. We introduce a new neural architecture for the learned communication policy that makes our method scalable. We evaluate and verify the proposed communication strategy in simulation with up to twelve quadrotors, and present results on the zero-shot generalization and robustness of the policy in different scenarios. We demonstrate that our policy, learned in a simulated environment, can be successfully transferred to real robots.
Autonomous Robots 47(8): 1275–1297. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10127-3.pdf
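The "when and with whom" decision can be grounded with a hand-coded stand-in for the learned policy (a sketch under the assumption of constant relative velocity; the paper replaces this rule with a neural network): predict each neighbour's closest approach and request trajectory plans only from those whose predicted separation falls below a safety threshold.

```python
# Hedged sketch of a communication trigger for collision avoidance.

def min_future_gap(p_rel, v_rel, horizon=2.0):
    """Minimum 2-D distance over [0, horizon] assuming constant relative
    velocity (closed-form closest-approach time, clamped to the horizon)."""
    px, py = p_rel
    vx, vy = v_rel
    vv = vx * vx + vy * vy
    t = 0.0 if vv == 0 else max(0.0, min(horizon, -(px * vx + py * vy) / vv))
    dx, dy = px + vx * t, py + vy * t
    return (dx * dx + dy * dy) ** 0.5

def who_to_ask(neighbours, threshold=1.0):
    """neighbours: {name: (p_rel, v_rel)}; returns names worth querying."""
    return [n for n, (p, v) in neighbours.items()
            if min_future_gap(p, v) < threshold]

neighbours = {
    "a": ((5.0, 0.0), (-3.0, 0.0)),  # approaching head-on: ask
    "b": ((5.0, 5.0), (0.0, 0.0)),   # far and static: stay silent
}
queries = who_to_ask(neighbours)  # -> ["a"]
```

A learned policy improves on this rule by accounting for uncertainty in the neighbours' intent rather than assuming straight-line motion, which is where the communication savings come from.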
Autonomous Robots | Pub Date: 2023-08-17 | DOI: 10.1007/s10514-023-10125-5
Alessandro Antonucci, Paolo Bevilacqua, Stefano Leonardi, Luigi Paolopoli, Daniele Fontanelli
Title: Humans as path-finders for mobile robots using teach-by-showing navigation
Abstract: One of the most important barriers to the widespread use of mobile robots in unstructured, human-populated, and possibly a priori unknown work environments is the ability to plan a safe path. In this paper, we propose to delegate this activity to a human operator who walks in front of the robot, marking the path to be followed with her or his footsteps. Implementing this approach requires a high degree of robustness in locating the specific person to be followed (the path-finder). We propose a three-phase approach to this goal: (1) identification and tracking of the person in image space; (2) sensor fusion between camera data and laser sensors; (3) point interpolation with continuous-curvature paths. The approach is described in the paper and extensively validated with experimental results.
Autonomous Robots 47(8): 1255–1273. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10125-5.pdf
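Phase 2 of the pipeline above, fusing camera and laser estimates of the path-finder's position, can be illustrated with the textbook inverse-variance weighting rule. This is the principle only; the paper's fusion is richer than a static weighted average.

```python
# Minimal camera/laser fusion sketch: combine two noisy 2-D position
# estimates by weighting each with the inverse of its variance.

def fuse(cam_xy, cam_var, laser_xy, laser_var):
    """Inverse-variance-weighted fusion of two (x, y) estimates."""
    w_c, w_l = 1.0 / cam_var, 1.0 / laser_var
    s = w_c + w_l
    return tuple((w_c * c + w_l * l) / s for c, l in zip(cam_xy, laser_xy))

# Equal confidence: the fused estimate is the midpoint.
mid = fuse((0.0, 0.0), 1.0, (2.0, 2.0), 1.0)       # -> (1.0, 1.0)
# Laser twice as confident: the fused estimate leans toward it.
lean = fuse((0.0, 0.0), 1.0, (3.0, 0.0), 0.5)      # -> (2.0, 0.0)
```

The fused position feeds phase 3, where the footstep points are interpolated into a continuous-curvature path the robot can actually track.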