SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding
H. Blum, M. Müller, A. Gawel, R. Siegwart, César Cadena
International Symposium of Robotics Research, 2022-06-21. DOI: 10.48550/arXiv.2206.10670
Abstract: In order to operate in human environments, a robot's semantic perception has to overcome open-world challenges such as novel objects and domain gaps. Autonomous deployment in such environments therefore requires robots to update their knowledge and learn without supervision. We investigate how a robot can autonomously discover novel semantic classes and improve accuracy on known classes while exploring an unknown environment. To this end, we develop a general framework for mapping and clustering that we then use to generate a self-supervised learning signal to update a semantic segmentation model. In particular, we show how clustering parameters can be optimized during deployment and that fusion of multiple observation modalities improves novel object discovery compared to prior work. Models, data, and implementations can be found at https://github.com/hermannsblum/scim

Efficiently Learning Single-Arm Fling Motions to Smooth Garments
Lawrence Yunliang Chen, Huang Huang, Ellen R. Novoseller, Daniel Seita, Jeffrey Ichnowski, Michael Laskey, Richard Cheng, T. Kollar, Ken Goldberg
International Symposium of Robotics Research, 2022-06-17. DOI: 10.48550/arXiv.2206.08921
Abstract: Recent work has shown that 2-arm "fling" motions can be effective for garment smoothing. We consider single-arm fling motions. Unlike 2-arm fling motions, which require little robot trajectory parameter tuning, single-arm fling motions are very sensitive to trajectory parameters. We consider a single 6-DOF robot arm that learns fling trajectories to achieve high garment coverage. Given a garment grasp point, the robot explores different parameterized fling trajectories in physical experiments. To improve learning efficiency, we propose a coarse-to-fine learning method that first uses a multi-armed bandit (MAB) framework to efficiently find a candidate fling action, which it then refines via a continuous optimization method. Further, we propose novel training- and execution-time stopping criteria based on fling outcome uncertainty; the training-time criterion increases data efficiency, while the execution-time criterion leverages repeated fling actions to increase performance. Compared to baselines, the proposed method significantly accelerates learning. Moreover, with prior experience on similar garments collected through self-supervision, the MAB learning time for a new garment is reduced by up to 87%. We evaluate on 36 real garments: towels, T-shirts, long-sleeve shirts, dresses, sweat pants, and jeans. Results suggest that using prior experience, a robot requires under 30 minutes to learn a fling action for a novel garment that achieves 60-94% coverage.

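The coarse-to-fine scheme this abstract describes (a discrete multi-armed bandit followed by continuous refinement) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the UCB1 bandit, the local search, and the simulated coverage function (a noisy stand-in for a physical fling trial) are all hypothetical.

```python
import math
import random

def simulated_coverage(speed):
    """Hypothetical stand-in for a physical fling trial: noisy garment
    coverage that peaks at fling speed 0.7."""
    return max(0.0, 1.0 - abs(speed - 0.7)) + random.gauss(0, 0.02)

def ucb1_best_arm(arms, trials):
    """Coarse phase: UCB1 bandit over a discrete grid of fling parameters."""
    counts, sums = [0] * len(arms), [0.0] * len(arms)
    for t in range(1, trials + 1):
        if t <= len(arms):              # play every arm once first
            i = t - 1
        else:                           # then pick by upper confidence bound
            i = max(range(len(arms)),
                    key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t) / counts[a]))
        sums[i] += simulated_coverage(arms[i])
        counts[i] += 1
    return max(range(len(arms)), key=lambda a: sums[a] / counts[a])

def refine(center, step=0.05, iters=20):
    """Fine phase: greedy local search around the bandit's best arm."""
    best, best_val = center, simulated_coverage(center)
    for _ in range(iters):
        cand = best + random.uniform(-step, step)
        val = simulated_coverage(cand)
        if val > best_val:
            best, best_val = cand, val
    return best

random.seed(0)
arms = [0.2, 0.4, 0.6, 0.8, 1.0]
coarse = arms[ucb1_best_arm(arms, trials=60)]
fine = refine(coarse)
print(coarse, round(fine, 2))
```

The bandit concentrates trials on the promising grid cells, after which the continuous phase nudges the chosen parameter toward the (noisy) optimum.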
Multi-Object Grasping in the Plane
Wisdom C. Agboh, Jeffrey Ichnowski, Ken Goldberg, M. Dogar
International Symposium of Robotics Research, 2022-06-01. DOI: 10.48550/arXiv.2206.00229
Abstract: We consider a novel problem in which multiple rigid convex polygonal objects rest in randomly placed positions and orientations on a planar surface visible from an overhead camera. The objective is to efficiently grasp and transport all objects into a bin using multi-object push-grasps, in which multiple objects are pushed together to facilitate multi-object grasping. We provide necessary conditions for frictionless multi-object push-grasps and apply them to filter inadmissible grasps in a novel multi-object grasp planner. We find that our planner is 19 times faster than a Mujoco simulator baseline. We also propose a picking algorithm that uses both single- and multi-object grasps to pick objects. In physical grasping experiments comparing against a single-object picking baseline, the frictionless multi-object grasping system achieves 13.6% higher grasp success and is 59.9% faster, improving from 212 to 340 picks per hour (PPH). See https://sites.google.com/view/multi-object-grasping for videos and code.

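As a flavor of what filtering inadmissible grasps can look like geometrically, consider a hypothetical filter (not one of the paper's actual necessary conditions, which concern frictionless contact mechanics): a parallel-jaw multi-object push-grasp cannot succeed if the combined width of the objects along the grasp axis exceeds the gripper's maximum opening.

```python
def projected_width(polygon, axis):
    """Width of a convex polygon projected onto a unit grasp axis."""
    dots = [x * axis[0] + y * axis[1] for x, y in polygon]
    return max(dots) - min(dots)

def admissible(polygons, axis, max_opening):
    """Hypothetical geometric filter: reject a multi-object grasp if the
    objects cannot all fit between the jaws along the grasp axis."""
    return sum(projected_width(p, axis) for p in polygons) <= max_opening

# Two 2x2 squares grasped along the x-axis.
squares = [[(0, 0), (2, 0), (2, 2), (0, 2)],
           [(3, 0), (5, 0), (5, 2), (3, 2)]]
print(admissible(squares, (1.0, 0.0), max_opening=5.0))  # True: 2 + 2 <= 5
print(admissible(squares, (1.0, 0.0), max_opening=3.0))  # False: 2 + 2 > 3
```

Cheap necessary-condition checks of this kind let a planner discard candidate grasps before invoking any expensive simulation.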
BulletArm: An Open-Source Robotic Manipulation Benchmark and Learning Framework
Dian Wang, Colin Kohler, Xu Zhu, Ming Jia, Robert W. Platt
International Symposium of Robotics Research, 2022-05-28. DOI: 10.48550/arXiv.2205.14292
Abstract: We present BulletArm, a novel benchmark and learning environment for robotic manipulation. BulletArm is designed around two key principles: reproducibility and extensibility. We aim to encourage more direct comparisons between robotic learning methods by providing a set of standardized benchmark tasks in simulation alongside a collection of baseline algorithms. The framework consists of 31 manipulation tasks of varying difficulty, ranging from simple reaching and picking tasks to more realistic tasks such as bin packing and pallet stacking. In addition to the provided tasks, BulletArm has been built to facilitate easy expansion and provides a suite of tools to assist users in adding new tasks to the framework. Moreover, we introduce a set of five benchmarks and evaluate them using a series of state-of-the-art baseline algorithms. By including these algorithms as part of our framework, we hope to encourage users to benchmark their work on any new tasks against these baselines.

Nonlinear Stochastic Trajectory Optimization for Centroidal Momentum Motion Generation of Legged Robots
Ahmad Gazar, M. Khadiv, Sébastien Kleff, A. Prete, L. Righetti
International Symposium of Robotics Research, 2022-05-26. DOI: 10.48550/arXiv.2205.13264
Abstract: Generating robust trajectories for legged robots remains challenging due to the underlying nonlinear, hybrid, and intrinsically unstable dynamics, which must be stabilized through limited contact forces. Furthermore, disturbances arising from unmodelled contact interactions with the environment and model mismatches can degrade the planned trajectories, leading to unsafe motions. In this work, we propose stochastic trajectory optimization for generating robust centroidal momentum trajectories that account for additive uncertainties in the model dynamics and parametric uncertainties in contact locations. By alternating between robust centroidal and whole-body trajectory optimizations, we generate robust momentum trajectories that remain consistent with the whole-body dynamics. In an extensive set of simulations subjecting a quadruped robot to different uncertainties, our stochastic trajectory optimization reduces the amount of foot slippage for different gaits while achieving better performance than deterministic planning.

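The effect of planning against stochastic rather than nominal dynamics can be illustrated with the standard chance-constraint back-off (a generic sketch, not the paper's formulation; the numbers are hypothetical): a scalar constraint a·x ≤ b under Gaussian uncertainty x ~ N(μ, σ²) is enforced with probability at least 1 − ε by tightening the bound with a margin proportional to σ.

```python
from statistics import NormalDist

def tightened_bound(b, a, sigma, eps):
    """Deterministic bound guaranteeing P(a*x <= b) >= 1 - eps for
    x ~ N(mu, sigma^2): it suffices to enforce a*mu <= returned value.
    The back-off grows with the uncertainty sigma and with confidence."""
    return b - abs(a) * sigma * NormalDist().inv_cdf(1 - eps)

# Hypothetical example: contact-force limit f <= 100 N, predicted force
# standard deviation 5 N, at most 2% violation probability allowed.
margin_bound = tightened_bound(100.0, 1.0, 5.0, 0.02)
print(round(margin_bound, 2))  # 89.73 — the planner keeps a ~10 N margin
```

A stochastic optimizer effectively trades some nominal performance for these safety margins, which is why it slips less than a deterministic planner that pushes constraints to their limits.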
Flying Hydraulically Amplified Electrostatic Gripper System for Aerial Object Manipulation
Dario Tscholl, Stephan-Daniel Gravert, Aurel X. Appius, Robert K. Katzschmann
International Symposium of Robotics Research, 2022-05-25. DOI: 10.48550/arXiv.2205.13011
Abstract: Rapid and versatile object manipulation in air is an open challenge. An energy-efficient and adaptive soft gripper combined with an agile aerial vehicle could revolutionize aerial robotic manipulation in areas such as warehousing. This paper presents a bio-inspired gripper powered by hydraulically amplified electrostatic actuators, mounted on a quadcopter, that can interact safely and naturally with its environment. Our gripping concept is motivated by an eagle's foot. Our custom multi-actuator concept combines a scorpion-tail-inspired design (a base electrode with pouches stacked adjacently) with spider-inspired joints (classic pouch motors with a flexible hinge layer). This hybrid of the two designs achieves higher force output under moderate deflections of up to 25° than single-hinge concepts. In addition, sandwiching the hinge layer improves the robustness of the gripper. For the first time, we show that soft manipulation in air is possible using electrostatic actuation. This study demonstrates the potential of untethered hydraulically amplified actuators in aerial robotic manipulation, and our proof of concept opens up the use of hydraulic electrostatic actuators in mobile aerial systems.

Safe, Occlusion-Aware Manipulation for Online Object Reconstruction in Confined Spaces
Yinglong Miao, Rui Wang, Kostas E. Bekris
International Symposium of Robotics Research, 2022-05-24. DOI: 10.48550/arXiv.2205.11719
Abstract: Recent work in robotic manipulation focuses on object retrieval in cluttered spaces under occlusion. Nevertheless, most efforts either lack an analysis of the conditions under which the approach is complete, or apply only when objects can be removed from the workspace. This work formulates the general occlusion-aware manipulation task and focuses on safe object reconstruction in a confined space with in-place rearrangement. It proposes a framework that ensures safety with completeness guarantees. Furthermore, an algorithm instantiating this abstract framework for monotone instances is developed and evaluated empirically against random and greedy baselines on randomly generated experiments in simulation. Even for cluttered scenes with realistic objects, the proposed algorithm significantly outperforms the baselines and maintains a high success rate across experimental conditions.

Learning Long-Horizon Robot Exploration Strategies for Multi-Object Search in Continuous Action Spaces
F. Schmalstieg, Daniel Honerkamp, T. Welschehold, A. Valada
International Symposium of Robotics Research, 2022-05-23. DOI: 10.48550/arXiv.2205.11384
Abstract: Recent advances in vision-based navigation and exploration have shown impressive capabilities in photorealistic indoor environments. However, these methods still struggle with long-horizon tasks and require large amounts of data to generalize to unseen environments. In this work, we present a novel reinforcement learning approach for multi-object search that combines short-term and long-term reasoning in a single model while avoiding the complexities arising from hierarchical structures. In contrast to existing multi-object search methods that act in granular discrete action spaces, our approach achieves exceptional performance in continuous action spaces. We perform extensive experiments and show that it generalizes to unseen apartment environments with limited data. Furthermore, we demonstrate zero-shot transfer of the learned policies to an office environment in real-world experiments.

CALIPSO: A Differentiable Solver for Trajectory Optimization with Conic and Complementarity Constraints
Taylor A. Howell, Simon Le Cleac'h, K. Tracy, Zachary Manchester
International Symposium of Robotics Research, 2022-05-19. DOI: 10.48550/arXiv.2205.09255
Abstract: We present a new solver for non-convex trajectory optimization problems that is specialized for robotics applications. CALIPSO, the Conic Augmented Lagrangian Interior-Point SOlver, combines several strategies for constrained numerical optimization to natively handle second-order cones and complementarity constraints. It reliably solves challenging motion-planning problems that include contact-implicit formulations of impacts and Coulomb friction and state-triggered constraints, where general-purpose non-convex solvers like SNOPT and Ipopt fail to converge. Additionally, CALIPSO supports efficient differentiation of solutions with respect to problem data, enabling bi-level optimization applications like auto-tuning of feedback policies. Reliable convergence of the solver is demonstrated on a range of problems from manipulation, locomotion, and aerospace domains. An open-source implementation of this solver is available.

Robust-RRT: Probabilistically-Complete Motion Planning for Uncertain Nonlinear Systems
A. Wu, T. Lew, Kiril Solovey, E. Schmerling, M. Pavone
International Symposium of Robotics Research, 2022-05-16. DOI: 10.48550/arXiv.2205.07728
Abstract: Robust motion planning entails computing a global motion plan that is safe under all possible uncertainty realizations, be it in the system dynamics, the robot's initial position, or with respect to external disturbances. Current approaches for robust motion planning either lack theoretical guarantees or make restrictive assumptions about the system dynamics and uncertainty distributions. In this paper, we address these limitations by proposing the robust rapidly-exploring random tree (Robust-RRT) algorithm, which integrates forward reachability analysis directly into sampling-based control trajectory synthesis. We prove that Robust-RRT is probabilistically complete (PC) for nonlinear Lipschitz-continuous dynamical systems with bounded uncertainty; that is, Robust-RRT eventually finds a robust motion plan that is feasible under all possible uncertainty realizations, assuming such a plan exists. Because we explicitly consider the time evolution of reachable sets along control trajectories, our analysis applies even to unstable and unstabilizable systems that admit only short-horizon feasible plans. To the best of our knowledge, this is the most general PC proof for robust sampling-based motion planning in terms of the types of uncertainties and dynamical systems it can handle. Since exact computation of reachable sets can be computationally expensive for some dynamical systems, we incorporate sampling-based reachability analysis into Robust-RRT and demonstrate our robust planner on nonlinear, underactuated, and hybrid systems.
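The combination of sampling-based reachability with tree search can be illustrated with a toy sketch (not the paper's algorithm): each tree node carries a particle set approximating the reachable set under bounded additive disturbance, and an edge is accepted only if every particle stays collision-free. The dynamics, world geometry, and all parameters below are hypothetical.

```python
import math
import random

OBST_C, OBST_R = (0.5, 0.5), 0.15   # disk obstacle in a hypothetical 2D world
NOISE, STEP = 0.005, 0.05           # disturbance bound per step, steering step

def collision_free(particles):
    return all(math.dist(p, OBST_C) > OBST_R for p in particles)

def centroid(particles):
    n = len(particles)
    return (sum(p[0] for p in particles) / n, sum(p[1] for p in particles) / n)

def extend(particles, target):
    """Steer the particle cloud toward target; each particle receives its
    own sampled disturbance (a crude sampling-based reachability estimate)."""
    cx, cy = centroid(particles)
    d = math.dist((cx, cy), target) or 1e-9
    ux, uy = STEP * (target[0] - cx) / d, STEP * (target[1] - cy) / d
    return [(x + ux + random.uniform(-NOISE, NOISE),
             y + uy + random.uniform(-NOISE, NOISE)) for x, y in particles]

def robust_rrt(n_particles=12, iters=1000):
    random.seed(2)
    tree = [[(0.0, 0.0)] * n_particles]   # each node is a particle set
    reached = False
    for _ in range(iters):
        # goal-biased sampling toward the corner (1, 1)
        if random.random() < 0.3:
            target = (1.0, 1.0)
        else:
            target = (random.uniform(0, 1), random.uniform(0, 1))
        near = min(tree, key=lambda ps: math.dist(centroid(ps), target))
        new = extend(near, target)
        if collision_free(new):           # keep the edge only if ALL particles are safe
            tree.append(new)
            if all(x > 0.85 and y > 0.85 for x, y in new):
                reached = True            # every particle entered the goal box
                break
    return reached, tree

reached, tree = robust_rrt()
print(reached, len(tree))
```

The invariant this sketch maintains — no edge is added unless the entire sampled reachable set is safe — is the flavor of guarantee that the paper's reachability-aware extension formalizes.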