Highly Manoeuvrable Eversion Robot Based on Fusion of Function with Structure
T. Abrar, F. Putzu, A. Ataka, Hareesh Godaba, K. Althoefer
2021 IEEE International Conference on Robotics and Automation (ICRA). DOI: 10.1109/ICRA48506.2021.9561873
Abstract: Despite their soft and compliant bodies, most of today's soft robots have limitations when it comes to elongation or extension of their main structure. In contrast, a new type of soft robot, the eversion robot, can grow longitudinally by exploiting the principle of eversion. Eversion robots can squeeze through narrow openings, giving them access to places that are inaccessible to conventional robots. The main drawback of these robots is their limited bending capability, owing to their tendency to move along a straight line. In this paper, we propose a novel way to fuse bending actuation with the robot's structure. We devise an eversion robot whose body forms both the central chamber that acts as the backbone and the actuators that bend and manoeuvre the manipulator. The proposed technique shows significantly improved bending capability compared to externally attaching actuators to an eversion robot, with a 133% improvement in bending angle. Due to the increased manoeuvrability, the proposed solution is a step towards the employment of eversion robots in remote and difficult-to-access environments.

{"title":"AXLE: Computationally-efficient trajectory smoothing using factor graph chains","authors":"Edwin Olson","doi":"10.1109/ICRA48506.2021.9561823","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561823","url":null,"abstract":"Factor graph chains– the special case of a factor graph in which there are no potentials connecting non-adjacent nodes– arise naturally in many robotics problems. Importantly, they are often part of an inner loop in trajectory optimization and estimation problems, and so applications can be very sensitive to the performance of a solver.Of course, it is well-known that factor graph chains have an O(N) solution, but an actual solution is often left as \"an exercise to the reader\"… with the inevitable consequence that few (if any) efficient solutions are readily available.In this paper, we carefully derive the solution while keeping track of the specific block structure that arises, we work through a number of practical implementation challenges, and we highlight additional optimizations that are not at first apparent. An easy-to-use and self-contained solver is provided in C, which outperforms the AprilSAM general-purpose sparse matrix factorization library by a factor of 7.3x even without specialized block operations.The name AXLE reflects the names of the key matrices involved (the approach here solves the linear problem AX = E by factoring A as LLT), while also reflecting its key application in kino-dynamic trajectory estimation of vehicles with axles.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128663025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Efficient Multiview Object Detection with Adaptive Action Prediction
Qianli Xu, Fen Fang, Nicolas Gauthier, Wenyu Liang, Yan Wu, Liyuan Li, J. Lim
2021 IEEE International Conference on Robotics and Automation (ICRA). DOI: 10.1109/ICRA48506.2021.9561388
Abstract: Active vision is a desirable perceptual capability for robots. Existing approaches usually make strong assumptions about the task and environment, and are therefore less robust and efficient. This study proposes an adaptive view planning approach to boost the efficiency and robustness of active object detection. We formulate the multi-object detection task as an active multiview object detection problem given the initial locations of the objects. We then propose a novel adaptive action prediction (A2P) method built on a deep Q-learning network with a dueling architecture. The A2P method can perform view planning based on visual information from multiple objects and adjust action ranges according to the task status. Evaluated on the AVD dataset, A2P leads to a 21.9% increase in detection accuracy in unfamiliar environments, while improving efficiency by 22.7%. On the T-LESS dataset, multi-object detection boosts efficiency by more than 30% while achieving equivalent detection accuracy.

{"title":"Automatic Mapping of Tailored Landmark Representations for Automated Driving and Map Learning","authors":"Jan-Hendrik Pauls, Benjamin Schmidt, C. Stiller","doi":"10.1109/ICRA48506.2021.9561432","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561432","url":null,"abstract":"While the automatic creation of maps for localization is a widely tackled problem, the automatic inference of higher layers of HD maps is not. Additionally, approaches that learn from maps require richer and more precise landmarks than currently available.In this work, we fuse semantic detections from a monocular camera with depth and orientation estimation from lidar to automatically detect, track and map parametric, semantic map elements. We propose the use of tailored representations that are minimal in the number of parameters, making the map compact and the estimation robust and precise enough to enable map inference even from single frame detections. As examples, we map traffic signs, traffic lights and poles using upright rectangles and cylinders.After robust multi-view optimization, traffic lights and signs have a mean absolute position error of below 10 cm, extent estimates are below 5 cm and orientation MAE is below 6◦. This proves the suitability as automatically generated, pixel-accurate ground truth, reducing the task of ground truth generation from tedious 3D annotation to a post-processing of misdetections.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124708104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Tactile Models for Factor Graph-based Estimation
Paloma Sodhi, M. Kaess, Mustafa Mukadam, Stuart Anderson
2021 IEEE International Conference on Robotics and Automation (ICRA). DOI: 10.1109/ICRA48506.2021.9561011
Abstract: We are interested in the problem of estimating object states from touch during manipulation under occlusions. In this work, we address the problem of estimating object poses from touch during planar pushing. Vision-based tactile sensors provide rich, local image measurements at the point of contact. A single such measurement, however, contains limited information, and multiple measurements are needed to infer the latent object state. We solve this inference problem using a factor graph. To incorporate tactile measurements in the graph, we need local observation models that can map high-dimensional tactile images onto a low-dimensional state space. Prior work has used low-dimensional force measurements or engineered functions to interpret tactile measurements. These methods, however, can be brittle and difficult to scale across objects and sensors. Our key insight is to directly learn tactile observation models that predict the relative pose of the sensor given a pair of tactile images. These relative poses can then be incorporated as factors within a factor graph. We propose a two-stage approach: first we learn local tactile observation models supervised with ground-truth data, and then we integrate these models along with physics and geometric factors within a factor graph optimizer. We demonstrate reliable object tracking using only tactile feedback for ~150 real-world planar pushing sequences with varying trajectories across three object shapes.

{"title":"Two-Stage Trajectory Optimization for Flapping Flight with Data-Driven Models","authors":"J. Hoff, Joohyung Kim","doi":"10.1109/ICRA48506.2021.9561752","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561752","url":null,"abstract":"Underactuated robots often require involved routines for trajectory planning due to their complex dynamics. Flapping-wing aerial vehicles have unsteady aerodynamics and periodic gaits that complicate the planning procedure. In this paper, we improve upon existing methods for flight planning by introducing a two-stage optimization routine to plan flapping flight trajectories. The first stage solves a trajectory optimization problem with a data-driven fixed-wing approximation model trained with experimental flight data. The solution to this is used as the initial guess for a second stage optimization using a flapping-wing model trained with the same flight data. We demonstrate the effectiveness of this approach with a bat robot in both simulation and experimental flight results. The speed of convergence, the dependency on the initial guess, and the quality of the solution are improved, and the robot is able to track the optimized trajectory of a dive maneuver.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126841688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social Navigation for Mobile Robots in the Emergency Department
Angelique Taylor, S. Matsumoto, Wesley Xiao, L. Riek
2021 IEEE International Conference on Robotics and Automation (ICRA). DOI: 10.1109/ICRA48506.2021.9561897
Abstract: The emergency department (ED) is a safety-critical environment in which healthcare workers (HCWs) are overburdened, overworked, and have limited resources, especially during the COVID-19 pandemic. One way to address this problem is to explore the use of robots that can support clinical teams, e.g., to deliver materials or restock supplies. However, because EDs are overcrowded and HCWs experience cognitive overload, robots need to understand various levels of patient acuity so they avoid disrupting care delivery. In this paper, we introduce the Safety-Critical Deep Q-Network (SafeDQN) system, a new acuity-aware navigation system for mobile robots. SafeDQN is based on two insights about care in EDs: high-acuity patients tend to have more HCWs in attendance, and those HCWs tend to move more quickly. We compared SafeDQN to three classic navigation methods and show that it generates the safest, quickest path for mobile robots when navigating in a simulated ED environment. We hope this work encourages future exploration of social robots that work in safety-critical, human-centered environments, and ultimately helps to improve patient outcomes and save lives.

{"title":"PuzzleBots: Physical Coupling of Robot Swarms","authors":"Sha Yi, Zeynep Temel, K. Sycara","doi":"10.1109/ICRA48506.2021.9561610","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561610","url":null,"abstract":"Robot swarms have been shown to improve the ability of individual robots by inter-robot collaboration. In this paper, we present the PuzzleBots - a low-cost robotic swarm system where robots can physically couple with each other to form functional structures with minimum energy consumption while maintaining individual mobility to navigate within the environment. Each robot has knobs and holes along the sides of its body so that the robots can couple by inserting the knobs into the holes. We present the characterization of knob design and the result of gap-crossing behavior with up to nine robots. We show with hardware experiments that the robots are able to couple with each other to cross gaps and decouple to perform individual tasks. We anticipate the PuzzleBots will be useful in unstructured environments as individuals and coupled systems in real-world applications.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127043732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Monocular Visual-Inertial Depth Completion for Embedded Systems","authors":"Nate Merrill, Patrick Geneva, G. Huang","doi":"10.1109/ICRA48506.2021.9561174","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561174","url":null,"abstract":"In this work we augment our prior state-of-the-art visual-inertial odometry (VIO) system, OpenVINS [1], to produce accurate dense depth by filling in sparse depth estimates (depth completion) from VIO with image guidance – all while focusing on enabling real-time performance of the full VIO+depth system on embedded devices. We show that noisy depth values with varying sparsity produced from a VIO system can not only hurt the accuracy of predicted dense depth maps, but also make them considerably worse than those from an image-only depth network with the same underlying architecture. We investigate this sensitivity on both an outdoor simulated and indoor handheld RGB-D dataset, and present simple yet effective solutions to address these shortcomings of depth completion networks. The key changes to our state-of-the-art VIO system required to provide high quality sparse depths for the network while still enabling efficient state estimation on embedded devices are discussed. A comprehensive computational analysis is performed over different embedded devices to demonstrate the efficiency and accuracy of the proposed VIO depth completion system.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129201894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time Friction Estimation for Grip Force Control","authors":"Heba Khamis, Benjamin Xia, S. Redmond","doi":"10.1109/ICRA48506.2021.9561640","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561640","url":null,"abstract":"An important capability of humans when performing dexterous precision gripping tasks is our ability to feel both the weight and slipperiness of an object in real-time, and adjust our grip force accordingly. In this paper, we present for the first time a fully-instrumented version of our PapillArray tactile sensor concept, which can sense grip force, object weight, and incipient slip and friction, all in real-time. We demonstrate the real-time estimation of friction and measurement of 3D force from PapillArray sensors mounted on each finger of a two-finger gripper, combined with a closed-loop grip-force control algorithm that dynamically applies a near-optimal grip force to avoid dropping objects of varying weight and friction. A vertical lifting task was performed using an object with varying weight and friction, and with some common household items. After intentionally adding a 20% safety margin on the target grip force, the actual grip force applied was only 9-30 % greater than that required to avoid slip. Future work will focus on incorporating real-time torque measurement into the grip force feedback control. This will significantly advance the state-of-the-art in artificial tactile sensing and bring us closer to robotic dexterity.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129229367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}