{"title":"Reducing Uncertainty in Pose Estimation under Complex Contacts via Force Forecast","authors":"Huitan Mao, J. Xiao","doi":"10.1109/ICRA40945.2020.9197190","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197190","url":null,"abstract":"How to reduce uncertainty in object pose estimation under complex contacts is crucial to autonomous robotic manipulation and assembly. In this paper, we introduce an approach through forecasting contact force from simulated complex contacts with calibration based on real force sensing. A constraint-based haptic simulation algorithm is used with sphere-tree representation of contacting objects to compute contact poses and forces, and through matching the computed forces to measured real force data via a regression model, the least-uncertain estimate of the relative contact pose is obtained. Our approach can handle multi-region complex contacts and does not make any assumption about contact types or contact locations. It also does not have restriction on object shapes. We have applied the force forecast approach to reducing uncertainty in estimating object poses in challenging peg-in-hole robotic assembly tasks and demonstrate the effectiveness of the approach by successful completion of contact-rich two-pin and three-pin real peg-in-hole assembly tasks with complex shapes of pins and holes.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"2 4 1","pages":"2661-2667"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78335647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Bilevel Optimization for UAV Time-Optimal Trajectory using a Duality Gap Approach","authors":"Gao Tang, Weidong Sun, Kris K. Hauser","doi":"10.1109/ICRA40945.2020.9196789","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196789","url":null,"abstract":"Time-optimal trajectories for dynamic robotic vehicles are difficult to compute even for state-of-the-art nonlinear programming (NLP) solvers, due to nonlinearity and bang-bang control structure. This paper presents a bilevel optimization framework that addresses these problems by decomposing the spatial and temporal variables into a hierarchical optimization. Specifically, the original problem is divided into an inner layer, which computes a time-optimal velocity profile along a given geometric path, and an outer layer, which refines the geometric path by a Quasi-Newton method. The inner optimization is convex and efficiently solved by interior-point methods. The gradients of the outer layer can be analytically obtained using sensitivity analysis of parametric optimization problems. A novel contribution is to introduce a duality gap in the inner optimization rather than solving it to optimality; this lets the optimizer realize warm-starting of the interior-point method, avoids non-smoothness of the outer cost function caused by active inequality constraint switching. Like prior bilevel frameworks, this method is guaranteed to return a feasible solution at any time, but converges faster than gap-free bilevel optimization. Numerical experiments on a drone model with velocity and acceleration limits show that the proposed method performs faster and more robustly than gap-free bilevel optimization and general NLP solvers.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"563 1","pages":"2515-2521"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77767681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Schmidt-EKF-based Visual-Inertial Moving Object Tracking","authors":"Kevin Eckenhoff, Patrick Geneva, Nate Merrill, G. Huang","doi":"10.1109/ICRA40945.2020.9197352","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197352","url":null,"abstract":"In this paper we investigate the effect of tightly-coupled estimation on the performance of visual-inertial localization and dynamic object pose tracking. In particular, we show that while a joint estimation system outperforms its decoupled counterpart when given a \"proper\" model for the target’s motion, inconsistent modeling, such as choosing improper levels for the target’s propagation noises, can actually lead to a degradation in ego-motion accuracy. To address the realistic scenario where a good prior knowledge of the target’s motion model is not available, we design a new system based on the Schmidt-Kalman Filter (SKF), in which target measurements do not update the navigation states, however all correlations are still properly tracked. This allows for both consistent modeling of the target errors and the ability to update target estimates whenever the tracking sensor receives non-target data such as bearing measurements to static, 3D environmental features. We show in extensive simulation that this system, along with a robot-centric representation of the target, leads to robust estimation performance even in the presence of an inconsistent target motion model. Finally, the system is validated in a real-world experiment, and is shown to offer accurate localization and object pose tracking performance.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"651-657"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79890133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Material Handling by Humanoid Robot While Pushing Carts Using a Walking Pattern Based on Capture Point","authors":"Jean Chagas Vaz, P. Oh","doi":"10.1109/ICRA40945.2020.9196872","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196872","url":null,"abstract":"This paper presents a study that evaluates the effects on the walking pattern of a full-sized humanoid robot as it pushes different carts. Furthermore, it discuss a modified Zero Moment Point (ZMP) pattern based on a capture point method, and a friction compensation method for the arms. Humanoid researchers have demonstrated that robots can perform a wide range of tasks including handling tools, climbing ladders, and patrolling rough terrain. However, when it comes to handling objects while walking, humanoids are relatively limited; it becomes more apparent when humanoids have to push a cart. Many challenges become evident under such circumstances; for example, the walking pattern will be severely affected by the external force opposed by the cart. Therefore, an appropriate walking pattern dynamic model and arm compliance are needed to mitigate external forces. This becomes crucial in order to ensure the robot’s self-balance and minimize external disturbances.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"40 1","pages":"9796-9801"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80372120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analytical Expressions of Serial Manipulator Jacobians and their High-Order Derivatives based on Lie Theory*","authors":"Zhongtao Fu, Emmanouil Spyrakos-Papastavridis, Yen-hua Lin, J. Dai","doi":"10.1109/ICRA40945.2020.9197131","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197131","url":null,"abstract":"Serial manipulator kinematics provide a mapping between joint variables in joint-space coordinates, and end-effector configurations in task-space Cartesian coordinates. Velocity mappings are represented via the manipulator Jacobian produced by direct differentiation of the forward kinematics. Acquisition of acceleration, jerk, and snap expressions, typically utilized for accurate trajectory-tracking, requires the computation of high-order Jacobian derivatives. As compared to conventional numerical/D-H approaches, this paper proposes a novel methodology to derive the Jacobians and their high-order derivatives symbolically, based on Lie theory, which requires that the derivatives are calculated with respect to each joint variable and time. Additionally, the technique described herein yields a mathematically sound solution to the high-order Jacobian derivatives, which distinguishes it from other relevant works. Performing computations with respect to the two inertial-fixed and body-fixed frames, the analytical form of the spatial and body Jacobians are derived, as well as their higher-order derivatives, without resorting to any approximations, whose expressions would depend explicitly on the joint state and the choice of reference frames. The proposed method provides more tractable computation of higher-order Jacobian derivatives, while its effectiveness has been verified by conducting a comparative analysis based on experimental data extracted from a KUKA LRB iiwa7 R800 manipulator.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"43 1","pages":"7095-7100"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76782613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MagicHand: Context-Aware Dexterous Grasping Using an Anthropomorphic Robotic Hand","authors":"Hui Li, Jindong Tan, Hongsheng He","doi":"10.1109/ICRA40945.2020.9196538","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196538","url":null,"abstract":"Understanding of characteristics of objects such as fragility, rigidity, texture and dimensions facilitates and innovates robotic grasping. In this paper, we propose a context- aware anthropomorphic robotic hand (MagicHand) grasping system which is able to gather various information about its target object and generate grasping strategies based on the perceived information. In this work, NIR spectra of target objects are perceived to recognize materials on a molecular level and RGB-D images are collected to estimate dimensions of the objects. We selected six most used grasping poses and our system is able to decide the most suitable grasp strategies based on the characteristics of an object. Through multiple experiments, the performance of the MagicHand system is demonstrated.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"45 1","pages":"9895-9901"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77191337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Framework for Interactive Virtual Fixture Generation for Shared Teleoperation in Unstructured Environments","authors":"Vitalii Pruks, J. Ryu","doi":"10.1109/ICRA40945.2020.9196579","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196579","url":null,"abstract":"Virtual fixtures (VFs) improve human operator performance in teleoperation scenarios. However, the generation of VFs is challenging, especially in unstructured environments. In this work, we introduce a framework for the interactive generation of VF. The method is based on the observation that a human can easily understand just by looking at the remote environment which VF could help in task execution. We propose a user interface that detects features on camera images and permits interactive selection of the features. We demonstrate how the feature selection can be used for designing VF, providing 6-DOF haptic feedback. In order to make the proposed framework more generally applicable to a wider variety of applications, we formalize the process of virtual fixture generation (VFG) into the specification of features, geometric primitives, and constraints. The framework can be extended further by the introduction of additional components. Through the human subject study, we demonstrate the proposed framework is intuitive, easy to use while effective, especially for performing hard contact tasks.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"62 1","pages":"10234-10241"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82542262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stability Criteria of Balanced and Steppable Unbalanced States for Full-Body Systems with Implications in Robotic and Human Gait","authors":"William Z. Peng, Carlotta Mummolo, Joo H. Kim","doi":"10.1109/ICRA40945.2020.9196820","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196820","url":null,"abstract":"Biped walking involves a series of transitions between single support (SS) and double support (DS) contact configurations that can include both balanced and unbalanced states. The new concept of steppability is introduced to partition the set of unbalanced states into steppable states and falling (unsteppable) states based on the ability of a biped system to respond to forward velocity perturbations by stepping. In this work, a complete system-specific analysis of the stepping process including full-order nonlinear system dynamics is presented for the DARwIn-OP humanoid robot and a human subject in the sagittal plane with respect to both balance stability and steppability. The balance stability and steppability of each system are analyzed by numerical construction of its balance stability boundaries (BSB) for the initial SS and final DS contact configuration and the steppable unbalanced state boundary (SUB). These results are presented with center of mass (COM) trajectories obtained from walking experiments to benchmark robot controller performance and analyze the variation of balance stability and steppability with COM and swing foot position along the progression of a step cycle. For each system, DS BSBs were obtained with both constrained and unconstrained arms in order to demonstrate the ability of this approach to incorporate the effects of angular momentum and system-specific characteristics such as actuation torque, velocity, and angle limits.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"28 1","pages":"9789-9795"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82575846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Practical Persistence Reasoning in Visual SLAM","authors":"Z. S. Hashemifar, Karthik Dantu","doi":"10.1109/ICRA40945.2020.9196913","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196913","url":null,"abstract":"Many existing SLAM approaches rely on the assumption of static environments for accurate performance. However, several robot applications require them to traverse repeatedly in semi-static or dynamic environments. There has been some recent research interest in designing persistence filters to reason about persistence in such scenarios. Our goal in this work is to incorporate such persistence reasoning in visual SLAM. To this end, we incorporate persistence filters [1] into ORB-SLAM, a well-known visual SLAM algorithm. We observe that the simple integration of their proposal results in inefficient persistence reasoning. Through a series of modifications and using two locally collected datasets, we demonstrate the utility of such persistence filtering as well as our customizations in ORB-SLAM. Overall, incorporating persistence filtering could result in a significant reduction in map size (about 30% in the best case) and a corresponding reduction in run-time while retaining similar accuracy to methods that use much larger maps.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"17 1","pages":"7307-7313"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81401788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Highly Parallelizable Plane Extraction for Organized Point Clouds Using Spherical Convex Hulls","authors":"Hannes Möls, Kailai Li, U. Hanebeck","doi":"10.1109/ICRA40945.2020.9197139","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197139","url":null,"abstract":"We present a novel region growing algorithm for plane extraction of organized point clouds using the spherical convex hull. Instead of explicit plane parameterization, our approach interprets potential underlying planes as a series of geometric constraints on the sphere that are refined during region growing. Unlike existing schemes relying on downsampling for sequential execution in real time, our approach enables pixelwise plane extraction that is highly parallelizable. We further test the proposed approach with a fully parallel implementation on a GPU. Evaluation based on public data sets has shown state-of-the-art extraction accuracy and superior speed compared to existing approaches, while guaranteeing real-time processing at full input resolution of a typical RGB-D camera.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"41 1","pages":"7920-7926"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78667209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}