{"title":"A Novel Variable Resolution Torque Sensor Based on Variable Stiffness Principle","authors":"Xiantao Sun, Wenjie Chen, Jianbin Zhang, Jianhua Wang, Jun Jiang, Weihai Chen","doi":"10.1109/ICRA48506.2021.9561978","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561978","url":null,"abstract":"High resolution and large range force/torque (F/T) measurements are usually required in many engineering tasks. However, most existing F/T sensors only have a fixed resolution over their whole ranges. The key lies in that it is difficult to well balance high resolution and large range in the sensor design. Taking the torque sensor for example, this paper presents a better compromise for this problem i.e., a novel variable resolution torque sensor based on variable stiffness principle. From the structural points of view, the sensor is constructed with multiple radial flexures to achieve a pure rotational motion with negligible parasitic center motions. Two resistive strain gauges (RSGs) are selected as the measuring units of the sensor to detect the applied external torque and meanwhile provide variable resolutions in the two different measuring ranges (each RSG for one range). Static and dynamic models of the sensor are established in details and validated through finite element analysis (FEA) to evaluate its characteristics. A principle prototype is finally fabricated and tested to verify the effectiveness of the presented design. RSGs are calibrated through a commercial six-axis F/T sensor from ATI Industrial Automation, Inc. Experimental results show that the torque sensor can provide high and low resolutions in the small and large ranges respectively and possesses the first natural frequency of 67.3 Hz. 
In addition, the proposed variable resolution method can also be applied to the development of multi-axis F/T sensors.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130762666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MFPN-6D : Real-time One-stage Pose Estimation of Objects on RGB Images","authors":"Penglei Liu, Qieshi Zhang, Jin Zhang, Fei Wang, Jun Cheng","doi":"10.1109/ICRA48506.2021.9561878","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561878","url":null,"abstract":"6D pose estimation of objects is an important part of robot grasping. The latest research trend on 6D pose estimation is to train a deep neural network to directly predict the 2D projection position of the 3D key points from the image, establish the corresponding relationship, and finally use Pespective-n-Point (PnP) algorithm performs pose estimation. The current challenge of pose estimation is that when the object texture-less, occluded and scene clutter, the detection accuracy will be reduced, and most of the existing algorithm models are large and cannot take the real-time requirements. In this paper, we introduce a Multi-directional Feature Pyramid Network, MFPN, which can efficiently integrate and utilize features. We combined the Cross Stage Partial Network (CSPNet) with MFPN to design a new network for 6D pose estimation, MFPN-6D. At the same time, we propose a new confidence calculation method for object pose estimation, which can fully consider spatial information and plane information. At last, we tested our method on the LINEMOD and Occluded-LINEMOD datasets. 
The experimental results demonstrate that our algorithm is robust to textureless materials and occlusion, while running more efficiently compared to other methods.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131173489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Validation of a Novel Exoskeleton Hand Interface: The Eminence Grip","authors":"Keya Ghonasgi, Chad G. Rose, A. D. Oliveira, Rohit John Varghese, A. Deshpande","doi":"10.1109/ICRA48506.2021.9561744","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561744","url":null,"abstract":"How best to attach exoskeletons to human limbs is an open and understudied problem. In the case of upperbody exoskeletons, cylindrical handles are commonly used attachments due to ease of use and cost effectiveness. However, handles require active grip strength from the user and may result in undesirable flexion synergy stimulation, thus limiting the robot’s effectiveness. This paper presents a new design, the Eminence Grip, for attaching an exoskeleton to the hand while avoiding the undesirable consequences of using a handle. The ergonomic design uses inverse impedance matching and does not require active effort from the user to remain interfaced with the exoskeleton. We compare the performance of the Eminence Grip to the handle design in a healthy subject target reaching experiment. The results show that the Eminence Grip achieves similar performance to a handle in terms of relative motion between the user and the exoskeleton while eliminating the requirement of grip force to transfer loads to/from the exoskeleton and avoiding stimulation of the flexion synergy. 
Taken together, the kinematic equivalence and improvement in ergonomics suggest that the Eminence Grip is a promising exoskeleton-hand attachment interface supporting further experiments with impaired populations.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130728968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predictive Runtime Monitoring for Mobile Robots using Logic-Based Bayesian Intent Inference","authors":"Han-Ul Yoon, S. Sankaranarayanan","doi":"10.1109/ICRA48506.2021.9561193","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561193","url":null,"abstract":"We propose a predictive runtime monitoring framework that forecasts the distribution of future positions of mobile robots in order to detect and avoid impending property violations such as collisions with obstacles or other agents. Our approach uses a restricted class of temporal logic formulas to represent the likely intentions of the agents along with a combination of temporal logic-based optimal cost path planning and Bayesian inference to compute the probability of these intents given the current trajectory of the robot. First, we construct a large but finite hypothesis space of possible intents represented as temporal logic formulas whose atomic propositions are derived from a detailed map of the robot’s workspace. Next, our approach uses real-time observations of the robot’s position to update a distribution over temporal logic formulae that represent its likely intent. This is performed by using a combination of optimal cost path planning and a Boltzmann noisy rationality model. In this manner, we construct a Bayesian approach to evaluating the posterior probability of various hypotheses given the observed states and actions of the robot. Finally, we predict the future position of the robot by drawing posterior predictive samples using a Monte-Carlo method. We evaluate our framework using two different trajectory datasets that contain multiple scenarios implementing various tasks. 
The results show that our method can predict future positions precisely and efficiently, so that the computation time for generating a prediction is a tiny fraction of the overall time horizon.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"203 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133272669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Don’t Blindly Trust Your CNN: Towards Competency-Aware Object Detection by Evaluating Novelty in Open-Ended Environments","authors":"Rhys Howard, Samuel Barrett, Lars Kunze","doi":"10.1109/ICRA48506.2021.9562116","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9562116","url":null,"abstract":"Real-world missions require robots to detect objects in complex and changing environments. While deep learning methods for object detection are able to achieve a high level of performance, they can be unreliable when operating in environments that deviate from training conditions. However, by applying novelty detection techniques, we aim to build an architecture aware of when it cannot make reliable classifications, as well as identifying novel features/data. In this work, we have proposed and evaluated a system that assesses the competence of trained Convolutional Neural Networks (CNNs). This is achieved using three complementary introspection methods: (1) a Convolutional Variational Auto-Encoder (VAE), (2) a latent space Density-adjusted Distance Measure (DDM), and (3) a Spearman’s Rank Correlation (SRC) based approach. Finally these approaches are combined through a weighted sum, with weightings derived by maximising the correct attribution of novelty in an adversarial ‘meta-game’. Our experiments were conducted on real-world data from three datasets spread across two different domains: a planetary and an industrial setting. Results show that the proposed introspection methods are able to detect misclassifications and unknown classes indicative of novel features/data in both domains with up to 67% precision. 
Meanwhile, classification results were either maintained or improved.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133301632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dark Reciprocal-Rank: Teacher-to-student Knowledge Transfer from Self-localization Model to Graph-convolutional Neural Network","authors":"Koji Takeda, Kanji Tanaka","doi":"10.1109/ICRA48506.2021.9561158","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561158","url":null,"abstract":"In visual robot self-localization, graph-based scene representation and matching have recently attracted research interest as robust and discriminative methods for self-localization. Although effective, their computational and storage costs do not scale well to large-size environments. To alleviate this problem, we formulate self-localization as a graph classification problem and attempt to use the graph convolutional neural network (GCN) as a graph classification engine. A straightforward approach is to use visual feature descriptors that are employed by state-of-the-art self-localization systems, directly as graph node features. However, their superior performance in the original self-localization system may not necessarily be replicated in GCN-based self-localization. To address this issue, we introduce a novel teacher-to-student knowledge-transfer scheme based on rank matching, in which the reciprocal-rank vector output by an off-the-shelf state-of-the-art teacher self-localization model is used as the dark knowledge to transfer. Experiments indicate that the proposed graph-convolutional self-localization network (GCLN) can significantly outperform state-of-the-art self-localization systems, as well as the teacher classifier. 
The code and dataset are available at https://github.com/KojiTakeda00/Reciprocal_rank_KT_GCN.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133675894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dexterous Manoeuvre through Touch in a Cluttered Scene","authors":"Wenyu Liang, Qinyuan Ren, Xiao-Qi Chen, Junli Gao, Yan Wu","doi":"10.1109/ICRA48506.2021.9562061","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9562061","url":null,"abstract":"Manipulation in a densely cluttered environment creates complex challenges in perception to close the control loop, many of which are due to the sophisticated physical interaction between the environment and the manipulator. Drawing from biological sensory-motor control, to handle the task in such a scenario, tactile sensing can be used to provide an additional dimension of the rich contact information from the interaction for decision making and action selection to manoeuvre towards a target. In this paper, a new tactile-based motion planning and control framework based on bioinspiration is proposed and developed for a robot manipulator to manoeuvre in a cluttered environment. An iterative two-stage machine learning approach is used in this framework: an autoencoder is used to extract important cues from tactile sensory readings while a reinforcement learning technique is used to generate optimal motion sequence to efficiently reach the given target. The framework is implemented on a KUKA LBR iiwa robot mounted with a SynTouch BioTac tactile sensor and tested with real-life experiments. 
The results show that the system is able to move the end-effector through the cluttered environment to reach the target effectively.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133679402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Metrics Calculation for Assembly Systems with Exponential Reliability Machines","authors":"Yishu Bai, Liang Zhang","doi":"10.1109/ICRA48506.2021.9561947","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561947","url":null,"abstract":"Assembly systems are commonly seen in production practice, where multiple components are joined in a manufacturing process to make a final product. In this paper, a decomposition/aggregation-based method is presented to evaluate the performance metrics of assembly systems with machines following the exponential reliability model (either synchronous or asynchronous). In particular, we consider the assembly system with multiple merge operations, each connected to a single external component line. The idea of the proposed method is to decompose the assembly system into a set of virtual serial lines based on the overlapping decomposition technique, evaluate of the starvation and blockage of the merge operations, and recursively update of the virtual machines’ parameters in the thus-obtained serial lines. Then, the performance metrics of the original assembly system can be approximated based on the corresponding machines and buffers in the virtual serial lines. 
Numerical experiments are carried out to justify the convergence and computational efficiency of the method, as well as to evaluate the approximation accuracy of the proposed algorithm.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132162292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Environment Reconfiguration Planning for Autonomous Robotic Manipulation to overcome Mobility Constraints","authors":"Prateek Arora, C. Papachristos","doi":"10.1109/ICRA48506.2021.9560799","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9560799","url":null,"abstract":"This paper presents a novel strategy for intelligent robotic environment reconfiguration applied to overcome mobility constraints with an autonomously exploring mobile manipulation system. A realistic problem arising during exploration of unknown challenging environments is the encountering of untraversable areas –given the robot’s mobility constraints– resulting in the robot getting stuck. We propose that given manipulation capabilities of an autonomous system, it should be possible to leverage loose entities in its surrounding to reconfigure its environment, and therefore potentially restore traversability to an unreachable region. This work’s contribution is two-fold: first, it proposes a mid-range traversability estimation graph-based backend which also allows early detection of terrain gaps, and secondly, it provides an algorithm for focused environment alteration that ensures stable and valid configurations. The plans of this generic policy are evaluated to decide if they resolve the robot’s problem, and are subsequently applied. 
The effectiveness of the proposed approach is demonstrated via experimental studies using a relevant autonomous system within a mobility-constrained mock-up environment.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"48 77","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134480897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Embodied Reasoning for Discovering Object Properties via Manipulation","authors":"J. Behrens, Michal Nazarczuk, K. Štěpánová, M. Hoffmann, Y. Demiris, K. Mikolajczyk","doi":"10.1109/ICRA48506.2021.9561212","DOIUrl":"https://doi.org/10.1109/ICRA48506.2021.9561212","url":null,"abstract":"In this paper, we present an integrated system that includes reasoning from visual and natural language inputs, action and motion planning, executing tasks by a robotic arm, manipulating objects, and discovering their properties. A vision to action module recognises the scene with objects and their attributes and analyses enquiries formulated in natural language. It performs multi-modal reasoning and generates a sequence of simple actions that can be executed by a robot. The scene model and action sequence are sent to a planning and execution module that generates a motion plan with collision avoidance, simulates the actions, and executes them. We use synthetic data to train various components of the system and test on a real robot to show the generalization capabilities. We focus on a tabletop scenario with objects that can be grasped by our embodied agent i.e. a 7DoF manipulator with a two-finger gripper. We evaluate the agent on 60 representative queries repeated 3 times (e.g., ’Check what is on the other side of the soda can’) concerning different objects and tasks in the scene. We perform experiments in a simulated and real environment and report the success rate for various components of the system. Our system achieves up to 80.6% success rate on challenging scenes and queries. 
We also analyse and discuss the challenges that such an intelligent embodied system faces.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134620042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}