{"title":"Towards accurate modeling of modular soft pneumatic robots: from volume FEM to Cosserat rod","authors":"M. Wiese, B. Cao, A. Raatz","doi":"10.1109/IROS47612.2022.9981628","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9981628","url":null,"abstract":"Compared to their rigid counterparts, soft material robotic systems offer great advantages when it comes to flexibility and adaptability. Despite their advantages, modeling of soft systems is still a challenging task, due to the continuous and often highly nonlinear nature of deformation these systems exhibit. Tasks like motion planning or design optimization of soft robots require computationally cheap models of the system's behavior. In this paper we address this need by deriving operational point dependent Cosserat rod models from detailed volume finite element models (FEM). While the latter offer detailed simulations, they generally come with high computational burden that hinders them from being used in time critical model-based methods like motion planning or control. Basic Cosserat rod models promise to provide computationally efficient mechanical models of soft continuum robots. By using a detailed FE model in an offline stage to identify operational point dependent Cosserat rod models, we bring together the accuracy of volumetric FEM with the efficiency of Cosserat rod models. We apply the approach to a fiber reinforced soft pneumatic bending actuator module (SPA module) and evaluate the model's predictive capabilities for a single module as well as a two-module robot.","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128770222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-fingered Tactile Servoing for Grasping Adjustment under Partial Observation","authors":"Hanzhong Liu, Bidan Huang, Qiang Li, Yu Zheng, Yonggen Ling, Wangwei Lee, Yi Liu, Ya-Yen Tsai, Chenguang Yang","doi":"10.1109/IROS47612.2022.9981464","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9981464","url":null,"abstract":"Grasping of objects using multi-fingered robotic hands often fails due to small uncertainties in the hand motion control and the object's pose estimation. To tackle this problem, we propose a grasping adjustment strategy based on tactile seroving. Our technique employs feedback from a sensorized multi-fingered robotic hand to collaboratively servo the fingers and palm to achieve the desired grasp. We demonstrate the performance of our method through simulation and physical experiments by having a robot grasp different objects under conditions of variable uncertainty. The results show that our approach achieved a higher success rate and tolerated greater uncertainty than an open-looped grasp.","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128570061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DRPD, Dual Reduction Ratio Planetary Drive for Articulated Robot Actuators","authors":"Tae-Gyu Song, Young-Ha Shin, Seungwoo Hong, Hyungho Chris Choi, Joon-ha Kim, Hae-won Park","doi":"10.1109/IROS47612.2022.9981201","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9981201","url":null,"abstract":"This paper presents a reduction mechanism for robot actuators that can switch between two types of reduction ratio. By fixing the carrier or ring gear of the proposed actuator which is based on the 3K compound planetary drive, the actuator can shift its reduction ratio. For compact design with reduced weight of the actuator, unique pawl brake mechanism interacting with cams and micro servos for switching mechanism is designed. The resulting prototype module has a reduction ratio of 6.91 and 44.93 for ‘low-reduction’ and ‘high-reduction’ ratios, respectively. Reduction ratios can be easily adjusted by modifying the pitch diameters of gears. Experimental results demonstrate that the proposed actuator could extend its operation region via two reduction modes that are interchangeable with gear shifting.","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124565731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"T3VIP: Transformation-based $3mathrm{D}$ Video Prediction","authors":"Iman Nematollahi, Erick Rosete-Beas, Seyed Mahdi B. Azad, Raghunandan Rajan, F. Hutter, Wolfram Burgard","doi":"10.1109/IROS47612.2022.9981187","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9981187","url":null,"abstract":"For autonomous skill acquisition, robots have to learn about the physical rules governing the 3D world dynamics from their own past experience to predict and reason about plausible future outcomes. To this end, we propose a transformation-based 3D video prediction (T3VIP) approach that explicitly models the 3D motion by decomposing a scene into its object parts and predicting their corresponding rigid transformations. Our model is fully unsupervised, captures the stochastic nature of the real world, and the observational cues in image and point cloud domains constitute its learning signals. To fully leverage all the 2D and 3D observational signals, we equip our model with automatic hyperparameter optimization (HPO) to interpret the best way of learning from them. To the best of our knowledge, our model is the first generative model that provides an RGB-D video prediction of the future for a static camera. Our extensive evaluation with simulated and real-world datasets demonstrates that our formulation leads to interpretable 3D models that predict future depth videos while achieving on-par performance with 2D models on RGB video prediction. Moreover, we demonstrate that our model outperforms 2D baselines on visuomotor control. Videos, code, dataset, and pre-trained models are available at http://t3vip.cs.uni-freiburg.de.","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124592635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable and Modular Ultra-Wideband Aided Inertial Navigation","authors":"R. Jung, S. Weiss","doi":"10.1109/IROS47612.2022.9981937","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9981937","url":null,"abstract":"Navigating accurately in potentially GPS-denied environments is a perquisite of autonomous systems. Relative localization based on ultra-wideband (UWB) is - especially indoors - a promising technology. In this paper, we present a probabilistic filter based Modular Multi-Sensor Fusion (MMSF) approach with the capability of using efficiently all information in a fully meshed UWB ranging network. This allows an accurate mobile agent state estimation and the calibration of the ranging network's spatial constellation. We advocate a new paradigm that includes elements from Collaborative State Estimation (CSE) and allows us considering all stationary UWB anchors and the mobile agent as a decentralized set of estimtors/filters. With this, our method can include all meshed (inter-)sensor observations tightly coupled in a modular estimator. We show that the application of our CSE-inspired method in such a context breaks the computational barrier. Otherwise, it would, for the sakeof complexity-reduction, prohibit the use of all available information or would lead to significant estimator inconsistencies due to coarse approximations. We compare the proposed approach against different MMSF strategies in terms of execution time, accuracy, and filter credibility on both synthetic data and on a dataset from real Unmanned Aerial Vehicles (UAVs).","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126774273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computation and Selection of Secure Gravity Based Caging Grasps of Planar Objects","authors":"Alon Shirizly, E. Rimon","doi":"10.1109/IROS47612.2022.9982151","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9982151","url":null,"abstract":"Gravity based caging grasps are robotic grasps where the robot hand passively supports an object against gravity. When a robot hand supports an object at a local minimum of the object gravitational energy, the robot hand forms a basket like grasp of the object. Any object movement in a basket grasp requires an increase of the object gravitational energy, thus allowing secure object pickup and transport with robot hands that use a small number fingers. The basket grasp depth measures the minimal additional energy the object must acquire to escape the basket grasp. This paper describes a computation scheme that determines the depth of entire sets of candidate basket grasps associated with alternative finger placements on the object boundary before pickup. The computation relies on categorization of escape stances that mark the basket grasp depth: double-support escapes are first analyzed and computed, then single-support escapes are analyzed and computed. The minimum energy combination of both types of escape stances defines the depth of entire sets of candidate basket grasps, which is then used to identify the deepest and hence most secure basket grasp. The computation scheme is fully implemented and demonstrated on several examples with reported run-times.","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123879805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-Time Distributed Multi-Robot Target Tracking via Virtual Pheromones","authors":"Joseph Prince Mathew, Cameron Nowzari","doi":"10.1109/IROS47612.2022.9981262","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9981262","url":null,"abstract":"Actively searching for targets using a multi-agent system in an unknown environment poses a two-pronged prob-lem, where on the one hand we need agents to cover as much of the environment as possible and on the other have a higher density of agents where there are potential targets to maximize detection performance. This paper proposes a fully distributed solution for an ad hoc network of agents to cooperatively search an unknown environment and actively track found targets. The solution combines a distributed pheromone-based coverage control strategy with a distributed target selection mechanism.","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123935980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-Automatic Infrared Calibration for Augmented Reality Systems in Surgery*","authors":"Hisham Iqbal, F. Baena","doi":"10.1109/IROS47612.2022.9982215","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9982215","url":null,"abstract":"Augmented reality (AR) has the potential to improve the immersion and efficiency of computer-assisted orthopaedic surgery (CAOS) by allowing surgeons to maintain focus on the operating site rather than external displays in the operating theatre. Successful deployment of AR to CAOS requires a calibration that can accurately calculate the spatial relationship between real and holographic objects. Several studies attempt this calibration through manual alignment or with additional fiducial markers in the surgical scene. We propose a calibration system that offers a direct method for the calibration of AR head-mounted displays (HMDs) with CAOS systems, by using infrared-reflective marker-arrays widely used in CAOS. In our fast, user-agnostic setup, a HoloLens 2 detected the pose of marker arrays using infrared response and time-of-flight depth obtained through sensors onboard the HMD. Registration with a commercially available CAOS system was achieved when an IR marker-array was visible to both devices. Study tests found relative-tracking mean errors of 2.03 mm and 1.12° when calculating the relative pose between two static marker-arrays at short ranges. When using the calibration result to provide in-situ holographic guidance for a simulated wire- insertion task, a pre-clinical test reported mean errors of 2.07 mm and 1.54° when compared to a pre-planned trajectory.","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123988251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object Surface Recognition using Microphone Array by Acoustic Standing Wave","authors":"T. Manabe, Rikuto Fukunaga, K. Nakatsuma, M. Kumon","doi":"10.1109/IROS47612.2022.9981386","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9981386","url":null,"abstract":"This paper proposes a microphone array with a speaker to recognize the shape of the surface of the target object by using the standing wave between the transmitted and the reflected acoustic signals. Because the profile of the distance spectrum encodes both the distance to the target and the distance to the edges of the target's surface, this paper proposes to fuse distance spectra using a microphone array to estimate the three-dimensional structure of the target surface. The proposed approach was verified through numerical simulations and outdoor field experiments. Results showed the effectiveness of the method as it could extract the shape of the board located 2m in front of the microphone array by using a chirp tone with 20kHz bandwidth.","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114351149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to Guide Online Multi-Contact Receding Horizon Planning","authors":"Jiayi Wang, T. Lembono, Sanghyun Kim, S. Calinon, S. Vijayakumar, S. Tonneau","doi":"10.1109/IROS47612.2022.9981234","DOIUrl":"https://doi.org/10.1109/IROS47612.2022.9981234","url":null,"abstract":"In Receding Horizon Planning (RHP), it is critical that the motion being executed facilitates the completion of the task, e.g. building momentum to overcome large obstacles. This requires a value function to inform the desirability of robot states. However, given the complex dynamics, value functions are often approximated by expensive computation of trajectories in an extended planning horizon. In this work, to achieve online multi-contact Receding Horizon Planning (RHP), we propose to learn an oracle that can predict local objectives (intermediate goals) for a given task based on the current robot state and the environment. Then, we use these local objectives to construct local value functions to guide a short-horizon RHP. To obtain the oracle, we take a supervised learning approach, and we present an incremental training scheme that can improve the prediction accuracy by adding demonstrations on how to recover from failures. We compare our approach against the baseline (long-horizon RHP) for planning centroidal trajectories of humanoid walking on moderate slopes as well as large slopes where static stability cannot be achieved. We validate these trajectories by tracking them via a whole-body inverse dynamics controller in simulation. We show that our approach can achieve online RHP for 95%-98.6% cycles, outperforming the baseline (8%-51.2%).","PeriodicalId":431373,"journal":{"name":"2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114614753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}