{"title":"A Bio-Signal Enhanced Adaptive Impedance Controller for Lower Limb Exoskeleton","authors":"Lin-qing Xia, Yachun Feng, Fan Chen, Xinyu Wu","doi":"10.1109/ICRA40945.2020.9196774","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196774","url":null,"abstract":"The problem of human-exoskeleton interaction with uncertain dynamical parameters remains an open-ended research area. It requires an elaborate control strategy design of the exoskeleton to accommodate complex and unpredictable human body movements. In this paper, we proposed a novel control approach for the lower limb exoskeleton to realize its task of assisting the human operator walking. The main challenge of this study was to determine the human lower extremity dynamics, such as the joint torque. For this purpose, we developed a neural network-based torque estimation method. It can predict the joint torques of humans with surface electromyogram signals (sEMG). Then an radial basis function neural network (RBF NN) enhanced adaptive impedance controller is employed to ensure exoskeleton track desired motion trajectory of a human operator. Algorithm performance is evaluated with two healthy subjects and the rehabilitation lower-limb exoskeleton developed by Shenzhen Institutes of Advanced Technology (SIAT).","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"12 1","pages":"4739-4744"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84315270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CCAN: Constraint Co-Attention Network for Instance Grasping","authors":"Junhao Cai, X. Tao, Hui Cheng, Zhanpeng Zhang","doi":"10.1109/ICRA40945.2020.9197182","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197182","url":null,"abstract":"Instance grasping is a challenging robotic grasping task when a robot aims to grasp a specified target object in cluttered scenes. In this paper, we propose a novel end-to-end instance grasping method using only monocular workspace and query images, where the workspace image includes several objects and the query image only contains the target object. To effectively extract discriminative features and facilitate the training process, a learning-based method, referred to as Constraint Co-Attention Network (CCAN), is proposed which consists of a constraint co-attention module and a grasp affordance predictor. An effective co-attention module is presented to construct the features of a workspace image from the extracted features of the query image. By introducing soft constraints into the co-attention module, it highlights the target object’s features while trivializes other objects’ features in the workspace image. Using the features extracted from the co-attention module, the cascaded grasp affordance interpreter network only predicts the grasp configuration for the target object. The training of the CCAN is totally based on simulated self-supervision. Extensive qualitative and quantitative experiments show the effectiveness of our method both in simulated and real-world environments even for totally unseen objects.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"8 1","pages":"8353-8359"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84972810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid Topological and 3D Dense Mapping through Autonomous Exploration for Large Indoor Environments","authors":"Clara Gómez, M. Fehr, A. Millane, A. C. Hernández, Juan I. Nieto, R. Barber, R. Siegwart","doi":"10.1109/ICRA40945.2020.9197226","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197226","url":null,"abstract":"Robots require a detailed understanding of the 3D structure of the environment for autonomous navigation and path planning. A popular approach is to represent the environment using metric, dense 3D maps such as 3D occupancy grids. However, in large environments the computational power required for most state-of-the-art 3D dense mapping systems is compromising precision and real-time capability. In this work, we propose a novel mapping method that is able to build and maintain 3D dense representations for large indoor environments using standard CPUs. Topological global representations and 3D dense submaps are maintained as hybrid global map. Submaps are generated for every new visited place. A place (room) is identified as an isolated part of the environment connected to other parts through transit areas (doors). This semantic partitioning of the environment allows for a more efficient mapping and path-planning. We also propose a method for autonomous exploration that directly builds the hybrid representation in real time.We validate the real-time performance of our hybrid system on simulated and real environments regarding mapping and path-planning. The improvement in execution time and memory requirements upholds the contribution of the proposed work.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"38 1","pages":"9673-9679"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85624719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of Gait Cycle Percentage Using Instrumented Shoes with Artificial Neural Networks","authors":"Antonio Prado, Xiya Cao, Xiangzhuo Ding, S. Agrawal","doi":"10.1109/ICRA40945.2020.9196747","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196747","url":null,"abstract":"Gait training is widely used to treat gait abnormalities. Traditional gait measurement systems are limited to instrumented laboratories. Even though gait measurements can be made in these settings, it is challenging to estimate gait parameters robustly in real-time for gait rehabilitation, especially when walking over-ground. In this paper, we present a novel approach to track the continuous gait cycle during overground walking outside the laboratory. In this approach, we instrument standard footwear with a sensorized insole and an inertial measurement unit. Artificial neural networks are used on the raw data obtained from the insoles and IMUs to compute the continuous percentage of the gait cycle for the entire walking session. We show in this paper that when tested with novel subjects, we can predict the gait cycle with a Root Mean Square Error (RMSE) of 7.2%. The onset of each cycle can be detected within an RMSE time of 41.5 ms with a 99% detection rate. The algorithm was tested with 18840 strides collected from 24 adults. In this paper, we tested a combination of fully-connected layers, an Encoder-Decoder using convolutional layers, and recurrent layers to identify an architecture that provided the best performance.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"54 1","pages":"2834-2840"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85875753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using multiple short hops for multicopter navigation with only inertial sensors","authors":"Xiangyu Wu, M. Mueller","doi":"10.1109/ICRA40945.2020.9196610","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196610","url":null,"abstract":"In certain challenging environments, such as inside buildings on fire, the main sensors (e.g. cameras, LiDARs and GPS systems) used for multicopter localization can become unavailable. Direct integration of the inertial navigation sensors (the accelerometer and rate gyroscope), is however unaffected by external disturbances, but the rapid error accumulation quickly makes a naive application of such a strategy feasible only for very short durations. In this work we propose a motion strategy for reducing the inertial navigation state estimation error of multicopters. The proposed strategy breaks a long duration flight into multiple short duration hops between which the vehicle remains stationary on the ground. When the vehicle is stationary, zero-velocity pseudo-measurements are introduced to an extended Kalman Filter to reduce the state estimation error. We perform experiments for closed-loop control of a multicopter for evaluation. The mean absolute position estimation error was 3.4% over a total flight distance of 5m in the experiments. The results showed a 80% reduction compared to the standard inertial navigation method without using this strategy. In addition, an additional experiment with total flight distance of 10m is conducted to demonstrate the ability of this method to navigate a multicopter in real-world environment. The final trajectory tracking error was 3% of the total flight distance.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"109 1","pages":"8559-8565"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77102778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated detection of soleus concentric contraction in variable gait conditions for improved exosuit control","authors":"R. Nuckols, K. Swaminathan, Sangjun Lee, L. Awad, C. Walsh, R. Howe","doi":"10.1109/ICRA40945.2020.9197428","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197428","url":null,"abstract":"Exosuits can reduce metabolic demand and improve gait. Controllers explicitly derived from biological mechanisms that reflect the user's joint or muscle dynamics should in theory allow for individualized assistance and enable adaptation to changing gait. With the goal of developing an exosuit control strategy based on muscle power, we present an approach for estimating, at real time rates, when the soleus muscle begins to generate positive power. A low-profile ultrasound system recorded B-mode images of the soleus in walking individuals. An automated routine using optical flow segmented the data to a normalized gait cycle and estimated the onset of concentric contraction at real-time rates (~130Hz). Segmentation error was within 1% of the gait cycle compared to using ground reaction forces. Estimation of onset of concentric contraction had a high correlation (R2=0.92) and an RMSE of 2.6% gait cycle relative to manual estimation. We demonstrated the ability to estimate the onset of concentric contraction during fixed speed walking in healthy individuals that ranged from 39.3% to 45.8% of the gait cycle and feasibility in two persons post-stroke walking at comfortable walking speed. We also showed the ability to measure a shift in onset timing to 7% earlier when the biological system adapts from level to incline walking. Finally, we provided an initial evaluation for how the onset of concentric contraction might be used to inform exosuit control in level and incline walking.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"21 1","pages":"4855-4862"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78557831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced Teleoperation Using Autocomplete","authors":"Mohammad Kassem Zein, Abbas Sidaoui, Daniel C. Asmar, I. Elhajj","doi":"10.1109/ICRA40945.2020.9197140","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197140","url":null,"abstract":"Controlling and manning robots from a remote location is difficult because of the limitations one faces in perception and available degrees of actuation. Although humans can become skilled teleoperators, the amount of training time required to acquire such skills is typically very high. In this paper, we propose a novel solution (named Autocomplete) to aid novice teleoperators in manning robots adroitly. At the input side, Autocomplete relies on machine learning to detect and categorize human inputs as one from a group of motion primitives. Once a desired motion is recognized, at the actuation side an automated command replaces the human input in performing the desired action. So far, Autocomplete can recognize and synthesize lines, arcs, full circles, 3-D helices, and sine trajectories. Autocomplete was tested in simulation on the teleoperation of an unmanned aerial vehicle, and results demonstrate the advantages of the proposed solution versus manual steering.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"126 1","pages":"9178-9184"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73603108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear Synchronization Control for Short-Range Mobile Sensors Drifting in Geophysical Flows","authors":"Cong Wei, H. Tanner, M. A. Hsieh","doi":"10.1109/ICRA40945.2020.9196701","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196701","url":null,"abstract":"This paper presents a synchronization controller for mobile sensors that are minimally actuated and can only communicate with each other over a very short range. This work is motivated by ocean monitoring applications where large-scale sensor networks consisting of drifters with minimal actuation capabilities, i.e., active drifters, are employed. We assume drifters are tasked to monitor regions consisting of gyre flows where their trajectories are periodic. As drifters in neighboring regions move into each other's proximity, it presents an opportunity for data exchange and synchronization to ensure future rendezvous. We present a nonlinear synchronization control strategy to ensure that drifters will periodically rendezvous and maximize the time they are in their rendezvous regions. Numerical simulations and small-scale experiments validate the efficacy of the control strategy and hint at extensions to large-scale mobile sensor networks.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"907-913"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85505442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos","authors":"A. Tanwani, P. Sermanet, Andy Yan, Raghav V. Anand, Mariano Phielipp, Ken Goldberg","doi":"10.1109/ICRA40945.2020.9197324","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197324","url":null,"abstract":"Learning meaningful visual representations in an embedding space can facilitate generalization in downstream tasks such as action segmentation and imitation. In this paper, we learn a motion-centric representation of surgical video demonstrations by grouping them into action segments/subgoals/options in a semi-supervised manner. We present Motion2Vec, an algorithm that learns a deep embedding feature space from video observations by minimizing a metric learning loss in a Siamese network: images from the same action segment are pulled together while pushed away from randomly sampled images of other segments, while respecting the temporal ordering of the images. The embeddings are iteratively segmented with a recurrent neural network for a given parametrization of the embedding space after pre-training the Siamese network. We only use a small set of labeled video segments to semantically align the embedding space and assign pseudo-labels to the remaining unlabeled data by inference on the learned model parameters. We demonstrate the use of this representation to imitate surgical suturing kinematic motions from publicly available videos of the JIGSAWS dataset. Results give 85.5% segmentation accuracy on average suggesting performance improvement over several state-of-the-art baselines, while kinematic pose imitation gives 0.94 centimeter error in position per observation on the test set. Videos, code and data are available at: https://sites.google.com/view/motion2vec","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"2174-2181"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84069170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Feature-Based Underwater Path Planning Approach using Multiple Perspective Prior Maps","authors":"Daniel Cagara, M. Dunbabin, P. Rigby","doi":"10.1109/ICRA40945.2020.9196680","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196680","url":null,"abstract":"This paper presents a path planning methodology which enables Autonomous Underwater Vehicles (AUVs) to navigate in shallow complex environments such as coral reefs. The approach leverages prior information from an aerial photographic survey, and derived bathymetric information of the corresponding area. From these prior maps, a set of features is obtained which define an expected arrangement of objects and bathymetry likely to be perceived by the AUV when underwater. A navigation graph is then constructed by predicting the arrangement of features visible from a set of test points within the prior, which allows the calculation of the shortest paths from any pair of start and destination points. A maximum likelihood function is defined which allows the AUV to match its observations to the navigation graph as it undertakes its mission. To improve robustness, the history of observed features are retained to facilitate possible recovery from non-detectable or misclassified objects. The approach is evaluated using a photo-realistic simulated environment, and results illustrate the merits of the approach even when only a relatively small number of features can be identified from the prior map.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"78 1","pages":"8573-8579"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84086342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}