{"title":"Laser-Supported Monocular Visual Tracking for Natural Environments","authors":"Georges Chahine, C. Pradalier","doi":"10.1109/ICAR46387.2019.8981604","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981604","url":null,"abstract":"This paper presents and demonstrates a 2D laser-supported visual tracking solution, that can achieve reliable performance in unstructured scenes such as those seen in natural environment surveys. The method is shown to sufficiently stabilize scale and account for scale drift, as well as improve overall reliability. The suggested method is minimally invasive, does not require any additional parameter and does not necessarily require a laser, which can be replaced by any set of points with known depth, with no constraint on the temporal continuity of known points. We also test our method on 4 surveys, captured in a natural riverine environment, that proved to be challenging even for the state-of-the-art in visual tracking.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"154 1","pages":"801-806"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73434165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continuous Deep Maximum Entropy Inverse Reinforcement Learning using online POMDP","authors":"Júnior A. R. Silva, V. Grassi, D. Wolf","doi":"10.1109/ICAR46387.2019.8981548","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981548","url":null,"abstract":"A vehicle navigating in an urban environment must obey traffic rules by properly setting its speed, such as staying below the road speed limit and avoiding collision with other vehicles. This is presumably the scenario that autonomous vehicles will face: they will share the traffic roads with other vehicles (autonomous or not), cooperatively interacting with them. In other words, autonomous vehicles should not only follow traffic rules, but should also behave in such a way that resembles other vehicles behavior. However, manually specification of such behavior is a time-consuming and error-prone task, since driving in urban roads is a complex task, which involves many factors. This paper presents a multitask decision making framework that learns an expert driver's behavior driving in an urban scenario containing traffic lights and other vehicles. For this purpose, Inverse Reinforcement Learning (IRL) is used to learn a reward function that explains the expert driver's behavior. Most IRL approaches require solving a Markov Decision Process (MDP) in each iteration of the algorithm to compute the optimal policy given the current rewards. Nevertheless, the computational cost of solving an MDP is high when considering large state spaces. To overcome this issue, the optimal policy is estimated by sampling trajectories in regions of the space with higher rewards. To do so, the problem is modeled as a continuous Partially Observed Markov Decision Process (POMDP), in which the intentions of other vehicles are only partially observed. An online solver is employed in order to sample trajectories given the current rewards. 
The efficiency of the proposed framework is demonstrated through simulations, showing that the controlled vehicle is be able to mimic an expert driver's behavior.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"20 1","pages":"382-387"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77610158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tightly Coupled Semantic RGB-D Inertial Odometry for Accurate Long-Term Localization and Mapping","authors":"Naman Patel, F. Khorrami, P. Krishnamurthy, A. Tzes","doi":"10.1109/ICAR46387.2019.8981658","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981658","url":null,"abstract":"In this paper, we utilize semantically enhanced feature matching and visual inertial bundle adjustment to improve the robustness of odometry especially in feature-sparse environments. A novel semantically enhanced feature matching algorithm is developed for robust: 1) medium and long-term tracking, and 2) loop-closing. Additionally, a semantic visual inertial bundle adjustment algorithm is introduced to robustly estimate pose in presence of ambiguous correspondences or in feature sparse environment. Our tightly coupled semantic RGB-D odometry approach is demonstrated on a real world indoor dataset collected using our unmanned ground vehicle (UGV). Our approach improves traditional visual odometry relying on low-level geometric features like corners, points, and planes for localization and mapping. Additionally, prior approaches are limited due to their sensitivity to scene geometry and changes in light intensity. The semantic inertial odometry is especially important to significantly reduce drifts in longer intervals.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"23 1","pages":"523-528"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78976296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object-based differential Localization of Mobile Robots using sparse 2D Lidar Data","authors":"Marc Forstenhäusler, M. Karl, K. Dietmayer","doi":"10.1109/ICAR46387.2019.8981622","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981622","url":null,"abstract":"The highly accurate pose estimation of mobile robots with respect to a known target object is a key technology for autonomous industrial manufacturing processes. Current approaches generally assume that the environment is static and locate the robot in relation to pre-defined positions. This paper presents an implementation and validation of how to localize a mobile robot in relation to the coordinate system of the target object - a prerequisite for any kind of manipulation or interaction. This allows the robot to be localized to arbitrarily positioned objects in the environment. For experimental validation, a high-precision external tracking system is used as ground truth. In this way, objects of different shapes are evaluated from different viewpoints. We achieve a pose estimation accuracy of less than 1 cm in a real world scenario.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"45 1","pages":"598-603"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81353070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assistive method that controls joint stiffness and antagonized angle based on human joint stiffness characteristics and its application to an exoskeleton","authors":"Seigo Kimura, Ryuji Suzuki, Masashi Kashima, M. Okui, Rie Nishihama, Taro Nakamura","doi":"10.1109/ICAR46387.2019.8981664","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981664","url":null,"abstract":"In this paper, we propose an assistance method that controls joint stiffness and the antagonized angle using variable elastic elements. The proposed system changes the stiffness and angle so that they correspond to the phase of movement and performs movement assistance in cooperation with the wearer. To achieve structural variability in the configuration of stiffness and the antagonized angle, we propose a joint structure in which the artificial muscle and tension spring are antagonistically arranged. While performing a movement, motion analysis was conducted to investigate the change in joint stiffness and antagonized angle. We confirmed that the proposed joint and human joint have the same tendency while in motion.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"65 1","pages":"553-559"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88166517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Shared Control of a Flexible Endoscope with Autonomous Distal Tip Alignment","authors":"M. Ahmad, M. Ourak, Caspar Gruijthuijsen, Julie Legrand, Tom Kamiel Magda Vercauteren, J. Deprest, S. Ourselin, E. V. Poorten","doi":"10.1109/ICAR46387.2019.8981563","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981563","url":null,"abstract":"Open womb surgery for prenatal therapy is an extreme option which may induce severe side effects on both the fetus and the mother. Minimal invasive surgery (MIS) is, in general, less strenuous than open surgery. Unfortunately, the current tools and techniques are not optimized for the unique environment of the womb. Used in prenatal MIS, the physician visually assesses the in-utero environment, the fetus and the placenta. This work deals with the use of fetoscopic instruments in photocoagulation therapy using lasers for Twin-to-Twin Transfusion Syndrome (TTTS). The TTTS procedure consists of identifying and coagulating the anastomosing vessels. We propose a fetoscopic instrument with a flexible steerable distal tip as opposed to currently used rigid scopes. The enhanced dexterity improves the ability of the surgeon to coagulate otherwise hard to access regions. We anticipate usability issues and a potentially steep learning curve as currently surgeons solely work with non-bendable instruments. To alleviate this problem, a shared control approach is proposed in which the surgeon controls the position of the instrument inside the uterus while an autonomous controller controls the orientation. The system is validated by testing on a novel instrument with 2 actuated degrees of freedom in an in-silico setup featuring a real placenta image, a motion tracking system and a mechanical fulcrum point to mimic the incision port. 
The autonomous distal tip controller achieved an overall 4.75° RMSE with respect to the desired orientation, which is within the targeted range of orientations.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"106 1","pages":"647-653"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87869063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Health Monitoring System with Hybrid Bayesian Network for Autonomous Vehicle","authors":"I. P. Gomes, D. Wolf","doi":"10.1109/ICAR46387.2019.8981565","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981565","url":null,"abstract":"Autonomous Vehicles should transform the urban transport scenario. However, to be able to navigate completely autonomously, they also need to deal with faults that its components are subject to. Therefore, Health Monitoring System, is a component of the autonomous system which constantly monitor the integrity of those components, so that safety measures are taken as soon as an abnormal condition is detected. This paper presents a Health Monitoring System using Component-based Hierarchical approach and Hybrid Bayesian Networks with Residue Evidence for Fault Detection and Diagnosis in lateral and longitudinal controllers, and also in the GPS sensor. Finally, the results demonstrated the reliability of the proposed methods for Fault Detection and Diagnosis, and also highlighted the importance of safety protocols for Autonomous Vehicles.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"1086 1","pages":"260-265"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76715694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual attention system based on Fuzzy Classifier to define priority of traffic signs for intelligent robotic vehicle navigation purposes","authors":"Diego Renan Bruno, F. Osório","doi":"10.1109/ICAR46387.2019.8981546","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981546","url":null,"abstract":"In this paper we propose the use of Multiple Decision Attributes and Fuzzy Sets so that it is possible to classify the importance and priority of the detected traffic signs. The Analytic Hierarchy Process (AHP) was applied to calculate attribute weights, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to classify the traffic signs into their importance levels. The main objective is to contribute with a new system of perception and, through a knowledge base rules set, to be able to semantically relate the scene and to define which traffic sign is more important in a certain moment of navigation of the autonomous vehicle. The system of vision with 2D and 3D images must provide the a priori data of detection and classification of traffic signs for the fuzzy visual attention system, being able to detect the use of auxiliary signs (cones and emergency signs) and relates. Then, relate then to the detection of the navigable area in cases of road blocking (road at work, with a traffic accident, etc.) and give priority to the most important signs for the decision making of the vehicle. 
The results are promising and very satisfactory, we obtained an accuracy of 98.9% in the 2D classification task and 88% accuracy in the single frame 3D detection task.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"50 6 1","pages":"434-440"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91001595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Sound-source Position Estimation using Mutual Information for Next Best View Motion Planning","authors":"Mohammad Fattahi Sani, Brendan Emery, D. Caldwell, L. Mattos, N. Deshpande","doi":"10.1109/ICAR46387.2019.8981599","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981599","url":null,"abstract":"Robotic operations in the real world rely on bearing-only sensors, e.g., optical, acoustic, etc., to sense physical phenomena. Bearing-only sensors are limited because they do not provide range information. To overcome this limitation in estimating source locations, the most common solutions involve making multiple measurements from different locations, either through multiple sensors in the field or a single moving sensor, and then applying triangulation or filtering. In unknown environments with single motion-capable sensors (e.g., mobile robots with on-board sensors), planned motion of the sensor can allow accurate and efficient source position estimation. This paper presents a novel approach in estimating the locations of stationary sources, using a motion-capable sensor. The proposed method combines the concepts of Extended Kalman Filter (EKF) and Mutual Information (MI) from information theory to estimate the Next Best View (NBV) pose to which the sensor should be moved. A utility function, that accounts for the movement cost, the characteristics of the sensor, and the MI and EKF information, facilitates efficient estimation. The proposed algorithm has been implemented in the realworld for Sound-source Position Estimation (SPE), using an acoustic sensor mounted at the end of a 6 degrees-of-freedom (DOF) robotic manipulator. 
The algorithm, termed as NBV-SPE, proves its utility and performance through preliminary indoor and outdoor experiments for sound sources in 3D space.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"56 1","pages":"24-29"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83580058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A low-cost, compliant, underactuated prosthetic hand with custom flex sensors for finger bending estimation","authors":"Luiza Amador Pozzobon, R. S. Guerra, G. Librelotto","doi":"10.1109/ICAR46387.2019.8981648","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981648","url":null,"abstract":"Access to quality prosthetics to aid in daily activities is a privilege of only 5 % of those in need of such equipment. It is noted that the high cost of these, associated with the lack of skilled labor, are two of the factors that aggravate the situation. Thus, the need for cost-effective prosthetic technologies, targeting the population of developing countries, is observed. Inspired by this problem, this paper presents the process of conceptual study, design, and prototyping of a low-cost prosthetic hand that is compliant and underactuated. The hand has a wrist of two degrees of freedom, and five independently actuated fingers. One of the main contributions of this work is the design of a low-cost optoelectronic sensor for the finger's curvature estimation.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"49 1","pages":"69-74"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83276633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}