{"title":"Visual-Inertial Teach & Repeat for Aerial Robot Navigation","authors":"M. Nitsche, Facundo Pessacg, Javier Civera","doi":"10.1109/ECMR.2019.8870926","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870926","url":null,"abstract":"This paper presents a Teach & Repeat (T&R) algorithm from stereo and inertial data, targeting Unmanned Aerial Vehicles with limited on-board computational resources. We propose a tightly-coupled, relative formulation of the visual-inertial constraints that fits the T&R application. In order to achieve real-time operation on limited hardware, we constraint it to motion-only visual-inertial Bundle Adjustment and solve for the minimal set of states. For the repeat phase, we show how to generate a trajectory and smoothly follow it with a constantly changing reference frame. The proposed method is validated with the sequences of the EuRoC dataset as well as within a simulated environment, running on a standard laptop PC and on a low-cost Odroid X-U4 computer.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129778685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trajectory Following using Nonlinear Model Predictive Control and 3D Point-Cloud-based Localization for Autonomous Driving","authors":"Ajish Babu, Kerim Yener Yurtdas, C. Koch, Mehmed Yüksel","doi":"10.1109/ECMR.2019.8870956","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870956","url":null,"abstract":"In autonomous driving, the trajectory follower is one of the critical controllers which should be capable of handling different driving scenarios. Most of the existing controllers are limited to a particular driving scenario and for a specific vehicle model. In this work, the trajectory follower is formulated as a nonlinear model predictive control problem and solved using the multiple-shooting trajectory optimization method, Gauss-Newton Multiple Shooting. This solver has already been used for other control applications and provides the flexibility to use different nonlinear models. The controller is tested using a retrofitted autonomous driving platform, along with the 3D point-cloud-based mapping and localization algorithms. The nonlinear model being used is a classical kinematic bicycle model. Due to the high nonlinearity between the vehicle inputs, throttle and brake, and the acceleration, the longitudinal speed control uses an additional piece-wise linear mapping. The results from the initial tests, while following a predefined trajectory on a Go-Kart test-track, are evaluated and presented here.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124007256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed 3D TSDF Manifold Mapping for Multi-Robot Systems","authors":"Thibaud Duhautbout, J. Moras, J. Marzat","doi":"10.1109/ECMR.2019.8870930","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870930","url":null,"abstract":"This paper presents a new method to perform collaborative real-time dense 3D mapping in a distributed way for a multi-robot system. This method associates a Truncated Signed Distance Function (TSDF) representation with a manifold structure. Each robot owns a private map which is composed of a collection of local TSDF sub-maps called patches that are locally consistent. This private map can be shared to build a public map collecting all the patches created by the robots of the fleet. In order to maintain consistency in the global map, a mechanism of patch alignment and fusion has been added. This work has been integrated in real-time into a mapping stack, which can be used for autonomous navigation in unknown and cluttered environment. Experimental results on a team of wheeled mobile robots are reported to demonstrate the practical interest of the proposed system, in particular for the exploration of unknown areas.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126854994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emergency Landing Aware Surveillance Planning for Fixed-wing Planes","authors":"Petr Váňa, J. Faigl, Jakub Sláma","doi":"10.1109/ECMR.2019.8870933","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870933","url":null,"abstract":"In this paper, we introduce the Emergency Landing Aware Surveillance Planning (ELASP) problem that stands to find the shortest feasible trajectory to visit a given set of locations while considering a loss of thrust may happen to the vehicle at any time. Two main challenges can be identified in ELASP. First, the ELASP is a planning problem to determine a feasible close-loop trajectory visiting all given locations such that the total trajectory length is minimized, which is a variant of the traveling salesman problem. The second challenge arises from the safety constraints to determine the cost-efficient trajectory such that its altitude is sufficiently high to guarantee a gliding emergency landing to a nearby airport from any point of the trajectory. Methods to address these challenges individually already exist, but the proposed approach enables to combine the existing methods to address both challenges at the same time and returns a safe, feasible, and cost-efficient multi-goal trajectory for the curvature-constrained vehicle.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114829108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Segmentation of Depth Images into Objects Based on Polyhedral Shape Class Model","authors":"R. Cupec, D. Filko, Petra Durovic","doi":"10.1109/ECMR.2019.8870917","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870917","url":null,"abstract":"A novel approach for object detection in depth images based on a polyhedral shape class model is proposed. The proposed segmentation algorithm decides whether a subset of image points represents a physical object on the scene or not by comparing its 3D shape to several shape classes. The algorithm is designed for cluttered scenes with simple convex or hollow convex objects. The proposed algorithm is trained using a set of 3D models of objects belonging to several shape classes, which are expected to appear in the scene. The presented method is experimentally evaluated using a publicly available benchmark dataset and compared to three state-of-the art approaches.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124492879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Query Generation for Resolving Ambiguity in User's Command for a Mobile Service Robot","authors":"Kazuho Morohashi, J. Miura","doi":"10.1109/ECMR.2019.8870919","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870919","url":null,"abstract":"This paper describes a method of generating queries for resolving ambiguities in the user's command in service robotics applications. We deal with bring-me tasks, in which a robot brings a user-specified object from a distant place. A user's command may be ambiguous due to various reasons such as the uncertainty in his/her knowledge of the distant scene and the robot's knowledge. In such a case, the robot compares its recognition result with the command and generates a query for disambiguation. Based on previous VDQG (visual discriminative question generation) work, we develop a method for query generation using the concept of attribute contrast with the attribute categorization. We verified our method by comparing the user and the generated queries. We also implemented a robotic system, as a proof-of-concept, that can interact with the user and certainly achieve bring-me tasks.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116939681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Life-Long Autonomy of Mobile Robots Through Feature-Based Change Detection","authors":"Erik Derner, Clara Gómez, A. C. Hernández, R. Barber, R. Babuška","doi":"10.1109/ECMR.2019.8870940","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870940","url":null,"abstract":"Autonomous mobile robots are becoming increasingly important in many industrial and domestic environments. Dealing with unforeseen situations is a difficult problem that must be tackled in order to move closer to the ultimate goal of life-long autonomy. In computer vision-based methods employed on mobile robots, such as localization or navigation, one of the major issues is the dynamics of the scenes. The autonomous operation of the robot may become unreliable if the changes that are common in dynamic environments are not detected and managed. Moving chairs, opening and closing doors or windows, replacing objects on the desks and other changes make many conventional methods fail. To deal with that, we present a novel method for change detection based on the similarity of local visual features. The core idea of the algorithm is to distinguish important stable regions of the scene from the regions that are changing. To evaluate the change detection algorithm, we have designed a simple visual localization framework based on feature matching and we have performed a series of real-world localization experiments. The results have shown that the change detection method substantially improves the accuracy of the robot localization, compared to using the baseline localization method without change detection.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127169908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of RGB-D camera for different object types in greenhouse conditions","authors":"Ola Ringdahl, P. Kurtser, Y. Edan","doi":"10.1109/ECMR.2019.8870935","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870935","url":null,"abstract":"RGB-D cameras play an increasingly important role in localization and autonomous navigation of mobile robots. Reasonably priced commercial RGB-D cameras have recently been developed for operation in greenhouse and outdoor conditions. They can be employed for different agricultural and horticultural operations such as harvesting, weeding, pruning and phenotyping. However, the depth information extracted from the cameras varies significantly between objects and sensing conditions. This paper presents an evaluation protocol applied to a commercially available Fotonic F80 time-of-flight RGB-D camera for eight different object types. A case study of autonomous sweet pepper harvesting was used as an exemplary agricultural task. Each of the objects chosen is a possible item that an autonomous agricultural robot must detect and localize to perform well. A total of 340 rectangular regions of interests (ROI) were marked for the extraction of performance measures of point cloud density, and variability around center of mass, 30–100 ROIs per object type. An additional 570 ROIs were generated (57 manually and 513 replicated) to evaluate the repeatability and accuracy of the point cloud. A statistical analysis was performed to evaluate the significance of differences between object types. The results show that different objects have significantly different point density. Specifically metallic materials and black colored objects had significantly less point density compared to organic and other artificial materials introduced to the scene as expected. The point cloud variability measures showed no significant differences between object types, except for the metallic knife that presented significant outliers in collected measures. The accuracy and repeatability analysis showed that 1–3 cm errors are due to the the difficulty for a human to annotate the exact same area and up to ±4 cm error is due to the sensor not generating the exact same point cloud when sensing a fixed object.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130621830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Indoor Scene Recognition based on Weighted Voting Schemes","authors":"A. C. Hernández, Clara Gómez, Erik Derner, R. Barber","doi":"10.1109/ECMR.2019.8870931","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870931","url":null,"abstract":"Scene understanding represents one of the most primary problems in computer vision. It implies the full knowledge of all the elements of the environment and the comprehension of the relationships between them. One of the major tasks in this process is the scene recognition, on which we focus in this work. Scene recognition is a relevant and helpful task in many robotic fields such as navigation, localization, manipulation, among others. The knowledge of the place (e.g. “office”, “classroom” or “kitchen”) can improve the performance of robots in indoor environments. This task can be difficult because of the variability, ambiguity, illumination changes, occlusions and scale variability present in this type of spaces. Commonly, this problem has been approached through the development of models based on local and global characteristics, incorporating context information and, more recently, using deep learning techniques. In this paper, we propose a multi-classifier model for scene recognition considering as priors the outcomes of independent base classifiers. We implement a weighted voting scheme based on genetic algorithms for the combination of different classifiers in order to improve the recognition performance. The results have proved the validity of our approach and how the proper combination of independent classifier models makes it possible to find a better and more efficient solution for the scene recognition problem.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131376311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lifelong Mapping using Adaptive Local Maps","authors":"Nandan Banerjee, D. Lisin, Jimmy Briggs, Martín Llofriu, Mario E. Munich","doi":"10.1109/ECMR.2019.8870347","DOIUrl":"https://doi.org/10.1109/ECMR.2019.8870347","url":null,"abstract":"Occupancy mapping enables a mobile robot to make intelligent planning decisions to accomplish its tasks. Adaptive local maps is an algorithm which represents the occupancy information as a set of overlapping local maps anchored to poses in the robot's trajectory. At any time, a global occupancy map can be rendered from the local maps to be used for path planning. The advantage of this approach is that the occupancy information stays consistent despite the changes in the pose estimates resulting from loop closures and localization updates. The disadvantage, however, is that the number of local maps grows over time. For long robot runs, or for multiple runs in the same space, this growth will result in redundant occupancy information, which will in turn increase the time it takes to render the global map, as well as the memory footprint of the system. In this paper, we propose a novel approach for the maintenance of an adaptive local maps system, which intelligently prunes redundant local maps, ensuring the robustness and stability required for lifelong mapping.","PeriodicalId":435630,"journal":{"name":"2019 European Conference on Mobile Robots (ECMR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115094644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}