{"title":"Adaptive Selection of Informative Path Planning Strategies via Reinforcement Learning","authors":"Taeyeong Choi, Grzegorz Cielniak","doi":"10.1109/ecmr50962.2021.9568796","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568796","url":null,"abstract":"In our previous work, we designed a systematic policy to prioritize sampling locations to lead significant accuracy improvement in spatial interpolation by using the prediction uncertainty of Gaussian Process Regression (GPR) as \"attraction force\" to deployed robots in path planning. Although the integration with Traveling Salesman Problem (TSP) solvers was also shown to produce relatively short travel distance, we here hypothesise several factors that could decrease the overall prediction precision as well because sub-optimal locations may eventually be included in their paths. To address this issue, in this paper, we first explore \"local planning\" approaches adopting various spatial ranges within which next sampling locations are prioritized to investigate their effects on the prediction performance as well as incurred travel distance. Also, Reinforcement Learning (RL)-based high-level controllers are trained to adaptively produce blended plans from a particular set of local planners to inherit unique strengths from that selection depending on latest prediction states. Our experiments on use cases of temperature monitoring robots demonstrate that the dynamic mixtures of planners can not only generate sophisticated, informative plans that a single planner could not create alone but also ensure significantly reduced travel distances at no cost of prediction reliability without any assist of additional modules for shortest path calculation.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123478322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Road Boundary Estimation using 3D LiDAR with Scanline-wise 1D Deep Feature and Particle Filtering","authors":"Yuta Nakayama, J. Miura","doi":"10.1109/ecmr50962.2021.9568784","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568784","url":null,"abstract":"Recognizing road shape is one of the fundamental functions for outdoor navigation of mobile robots and vehicles. This function is crucial for safe control and used for autonomous navigation combined with global localization using maps or GNSS. This paper describes a method of estimating the 3D structure of road boundaries using a 3D LiDAR with a combination of scanline-wise 1D feature extraction and temporal filtering by particle filter. In outdoor environments, since the road shape changes not on a horizontal plane but three-dimensionally, we model the road boundary shape with a series of 3D segments and estimate its parameters repeatedly with the feature extraction and particle filter. The proposed method is tested in terms of the feature extraction performance and the applicability of autonomous navigation.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114155310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Flexible Framework for Virtual Omnidirectional Vision to Improve Operator Situation Awareness","authors":"M. Oehler, O. Stryk","doi":"10.1109/ECMR50962.2021.9568840","DOIUrl":"https://doi.org/10.1109/ECMR50962.2021.9568840","url":null,"abstract":"During teleoperation of a mobile robot, providing good operator situation awareness is a major concern as a single mistake can lead to mission failure. Camera streams are widely used for teleoperation but offer limited field-of-view. In this paper, we present a flexible framework for virtual projections to increase situation awareness based on a novel method to fuse multiple cameras mounted anywhere on the robot. Moreover, we propose a complementary approach to improve scene understanding by fusing camera images and geometric 3D Lidar data to obtain a colorized point cloud. The implementation on a compact omnidirectional camera reduces system complexity considerably and solves multiple use-cases on a much smaller footprint compared to traditional approaches such as actuated pan-tilt units. Finally, we demonstrate the generality of the approach by application to the multi-camera system of the Boston Dynamics Spot. The software implementation is available as open-source ROS packages on the project page https://tu-darmstadt-ros-pkg.github.io/omnidirectional_vision.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114966733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recalibrating the KITTI Dataset Camera Setup for Improved Odometry Accuracy","authors":"Igor Cvisic, Ivan Marković, Ivan Petrović","doi":"10.1109/ecmr50962.2021.9568821","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568821","url":null,"abstract":"Over the last decade, one of the most relevant public datasets for evaluating odometry accuracy is the KITTI dataset. Beside the quality and rich sensor setup, its success is also due to the online evaluation tool, which enables researchers to bench-mark and compare algorithms. The results are evaluated on the test subset solely, without any knowledge about the ground truth, yielding unbiased, overfit free and therefore relevant validation for robot localization based on cameras, 3D laser or combination of both. However, as any sensor setup, it requires prior calibration and rectified stereo images are provided, introducing dependence on the default calibration parameters. Given that, a natural question arises if a better set of calibration parameters can be found that would yield higher odometry accuracy. In this paper, we propose a new approach for one shot calibration of the KITTI dataset multiple camera setup. The approach yields better calibration parameters, both in the sense of lower calibration reprojection errors and lower visual odometry error. We conducted experiments where we show for three different odometry algorithms, namely SOFT2, ORB-SLAM2 and VISO2, that odometry accuracy is significantly improved with the pro-posed calibration parameters. Moreover, our odometry, SOFT2, in conjunction with the proposed calibration method achieved the highest accuracy on the official KITTI scoreboard with 0.53% translational and 0.0009 deg/m rotational error, outperforming even 3D laser-based methods.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122997921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to plan people-aware trajectories for robot navigation: A genetic algorithm*","authors":"Alberto Bacchin, Gloria Beraldo, E. Menegatti","doi":"10.1109/ecmr50962.2021.9568804","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568804","url":null,"abstract":"Nowadays, one of the emergent challenges in mobile robotics consists of navigating safely and efficiently in dynamic environments populated by people. This paper focuses on the robot’s motion planning by proposing a learning-based method to adjust the robot’s trajectories to people’s movements by respecting the proxemics rules. With this purpose, we design a genetic algorithm to train the navigation stack of ROS during the goal-based navigation while the robot is disturbed by people. We also present a simulation environment based on Gazebo that extends the animated model for emulating a more natural human’s walking. Preliminary results show that our approach is able to plan people-aware robot’s trajectories respecting proxemics limits without worsening the performance in navigation.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123217306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Let Your Robot Go – Challenges of a Decentralized Remote Robot Competition","authors":"Tim Laue, Arne Moos, Patrick Göttsch","doi":"10.1109/ecmr50962.2021.9568845","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568845","url":null,"abstract":"Holding an international robot competition, such as the RoboCup, usually requires many people and their robots to come together in one place for several days. In case of international travel restrictions, such an event cannot take place in its intended way. In this paper, we describe the German Open Replacement Event (GORE), a robot tournament for RoboCup Standard Platform League teams, which was organized in a decentralized way and required remote participation. Furthermore, we provide insights into the specific technical and scientific challenges the participating teams had to face, along with an outlook on implications for future robot competitions.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115276025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust collaborative collision avoidance between robots with nearly symmetric crossing trajectories","authors":"Grimaldo Silva, K. Rekik, J. Crowley","doi":"10.1109/ecmr50962.2021.9568823","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568823","url":null,"abstract":"The growth in both acceptance and usage of mobile robots have given risen to novel challenges in robot navigation. Often, robots that share a space but are unable to communicate are required to safely avoid each other even under sensor noise. Current approaches have often relied on the assumption that collaboration is always done correctly, in practice, sensor noise might lead robots to make avoidance motions that are not mutually beneficial and do not actually decrease the collision risk. Our approach intends to mitigate the negative impact of sensor noise in collaborative collision avoidance of robots. As a consequence, even if robots initially take non-mutually beneficial avoidance motions they would correctly perceive their role in the next decision step.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115710456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hybrid Sparse-Dense Monocular SLAM System for Autonomous Driving","authors":"Louis Gallagher, V. Kumar, S. Yogamani, J. McDonald","doi":"10.1109/ecmr50962.2021.9568797","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568797","url":null,"abstract":"In this paper, we present a system for incrementally reconstructing a dense 3D model of the geometry of an outdoor environment using a single monocular camera attached to a moving vehicle. Dense models provide a rich representation of the environment facilitating higher-level scene understanding, perception, and planning. Our system employs dense depth prediction with a hybrid mapping architecture combining state-of-the-art sparse features and dense fusion-based visual SLAM algorithms within an integrated framework. Our novel contributions include design of hybrid sparse-dense camera tracking and loop closure, and scale estimation improvements in dense depth prediction. We use the motion estimates from the sparse method to overcome the large and variable inter-frame displacement typical of outdoor vehicle scenarios. Our system then registers the live image with the dense model using whole-image alignment. This enables the fusion of the live frame and dense depth prediction into the model. Global consistency and alignment between the sparse and dense models are achieved by applying pose constraints from the sparse method directly within the deformation of the dense model. We provide qualitative and quantitative results for both trajectory estimation and surface reconstruction accuracy, demonstrating competitive performance on the KITTI dataset. Qualitative results of the proposed approach are illustrated in https://youtu.be/Pn2uaVqjskY. Source code for the project is publicly available at the following repository https://github.com/robotvisionmu/DenseMonoSLAM","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115794792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A blind-spot-aware optimization-based planner for safe robot navigation","authors":"Kenny Schlegel, Peter Weissig, P. Protzel","doi":"10.1109/ecmr50962.2021.9568820","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568820","url":null,"abstract":"Safe mobile robot navigation should consider not only collision avoidance with current obstacles but also include non-visible areas (to which we refer as blind spots) and the resulting risk of collision with hidden moving objects (e.g. people). Such capability is important for mobile robots operating in environments shared with humans - for instance a shopping assistant robot in a supermarket. This work aims to extend an existing motion planner for mobile robots (the Time Elastic Band planner) by including blind spots. As a result, the final planner does not only consider static and visible dynamic obstacles, but handles blind spots, too. To identify such blind spots, we define and use critical corners that imply them. Hence, our contributions in this paper are creating a critical corner detector, which operates on laser scan data, and the extension of a factor-graph-based path planner. We evaluate the proposed method standalone and in our simulation environment of a supermarket. It can be seen that the implementation is capable of detecting and dealing with blind spots. Finally, we provide source code for both the detector and the planner extensions.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115197716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Visual Teach and Repeat Navigation for Unmanned Aerial Vehicles","authors":"V. Kozák, Tomás Pivonka, Pavlos Avgoustinakis, Lukás Majer, Miroslav Kulich, L. Preucil, Luis G. Camara","doi":"10.1109/ecmr50962.2021.9568807","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568807","url":null,"abstract":"Vision-based navigation is one of the leading tasks in mobile robotics. It, however, introduces additional challenges in long-term autonomy due to its reliance on stable visual features. As such, visual navigation methods are often sensitive to appearance changes and unreliable in environments with low feature density. We present a teach-and-repeat navigation system for unmanned aerial vehicles (UAVs) equipped with a low-end camera. We use a novel visual place recognition methodology based on high-level CNN features to localize a robot on a previously traversed trajectory and to directly calculate heading corrections for navigation. The developed navigation method is fully vision-based and independent of other sensory information, making it universal and easily transferable. The system has been experimentally verified and evaluated with respect to a state-of-the-art ORB2-TaR navigation system. It showed comparable results in terms of its precision and robustness to environmental changes. In addition, the system was able to safely navigate in environments with low feature density and to reliably solve the wake-up robot problem.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"335 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124304710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}