Title: Control architecture for a novel Leg-Based Stair-Climbing Wheelchair
Authors: Diego Delgado-Mena, E. Pereira, C. Alén-Cordero, S. Maldonado-Bascón, P. Gil-Jiménez
DOI: https://doi.org/10.1109/ecmr50962.2021.9568794
Venue: 2021 European Conference on Mobile Robots (ECMR)
Abstract: Building on a recently proposed leg-based stair-climbing wheelchair, this work presents the control architecture for the mechanism. The objective is a step-by-step control methodology that defines the positions of the wheels for climbing up and down stairs, which simplifies the control of the seventeen actuators involved in these tasks. The proposed strategy makes the system robust to sensor uncertainties and to small errors in the mechanical parameters, making the structure safer. The control architecture facilitates the practical implementation of the kinematic control and is also useful for optimizing the configuration of the mechanism.
{"title":"Energy Constrained Online Coverage Path Planning with a Lower Bound For the Optimal Performance","authors":"Sedat Dogru, Lino Marques","doi":"10.1109/ecmr50962.2021.9568816","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568816","url":null,"abstract":"Coverage path planning is a problem that is faced on a daily base by different robots varying from indoor cleaning robots to agricultural drones. These platforms are sometimes expected to cover previously unknown areas, which may be too large to cover with a single battery charge, requiring multiple visits to a charging or a tanking station. This paper proposes a new method for this energy constrained coverage path planning problem. The proposed approach derives from contour following, and it provides superior performance compared to the existing work in the literature. Additionally, environment topology independent bounds are derived for the minimum number of charges and energy consumed for the energy constrained coverage path planning problem.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115404461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rendering and Tracking the Directional TSDF: Modeling Surface Orientation for Coherent Maps","authors":"Malte Splietker, Sven Behnke","doi":"10.1109/ecmr50962.2021.9568830","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568830","url":null,"abstract":"Dense real-time tracking and mapping from RGB-D images is an important tool for many robotic applications, such as navigation or grasping. The recently presented Directional Truncated Signed Distance Function (DTSDF) is an augmentation of the regular TSDF and shows potential for more coherent maps and improved tracking performance. In this work, we present methods for rendering depth- and color maps from the DTSDF, making it a true drop-in replacement for the regular TSDF in established trackers. We evaluate and show, that our method increases re-usability of mapped scenes. Furthermore, we add color integration which notably improves color-correctness at adjacent surfaces.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"128 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124242275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Robust Image Alignment for Outdoor Teach-and-Repeat Navigation
Authors: G. Broughton, Pavel Linder, Tomáš Rouček, Tomáš Vintr, T. Krajník
DOI: https://doi.org/10.1109/ecmr50962.2021.9568832
Venue: 2021 European Conference on Mobile Robots (ECMR)
Abstract: Visual Teach-and-Repeat robot navigation suffers from environmental changes over time and struggles in real-world long-term deployments. We propose a robust robot bearing-correction method based on traditional principles, aided by exploiting the abstraction provided by the higher layers of widely available pre-trained Convolutional Neural Networks (CNNs). Our method applies a two-dimensional Discrete Fast Fourier Transform over several convolution filters from higher levels of a CNN to robustly estimate the alignment between two corresponding images. The method also estimates its uncertainty, which is essential for the navigation system to decide how much it can trust the bearing correction. We show that our "learning-free" method is comparable with state-of-the-art methods when the environmental conditions change only slightly, but it outperforms them at night.
{"title":"[Copyright notice]","authors":"","doi":"10.1109/ecmr50962.2021.9568831","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568831","url":null,"abstract":"","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124000746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Feature-based Event Stereo Visual Odometry
Authors: Antea Hadviger, Igor Cvišić, Ivan Marković, Sacha Vrazic, Ivan Petrović
DOI: https://doi.org/10.1109/ecmr50962.2021.9568811
Venue: 2021 European Conference on Mobile Robots (ECMR)
Abstract: Event-based cameras are biologically inspired sensors that output events, i.e., asynchronous pixel-wise brightness changes in the scene. Their high dynamic range and microsecond temporal resolution make them more reliable than standard cameras under challenging illumination and in high-speed scenarios; developing odometry algorithms based solely on event cameras therefore offers exciting new possibilities for autonomous systems and robots. In this paper, we propose a novel stereo visual odometry method for event cameras based on feature detection and matching with careful feature management, while pose estimation is done by minimizing the feature reprojection error. We evaluate the performance of the proposed method on two publicly available datasets: MVSEC sequences captured by an indoor flying drone and DSEC outdoor driving sequences. MVSEC offers accurate ground truth from motion capture, while DSEC does not; to obtain a reference trajectory on the standard camera frames of DSEC, we used our SOFT visual odometry, one of the highest-ranking algorithms on the KITTI scoreboards. We compared our method to ESVO, the first and still the only stereo event odometry method, and show on-par performance on both MVSEC and DSEC sequences. Furthermore, two important advantages of our method over ESVO are that it adapts its tracking frequency to the asynchronous event rate and does not require initialization.
{"title":"Best Axes Composition: Multiple Gyroscopes IMU Sensor Fusion to Reduce Systematic Error","authors":"M. Faizullin, G. Ferrer","doi":"10.1109/ecmr50962.2021.9568800","DOIUrl":"https://doi.org/10.1109/ecmr50962.2021.9568800","url":null,"abstract":"In this paper, we propose an algorithm to combine multiple cheap Inertial Measurement Unit (IMU) sensors to calculate 3D-orientations accurately. Our approach takes into account the inherent and non-negligible systematic error in the gyroscope model and provides a solution based on the error observed during previous instants of time. Our algorithm, the Best Axes Composition (BAC), chooses dynamically the most fitted axes among IMUs to improve the estimation performance. We compare our approach with a probabilistic Multiple IMU (MIMU) approach, and we validate our algorithm in our collected dataset. As a result, it only takes as few as 2 IMUs to significantly improve accuracy, while other MIMU approaches need a higher number of sensors to achieve the same results.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128085416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Scalable and Elastic LiDAR Reconstruction in Complex Environments Through Spatial Analysis
Authors: Yiduo Wang, Milad Ramezani, Matías Mattamala, M. Fallon
DOI: https://doi.org/10.1109/ecmr50962.2021.9568844
Venue: 2021 European Conference on Mobile Robots (ECMR)
Abstract: This paper presents novel strategies for spawning and fusing submaps within an elastic dense 3D reconstruction system. The proposed system uses spatial understanding of the scanned environment to control memory usage growth by fusing overlapping submaps in different ways. This allows the number of submaps and memory consumption to scale with the size of the environment rather than the duration of exploration. By analysing spatial overlap, our system segments distinct spaces, such as rooms and stairwells, on the fly during exploration. Additionally, we present a new mathematical formulation of relative uncertainty between poses to improve the global consistency of the reconstruction. Performance is demonstrated using a multi-floor, multi-room indoor experiment, a large-scale outdoor experiment and simulated datasets. Relative to our baseline, the presented approach demonstrates improved scalability and accuracy.
{"title":"Be your own Benchmark: No-Reference Trajectory Metric on Registered Point Clouds","authors":"A. Kornilova, G. Ferrer","doi":"10.1109/ECMR50962.2021.9568822","DOIUrl":"https://doi.org/10.1109/ECMR50962.2021.9568822","url":null,"abstract":"This paper addresses the problem of assessing trajectory quality in conditions when no ground truth poses are available or when their accuracy is not enough for the specific task — for example, small-scale mapping in outdoor scenes. In our work, we propose a no-reference metric, Mutually Orthogonal Metric (MOM), that estimates the quality of the map from registered point clouds via the trajectory poses. MOM strongly correlates with full-reference trajectory metric Relative Pose Error, making it a trajectory benchmarking tool on setups where 3D sensing technologies are employed. We provide a mathematical foundation for such correlation and confirm it statistically in synthetic environments. Furthermore, since our metric uses a subset of points from mutually orthogonal surfaces, we provide an algorithm for the extraction of such subset and evaluate its performance in synthetic CARLA environment and on KITTI dataset. The code of the proposed metric is publicly available as pip-package.","PeriodicalId":200521,"journal":{"name":"2021 European Conference on Mobile Robots (ECMR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123786055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Active Visual SLAM with Independently Rotating Camera
Authors: Elia Bonetto, Pascal Goldschmid, Michael J. Black, Aamir Ahmad
DOI: https://doi.org/10.1109/ECMR50962.2021.9568791
Venue: 2021 European Conference on Mobile Robots (ECMR)
Abstract: In active Visual SLAM (V-SLAM), a robot relies on the information retrieved by its cameras to control its own movements for autonomous mapping of the environment. Cameras are usually statically linked to the robot's body, limiting the degrees of freedom available for visual information acquisition. In this work, we overcome this problem by introducing and leveraging an independently rotating camera on the robot base. This enables us to continuously control the heading of the camera, obtaining the desired optimal orientation for active V-SLAM without rotating the robot itself. However, this additional degree of freedom introduces additional estimation uncertainties, which need to be accounted for. We do this by extending our robot's state estimate to include the camera state and jointly estimating the uncertainties. We develop our method based on a state-of-the-art active V-SLAM approach for omnidirectional robots and evaluate it through rigorous simulation and real-robot experiments. We obtain more accurate maps, with lower energy consumption, while maintaining the benefits of the active approach with respect to the baseline. We also demonstrate how our method easily generalizes to other non-omnidirectional robotic platforms, which was a limitation of the previous approach. Code and implementation details are provided as open source.