Latest Publications from the 2019 European Conference on Mobile Robots (ECMR)

Localization of Humans in Warehouse based on Rack Detection
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870913
Karel Kosnar, Gaël Écorchard, L. Preucil
Abstract: This paper proposes a method for human localization in automated warehouses. Because the environment is highly dynamic, with changes both in rack contents and in rack positions, the proposed method uses the racks themselves as landmarks for localization. The fixed structure of a rack's shelves and stands is used to detect the rack regardless of its content. The structure is encoded as a graph, and detection is performed by finding an isomorphism between the rack model and the structure detected in the environment. The relative position is obtained by solving a Perspective-n-Point (PnP) problem, and the global position of the rack is obtained from the warehouse system by reading the marker identifying the rack. Experiments show that the localization has a mean error of 15 cm and is considerably more robust than localization based on the marker alone.
Citations: 1
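The pose-estimation step here recovers the camera-to-rack transform by solving a PnP problem on matched rack features. Below is a minimal sketch of that step using OpenCV's solvePnP; the rack corner coordinates, pixel detections, and camera intrinsics are placeholder values, not data from the paper.

```python
import numpy as np
import cv2

# Hypothetical 3D coordinates of four rack corner points in the rack frame (meters).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [1.2, 0.0, 0.0],
    [1.2, 0.0, 2.0],
    [0.0, 0.0, 2.0],
], dtype=np.float64)

# Matching 2D detections in the image (pixels); placeholder values.
image_points = np.array([
    [310.0, 420.0],
    [560.0, 415.0],
    [555.0, 120.0],
    [315.0, 125.0],
], dtype=np.float64)

# Assumed pinhole intrinsics; a real system would use the robot camera's calibration.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation taking rack-frame points to the camera frame
    print("camera position in rack frame:", (-R.T @ tvec).ravel())
```

Composing this relative pose with the rack's global pose from the warehouse system yields the global camera pose, which is the chaining the abstract describes.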
Time-varying Pedestrian Flow Models for Service Robots
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870909
Tomáš Vintr, Sergi Molina Mellado, Ransalu Senanayake, G. Broughton, Zhi Yan, Jirí Ulrich, T. Kucner, Chittaranjan Srinivas Swaminathan, Filip Majer, M. Stachová, A. Lilienthal, T. Krajník
Abstract: We present a human-centric spatiotemporal model for service robots operating in densely populated environments over long periods of time. The method integrates observations of pedestrians made by a mobile robot at different locations and times into a memory-efficient model that represents the spatial layout of natural pedestrian flows and how they change over time. To represent temporal variations of the observed flows, our method does not model time in a linear fashion but by several dimensions wrapped into themselves. This representation of time can capture long-term (days to weeks) periodic patterns of people's routines and habits. Knowledge of these patterns allows long-term prediction of future human presence and walking directions, which can support mobile robot navigation in human-populated environments. Using datasets gathered over several weeks, we compare the model to state-of-the-art methods for pedestrian flow modelling.
Citations: 14
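The key modelling idea is that time is not placed on a linear axis but represented by several dimensions "wrapped into themselves", i.e. each candidate period is projected onto a circle. A minimal sketch of such a wrapped-time feature map follows; the choice of periods (one day, one week) and the feature layout are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def wrapped_time_features(t_seconds, periods=(86400.0, 604800.0)):
    """Map linear timestamps onto circles, one per assumed period
    (defaults: one day, one week), so that e.g. 23:59 and 00:01
    end up close in feature space despite being far apart linearly."""
    t = np.asarray(t_seconds, dtype=np.float64)
    feats = []
    for T in periods:
        phase = 2.0 * np.pi * (t % T) / T
        feats.append(np.cos(phase))
        feats.append(np.sin(phase))
    return np.stack(feats, axis=-1)

# One minute before vs. one minute after midnight, on the daily circle only:
f = wrapped_time_features([86340.0, 60.0], periods=(86400.0,))
print(np.linalg.norm(f[0] - f[1]))  # tiny distance, although ~24 h apart linearly
```

Any regressor trained on these circular features inherits the periodic structure, which is what enables the long-term presence predictions the abstract mentions.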
Campus Guide: A Lidar-based Mobile Robot
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870916
Minghao Liu, Zhixing Hou, Zezhou Sun, Ning Yin, Hang Yang, Ying Wang, Zhiqiang Chu, Hui Kong
Abstract: A large variety of mobile robots has been developed to demonstrate navigation capabilities on rural roads or highways, but little work focuses on autonomous navigation in densely populated areas using only a 16-beam LiDAR. In this paper, we present an autonomous navigation system for a mobile robot intended for tasks such as delivery, cleaning, and surveillance in industrial areas and on campuses, where the environment is unstructured and dynamic. Perception of the environment relies on the 16-beam LiDAR installed on top of the robot. The proposed system is composed of three components: a LiDAR-based mapping and re-localization module, a traversable-path and obstacle-detection module, and a path-planning and trajectory-tracking module. We tested the system on a real mobile robot that runs autonomously in a complex campus environment.
Citations: 4
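The abstract names a path-planning and trajectory-tracking module but not the algorithm behind it. As a generic illustration of the tracking stage only, here is a minimal pure-pursuit steering computation; the lookahead distance, fixed forward speed, and path representation are all assumptions of the sketch, and the paper may use a different tracker entirely.

```python
import numpy as np

def pure_pursuit_cmd(pose, path, lookahead=1.0, v=0.5):
    """Return (v, omega) steering a differential-drive robot toward the first
    path point at least `lookahead` away. Illustrative only; the paper does
    not specify its trajectory-tracking algorithm."""
    x, y, theta = pose
    d = np.hypot(path[:, 0] - x, path[:, 1] - y)
    idx = np.argmax(d >= lookahead) if np.any(d >= lookahead) else len(path) - 1
    gx, gy = path[idx]
    # express the goal point in the robot frame
    dx, dy = gx - x, gy - y
    lx = np.cos(theta) * dx + np.sin(theta) * dy
    ly = -np.sin(theta) * dx + np.cos(theta) * dy
    L = np.hypot(lx, ly)
    curvature = 2.0 * ly / (L * L)  # classic pure-pursuit curvature formula
    return v, v * curvature

path = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.5], [3.0, 1.0]])
print(pure_pursuit_cmd((0.0, 0.0, 0.0), path))
```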
Sensor-based algorithm for collision-free avoidance of mobile robots in complex dynamic environments
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870344
Dimitri Leca, V. Cadenat, T. Sentenac
Abstract: This paper deals with the navigation of unmanned vehicles through poorly known environments cluttered with static and dynamic obstacles. The robot is equipped with a LiDAR providing a scan of the surroundings and with classical dedicated localization sensors (odometry, IMU). The proposed navigation strategy relies on: (i) a new concept called the Enhanced Laser Scan (ELS), built from the current laser scan by adding virtual points along the predicted trajectories of the obstacles; (ii) two sensor-based controllers, one to reach the goal and one to avoid obstacles, which, by relying on the richer information provided by the ELS, can anticipate and safely avoid both static and moving obstacles; and (iii) a high-level decision process that selects the sense of motion (SOM) around an obstacle and reassesses it when needed.
Citations: 1
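The ELS augments the current scan with virtual points placed along predicted obstacle trajectories. A minimal sketch of that idea under a constant-velocity motion assumption follows; the prediction horizon, time step, and point-cloud scan format are assumptions of the sketch, not the authors' exact construction.

```python
import numpy as np

def enhanced_laser_scan(scan_xy, obstacles, horizon=1.0, dt=0.2):
    """Append virtual points along constant-velocity predictions of tracked
    obstacles to the current scan (Nx2 array of x,y hits in the robot frame).
    `obstacles` is a list of (position(2,), velocity(2,)) tuples."""
    virtual = []
    for p, v in obstacles:
        for t in np.arange(dt, horizon + 1e-9, dt):
            virtual.append(p + v * t)  # predicted future obstacle position
    if not virtual:
        return scan_xy
    return np.vstack([scan_xy, np.array(virtual)])

scan = np.array([[2.0, 0.0], [1.8, 0.5]])                      # current hits
moving = [(np.array([1.0, -1.0]), np.array([0.0, 0.5]))]       # tracked obstacle
print(enhanced_laser_scan(scan, moving))
```

Because the downstream avoidance controller consumes the augmented scan like any other scan, it reacts to where a moving obstacle will be rather than where it is, which is the anticipation effect the abstract claims.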
Place Recognition Based on Planar Surfaces Using Multiple RGB-D Images Taken From the Same Position
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870915
R. Cupec, D. Filko, E. Nyarko
Abstract: This paper considers indoor place recognition based on matching planar surfaces and straight edges extracted from depth images obtained by an RGB-D camera. The idea of using planar surfaces as landmarks for robot localization has been investigated before; here, we address the advantage of using multiple RGB-D images acquired from the same viewpoint by a camera mounted on a pan-tilt head. This simple, straightforward way of expanding the field of view of a standard RGB-D camera allows 3D models of the observed place to be built that capture the relative positions of geometric features which do not fit within a single camera field of view. A high recognition rate is achieved, indicating the practical applicability of the investigated approach. A publicly available dataset for evaluating place recognition methods is created; with it, one can test the ability to recognize places from viewpoints differing from those used to build the model, as well as robustness to scene and lighting changes.
Citations: 2
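The approach matches planar surfaces extracted from depth images; the abstract does not state the extraction method, so as an illustration here is iterative RANSAC plane segmentation on a point cloud using Open3D. The thresholds, the peel-off loop, and the input file are assumptions of the sketch.

```python
import open3d as o3d

def extract_planes(pcd, max_planes=4, dist_thresh=0.02, min_inliers=500):
    """Iteratively segment dominant planes from an RGB-D point cloud with
    RANSAC; returns a list of (plane_model, inlier_cloud) landmark pairs."""
    planes = []
    rest = pcd
    for _ in range(max_planes):
        if len(rest.points) < min_inliers:
            break
        model, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                            ransac_n=3, num_iterations=1000)
        if len(inliers) < min_inliers:
            break
        planes.append((model, rest.select_by_index(inliers)))
        rest = rest.select_by_index(inliers, invert=True)  # remove found plane
    return planes

# Usage with a depth image converted to a point cloud (placeholder file name):
# pcd = o3d.io.read_point_cloud("scene.pcd")
# for (a, b, c, d), patch in extract_planes(pcd):
#     print("plane ax+by+cz+d=0:", a, b, c, d, "inliers:", len(patch.points))
```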
Semantic Interaction in Augmented Reality Environments for Microsoft HoloLens
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870937
Peer Schütt, Max Schwarz, Sven Behnke
Abstract: Augmented reality is a promising technique for human-machine interaction. Especially in robotics, which always considers systems within their environment, it is highly beneficial to display visualizations and receive user input directly in that environment. We explore this idea using the Microsoft HoloLens, with which we capture indoor environments and display interaction cues for known object classes. The 3D mesh recorded by the HoloLens is annotated online, as the user moves, with semantic classes using a projective approach, which lets us employ a state-of-the-art 2D semantic segmentation method. The results are fused onto the mesh, prominent object segments are identified and displayed to the user in 3D, and the user can then trigger actions by gesturing at an object. We present qualitative results and analyze the accuracy and performance of our method in detail on an indoor dataset.
Citations: 12
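The projective annotation step transfers 2D segmentation labels onto 3D mesh vertices by projecting each vertex into the camera image. A minimal pinhole-projection sketch follows; the intrinsics, the world-to-camera pose convention, and labeling by simple overwrite (the paper fuses labels over time) are assumptions of the sketch.

```python
import numpy as np

def project_labels(vertices, label_img, K, T_cam_world):
    """Assign each mesh vertex the semantic label of the pixel it projects to.
    vertices: (N,3) world coordinates; label_img: (H,W) integer class ids;
    K: 3x3 intrinsics; T_cam_world: 4x4 world-to-camera transform."""
    H, W = label_img.shape
    n = len(vertices)
    v_h = np.hstack([vertices, np.ones((n, 1))])
    cam = (T_cam_world @ v_h.T).T[:, :3]            # vertices in camera frame
    in_front = cam[:, 2] > 1e-6
    z = np.maximum(cam[:, 2:3], 1e-6)               # safe perspective divide
    uv = (K @ cam.T).T[:, :2] / z
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    labels = np.full(n, -1, dtype=int)              # -1 marks unobserved
    labels[valid] = label_img[v[valid], u[valid]]
    return labels
```

Running this per frame and accumulating per-vertex label votes gives the online fusion onto the mesh that the abstract describes, without needing a 3D segmentation network.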
Spatio-Semantic ConvNet-Based Visual Place Recognition
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870948
Luis G. Camara, L. Preucil
Abstract: We present a visual place recognition system that follows the two-stage format common to image retrieval pipelines. The system encodes images of places using the activations of different layers of a pre-trained, off-the-shelf VGG16 convolutional neural network (CNN). In the first stage, given a query image of a place, a number of top candidate images are retrieved from a previously stored database of places. In the second stage, we exhaustively compare the query image against these candidates by encoding semantic and spatial information in the form of CNN features. Our approach outperforms state-of-the-art visual place recognition methods by a large margin on five of the most commonly used benchmark datasets. The performance gain is especially notable on the most challenging datasets, with more than a twofold recognition improvement over the most recently published work.
Citations: 18
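The first stage is a standard global-descriptor retrieval: database images are encoded with activations of a pre-trained VGG16 and the top-k nearest descriptors are returned for a query. A minimal sketch follows; the pooling choice (global average over the last conv block) and cosine similarity are assumptions here, and the paper's second-stage spatial-semantic rerank is omitted.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Off-the-shelf VGG16; only the convolutional feature extractor is used.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

@torch.no_grad()
def describe(batch_bchw):
    """Global-average-pool VGG16 conv activations into one L2-normalized
    512-d descriptor per image. The pooling choice is a sketch assumption."""
    fmap = vgg(batch_bchw)            # (B, 512, H', W')
    desc = fmap.mean(dim=(2, 3))      # (B, 512)
    return F.normalize(desc, dim=1)

def top_k(query_desc, db_descs, k=10):
    """Cosine-similarity retrieval of the k best database candidates."""
    sims = (db_descs @ query_desc.T).squeeze(1)   # (N,)
    return torch.topk(sims, k=min(k, len(db_descs))).indices

# db = describe(database_images)                 # preprocessed (N,3,H,W) batch
# candidates = top_k(describe(query_img[None]), db)
```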
Tools for Visualizing, Annotating and Storing Triangle Meshes in ROS and RViz
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870953
Sebastian Pütz, T. Wiemann, J. Hertzberg
Abstract: Polygonal maps for robotic applications are becoming increasingly popular but are currently not effectively supported in the Robot Operating System (ROS). In this paper we introduce the Mesh Tools package, consisting of message definitions, RViz plugins, and a persistence layer that make the benefits of annotated polygonal maps available in ROS. These tools allow such maps to be published, edited, and inspected within the existing ROS context, and the persistence layer efficiently loads and stores large mesh maps. As a proof of concept, we discuss two application areas: labeling triangle clusters for semantic mapping, and robot navigation on triangle meshes in a typical outdoor environment by integrating our tools into an existing navigation stack.
Citations: 8
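For contrast with the package's dedicated mesh messages, the stock ROS way to get a triangle mesh into RViz is a visualization_msgs/Marker of type TRIANGLE_LIST, which carries no annotations and duplicates shared vertices (three points per triangle). A minimal rospy sketch of that baseline follows; the topic name and frame are placeholders, and this is the workaround the paper improves upon, not the Mesh Tools API.

```python
import rospy
from visualization_msgs.msg import Marker
from geometry_msgs.msg import Point

rospy.init_node("mesh_marker_demo")
pub = rospy.Publisher("mesh_marker", Marker, queue_size=1, latch=True)

m = Marker()
m.header.frame_id = "map"            # placeholder frame
m.type = Marker.TRIANGLE_LIST
m.action = Marker.ADD
m.scale.x = m.scale.y = m.scale.z = 1.0
m.color.r, m.color.g, m.color.b, m.color.a = 0.2, 0.8, 0.2, 1.0
m.pose.orientation.w = 1.0
# One triangle; every triangle needs its own three points, even shared ones.
m.points = [Point(0, 0, 0), Point(1, 0, 0), Point(0, 1, 0)]

m.header.stamp = rospy.Time.now()
pub.publish(m)
rospy.sleep(1.0)  # give the latched message time to reach RViz
```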
Global Localization on OpenStreetMap Using 4-bit Semantic Descriptors
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870918
Fan Yan, O. Vysotska, C. Stachniss
Abstract: Localization is an essential capability of mobile vehicles such as robots or autonomous cars. Localization systems that do not rely on GNSS typically require a map of the environment against which local sensor readings are compared, and building such a model usually demands an explicit mapping phase to record sensor data in the environment. In this paper, we investigate localizing a mobile vehicle equipped with a 3D LiDAR scanner on urban roads without mapping the environment beforehand. We propose an approach that builds upon publicly available map information from OpenStreetMap and turns it into a compact map representation suitable for Monte Carlo localization. The map stores only a tiny 4-bit descriptor per location, yet is still able to globally localize and track a vehicle. We implemented our approach and thoroughly tested it on real-world data from the KITTI datasets. The experiments presented in this paper suggest that the vehicle pose can be estimated effectively using only OpenStreetMap data.
Citations: 37
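A 4-bit descriptor can encode four binary observations per map location. The abstract does not give the bit semantics, so the flags below (buildings to the left/right, an intersection, road ahead) are purely illustrative. The sketch packs the flags into a nibble and scores particles by Hamming agreement, as a Monte Carlo measurement model might; the weighting exponent is a tuning assumption.

```python
def pack_descriptor(building_left, building_right, intersection, road_ahead):
    """Pack four illustrative binary cues into a 4-bit descriptor (one nibble).
    The actual bit semantics used in the paper may differ."""
    return (building_left | (building_right << 1)
            | (intersection << 2) | (road_ahead << 3))

def measurement_weight(observed, map_descriptor):
    """Particle weight from Hamming agreement of two 4-bit descriptors."""
    agreement = 4 - bin((observed ^ map_descriptor) & 0xF).count("1")
    return (agreement / 4.0) ** 2  # sharpen the likelihood; exponent assumed

obs = pack_descriptor(1, 0, 0, 1)
# Weight of this observation against every possible 4-bit map descriptor:
print([round(measurement_weight(obs, d), 2) for d in range(16)])
```

With only 16 possible values per location, the map stays tiny, which is exactly what makes storing a descriptor for every location of a city-scale OpenStreetMap extract feasible.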
On Autonomous Spatial Exploration with Small Hexapod Walking Robot using Tracking Camera Intel RealSense T265
2019 European Conference on Mobile Robots (ECMR) · Pub Date: 2019-09-01 · DOI: 10.1109/ECMR.2019.8870968
J. Bayer, J. Faigl
Abstract: In this paper, we report on combining a commercially available off-the-shelf embedded visual localization system with an RGB-D camera for autonomous exploration by a small hexapod walking robot. Since a multi-legged walking robot can traverse rough terrain, the addressed exploration problem is to create a map of an unknown environment while simultaneously assessing the traversability of the explored terrain, so that the next navigational waypoints can be reached efficiently and safely. The system is intended to run onboard small multi-legged robots, so its design focuses on computationally efficient approaches and relatively lightweight components. We therefore take advantage of the recently introduced Intel RealSense T265 tracking camera and Intel RealSense D435 RGB-D camera, deployed on our hexapod walking robot with adaptive locomotion control. Together with the proposed computationally efficient data representation and traversability assessment, the system supports onboard mapping and online decision-making within the exploration strategy, even on a platform with low computational capabilities. Based on the reported experimental evaluation of the tracking camera, the system provides sufficiently accurate localization, and the robot has explored indoor and outdoor environments fully autonomously.
Citations: 32
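The exploration strategy must repeatedly pick the next waypoint in a partially known map. The abstract does not name the strategy, so as a generic illustration here is frontier detection on a 2D occupancy grid (free cells adjacent to unknown cells), a common waypoint source in exploration systems; the grid encoding and cell values are assumptions of the sketch.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Return (row, col) cells that are free and 4-adjacent to unknown space.
    Generic frontier-based exploration; not necessarily the paper's strategy."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= i < rows and 0 <= j < cols and grid[i, j] == UNKNOWN
                   for i, j in neighbors):
                frontiers.append((r, c))
    return frontiers

grid = np.full((5, 5), UNKNOWN)
grid[2, 0:3] = FREE          # a sensed corridor
grid[2, 3] = OCCUPIED        # a wall at its end
print(find_frontiers(grid))  # free cells bordering unexplored space
```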