2021 European Conference on Mobile Robots (ECMR): Latest Publications

Open-Source Tools for Efficient ROS and ROS2-based 2D Human-Robot Interface Development
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568801
Stefan Fabian, O. Stryk
Abstract: 2D human-robot interfaces (HRI) are a key component of most robotic systems with an (optional) teleoperation component. However, creating such an interface is often cumbersome and time-consuming, since most user interface frameworks require recompilation on each change or the writing of extensive boilerplate code even for simple interfaces. In this paper, we introduce five open-source packages, namely the ros(2)_babel_fish packages, the qml_ros(2)_plugin packages, and the hector_rviz_overlay package. These packages enable the creation of visually appealing end-user or functionality-oriented diagnostic interfaces for ROS- and ROS2-based robots in a simple and quick fashion using the QtWidget or QML user interface framework. Optionally, rendering the interface as an overlay of the 3D scene of the robotics visualization tool RViz enables developers to leverage existing extensive data visualization capabilities.
Citations: 0
Design and Fabrication of a Low-Cost 6 DoF Underwater Vehicle
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568805
Hamza Muzammal, S. Mehdi, M. Hanif, F. Maurelli
Abstract: To perform an underwater mission and observe the environment, a vehicle must be designed with multiple sensors and sufficient structural strength to withstand challenging underwater conditions. This paper discusses the design and fabrication of such a vehicle at a lower cost than its contemporary counterparts on the market. Off-the-shelf thrusters are used to control the vehicle in 6 degrees of freedom. Since GPS and radio signals are unavailable underwater, a tethered Ethernet connection is used for communication between the vehicle and the ground control station. The camera can stream live video to the ground control station for teleoperation or for mapping with a SLAM algorithm. To minimize fabrication cost, commercially available materials were used. The vehicle's uniqueness lies in its simplicity, low cost, and ability to be integrated with a range of sensors depending on the mission requirements.
Citations: 1
Industrial Manometer Detection and Reading for Autonomous Inspection Robots
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568833
Jonas Günther, Martin Oehler, S. Kohlbrecher, O. Stryk
Abstract: Autonomous mobile robots for industrial inspection can reduce the cost of digitalizing existing plants by performing routine inspections autonomously. A frequent task is reading analog gauges to monitor the health of the facility. Automating this process involves capturing image data with a camera sensor and processing it to read the value. Detection algorithms deployed on a mobile robot have to deal with increased uncertainty regarding localization and environmental influences, which imposes increased robustness requirements on detection and reading with respect to viewing angle, lighting, and scale variation. Current approaches based on conventional computer vision require high-quality images or prior knowledge. We address these limitations by leveraging the advances of neural networks in object detection and instance segmentation in a two-stage pipeline. Our method robustly detects and reads manometers without prior knowledge of the object location or exact object type. In our evaluation, we show that our approach can detect and read manometers from a distance of up to 3 m and a viewing angle of up to 60° under different lighting conditions, with needle angle estimation errors of ±2.2°. We publish the validation split of our training dataset for manometer and needle detection at https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/2881.
Citations: 0
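The final step of such a reading pipeline, mapping an estimated needle angle to a physical value, reduces to linear interpolation between the gauge's calibrated end stops. A minimal sketch; the angle convention, end-stop angles, and value range below are illustrative assumptions, not taken from the paper:

```python
def gauge_value(needle_deg, min_deg=-45.0, max_deg=225.0,
                min_val=0.0, max_val=10.0):
    """Map an estimated needle angle (degrees) to a manometer reading
    by linear interpolation between the gauge's end stops.

    The end-stop angles and value range are hypothetical examples of a
    typical 270-degree gauge face; a real deployment would read them
    from a per-gauge calibration."""
    frac = (needle_deg - min_deg) / (max_deg - min_deg)
    frac = min(max(frac, 0.0), 1.0)  # clamp to the physical needle range
    return min_val + frac * (max_val - min_val)
```

With the paper's reported needle angle error of ±2.2°, the resulting reading error on such a 270° face would be about 2.2/270 of the full-scale span (roughly 0.8%).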
A Comparative Assessment of Parcel Box Detection Algorithms for Industrial Applications
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568825
E. Fontana, William Zarotti, Dario Lodi Rizzini
Abstract: Industrial logistics may benefit from object perception for flexible and efficient management of goods. This paper illustrates and experimentally compares two approaches to parcel box detection in depth images for an industrial depalletization task. The model-based method detects clusters in the input point cloud according to curvature and other geometric features and aggregates the candidate objects. The learning-based method relies on the state-of-the-art Mask R-CNN, re-trained on an acquired dataset with missing measurements. The target object poses are estimated through standard geometric registration. Experiments on acquired datasets show the feasibility of both approaches.
Citations: 2
Vision-based Autonomous Crop Row Navigation for Wheeled Mobile Robots using Super-twisting Sliding Mode Control
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568819
Gustavo B. P. Barbosa, Eduardo C. Da Silva, A. C. Leite
Abstract: This work presents a new robust image-based visual servoing (rIBVS) approach for wheeled mobile robots (WMRs) equipped with a single monocular camera to carry out autonomous navigation in row crop fields. We design a robust vision-based controller using the super-twisting algorithm (STA) to stabilize the robot motion in the presence of model inaccuracies caused by imperfect camera calibration and of trajectory perturbations due to different plant distributions and high driving velocities. The rIBVS approach switches between column and row visual primitives extracted from the images, allowing WMRs to execute the navigation task in two phases: crop row reaching and crop row following. To illustrate the effectiveness and feasibility of the proposed control methodology, 3D computer simulations are executed in the ROS-Gazebo simulator with a differential-drive mobile robot (DDMR) navigating autonomously in an ad-hoc row crop agricultural environment.
Citations: 0
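The super-twisting algorithm itself is a standard second-order sliding mode law: u = -k1·sqrt(|s|)·sign(s) + v with v̇ = -k2·sign(s), where s is the sliding variable (in this paper, derived from the image-space visual primitives). A discrete-time sketch under assumed gains and a generic scalar s, not the paper's tuning or sliding surface:

```python
import math

class SuperTwisting:
    """Discrete-time sketch of the super-twisting algorithm (STA).

    u = -k1 * sqrt(|s|) * sign(s) + v,   v_dot = -k2 * sign(s)

    The gains k1, k2 and time step dt are illustrative; the continuous
    sign() term is hidden inside the integrator v, which is what makes
    the resulting control signal u continuous (chattering attenuation)."""
    def __init__(self, k1=1.5, k2=1.0, dt=0.01):
        self.k1, self.k2, self.dt = k1, k2, dt
        self.v = 0.0  # integral of the discontinuous term

    def update(self, s):
        sgn = (s > 0) - (s < 0)
        u = -self.k1 * math.sqrt(abs(s)) * sgn + self.v
        self.v += -self.k2 * sgn * self.dt  # forward-Euler v_dot
        return u
```

Driving a simple integrator plant (ṡ = u) with this controller drives s to a small neighborhood of zero in finite time, which is the property exploited for robust lane/row keeping.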
Understanding Greediness in Map-Predictive Exploration Planning
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568793
Ludvig Ericson, Daniel Duberg, P. Jensfelt
Abstract: In map-predictive exploration planning, the aim is to exploit a priori map information to improve planning for exploration in otherwise unknown environments. The use of map predictions in exploration planning leads to exacerbated greediness, as map predictions allow the planner to defer exploring parts of the environment that have low value, e.g., unfinished corners. This behavior is undesirable, as it leaves holes in the explored space by design. To this end, we propose a scoring function based on inverse covisibility that rewards visiting these low-value parts, resulting in a more cohesive exploration process and preventing excessive greediness in a map-predictive setting. We examine the behavior of a non-greedy map-predictive planner in a bare-bones simulator and answer two principal questions: a) how far beyond explored space should a map predictor predict to aid exploration, i.e., is more better; and b) does shortest-path search as the basis for planning, a popular choice, cause greediness. Finally, we show that by thresholding covisibility, the user can trade off greediness for improved early exploration performance.
Citations: 1
CorAl – Are the point clouds Correctly Aligned?
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568846
Daniel Adolfsson, Martin Magnusson, Qianfang Liao, A. Lilienthal, Henrik Andreasson
Abstract: In robotics perception, numerous tasks rely on point cloud registration. However, there is currently no method that can automatically and reliably detect misaligned point clouds without environment-specific parameters. We propose CorAl, an alignment quality measure and alignment classifier for point cloud pairs, which makes it possible to introspectively assess the performance of registration. CorAl compares the joint and the separate entropy of the two point clouds. The separate entropy provides a measure of the entropy that can be expected to be inherent to the environment; the joint entropy should therefore not be substantially higher if the point clouds are properly aligned. Computing the expected entropy makes the method sensitive even to small alignment errors, which are particularly hard to detect, and applicable in a range of different environments. We found that CorAl is able to detect small alignment errors in previously unseen environments with an accuracy of 95% and achieves a substantial improvement over previous methods.
Citations: 4
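The core quantity can be sketched directly: fit a Gaussian to each point's local neighborhood, take its differential entropy h = ½ ln det(2πe·Σ), and compare the mean entropy of the joined clouds against the clouds taken separately. The following brute-force sketch illustrates that idea; the search radius, regularization, and minimum-neighbor threshold are assumptions, and this is not the authors' implementation:

```python
import numpy as np

def neighborhood_entropy(cloud, radius=0.5):
    """Mean per-point differential entropy of a Gaussian fitted to each
    point's radius-neighborhood: h = 0.5 * log(det(2*pi*e*Cov)).
    Brute-force O(n^2) neighbor search for clarity only."""
    ent = []
    for p in cloud:
        nbrs = cloud[np.linalg.norm(cloud - p, axis=1) < radius]
        if len(nbrs) < 4:          # too few points for a stable covariance
            continue
        cov = np.cov(nbrs.T) + 1e-9 * np.eye(cloud.shape[1])
        ent.append(0.5 * np.log(np.linalg.det(2 * np.pi * np.e * cov)))
    return float(np.mean(ent))

def coral_score(cloud_a, cloud_b):
    """Joint minus separate entropy: near zero when the pair is aligned,
    larger when misaligned, since misalignment inflates the covariance
    of the joint neighborhoods (e.g. two offset copies of a surface)."""
    joint = neighborhood_entropy(np.vstack([cloud_a, cloud_b]))
    separate = 0.5 * (neighborhood_entropy(cloud_a)
                      + neighborhood_entropy(cloud_b))
    return joint - separate
```

Thresholding this difference then yields a simple aligned/misaligned classifier of the kind the paper learns and evaluates.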
S-AvE: Semantic Active Vision Exploration and Mapping of Indoor Environments for Mobile Robots
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568806
José V. Jaramillo, R. Capobianco, Francesco Riccio, D. Nardi
Abstract: In order to operate and to understand human commands, robots must be provided with a knowledge representation integrating both geometric and symbolic knowledge. In the literature, such a representation is referred to as a semantic map, which enables the robot to interpret user commands by grounding them in its sensory observations. However, even though a semantic map is key to enabling cognition and high-level reasoning, building one is a complex challenge due to the need to generalize to various scenarios. As a consequence, commonly used techniques do not always guarantee rich and accurate representations of the environment and of the objects therein. In this paper, we depart from previous approaches by attacking the problem of semantic mapping from a different perspective. While existing approaches mainly focus on generating a reliable map from sensory observations, often collected with a human user teleoperating the mobile platform, we argue that the process of semantic mapping starts at the data-gathering phase and is a combination of both perception and motion. To tackle these issues, we design a new family of approaches to semantic mapping that exploit both active vision and domain knowledge to improve the overall mapping performance with respect to other map-exploration methodologies.
Citations: 4
What is my Robot Doing? Remote Supervision to Support Robots for Older Adults Independent Living: a Field Study
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568839
M. Luperto, M. Romeo, J. Monroy, Alessandro Vuono, Nicola Basilico, J. González, N. A. Borghese
Abstract: In an ageing society, the at-home use of Socially Assistive Robots (SARs) could provide remote monitoring of their users' well-being, together with physical and psychological support. However, private home environments are particularly challenging for SARs due to their unstructured and dynamic nature, which often contributes to robot failures. For this reason, even though several prototypes of SARs for elderly care have been developed, their commercialization and widespread at-home use are yet to be effective. In this paper, we analyze the impact of introducing a novel web-based Monitoring and Logging System (MLS) on SARs' reliability and user acceptance. This monitoring framework, specifically designed for remote supervision and control of SAR-based systems in older adults' apartments, also allows exchanging feedback between caregivers, technicians, and older adults to better explain the behaviour of SAR-based systems. The MLS was developed, tested, and evaluated within the pilot study of the H2020 project MoveCare, where 13 autonomous SARs were deployed in the houses of older adults living alone and remotely monitored for over 180 weeks. The results from this field trial suggest that the use of the MLS during the pilot increased the acceptance of the SAR-based system in the event of failures and anomalies.
Citations: 3
Conversion of depth images into planar laserscans considering obstacle height for collision free 2D robot navigation
2021 European Conference on Mobile Robots (ECMR) Pub Date : 2021-08-01 DOI: 10.1109/ecmr50962.2021.9568795
Stephan Sandfuchs, Moritz P. Heimbach, J. Weber, M. Schmidt
Abstract: Mobile robots have become popular in many application areas over the last few decades. To perceive the environment in which they move autonomously, various sensors such as depth cameras are used. Processing 3D information from depth images is computationally expensive due to the large amount of data. However, for many mobile robots, navigation in a simplified 2D world is sufficient. For this purpose, the depth images of the environment can first be reduced to 2D information in the form of a laserscan line and then processed with algorithms for localization and mapping. This paper improves an existing algorithm that converts depth images into laserscans and tests it on a real robot in a real-world scenario. In contrast to the original algorithm, the improved algorithm considers 3D information such as the height of the robot and of obstacles when creating the laserscan line. This allows a mobile robot to navigate in a simplified 2D world without colliding with obstacles in the real 3D world. Since processing the 3D information is computationally expensive, the algorithm was optimized to run on low-cost single-board computers.
Citations: 2
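The height-aware reduction can be sketched as follows: back-project each depth pixel with a pinhole model, keep only points whose height above the floor lies between a small clearance and the robot's height, and collapse each image column to the minimum planar range. The intrinsics and thresholds below are illustrative assumptions (level, forward-facing camera), a sketch of the idea rather than the paper's optimized implementation:

```python
import numpy as np

def depth_to_scan(depth, fx, fy, cx, cy, cam_height, robot_height):
    """Collapse a depth image (meters, shape HxW) to a planar scan with
    one range per image column.

    Pixels are back-projected with a pinhole model (optical frame:
    +z forward, +y down). Points above the robot's height (overhangs)
    or at/below the floor are discarded, so only obstacles the robot
    could actually collide with shorten a ray."""
    h, w = depth.shape
    us = np.arange(w)
    ranges = np.full(w, np.inf)
    for v in range(h):
        z = depth[v]                       # forward distance per pixel
        y = (v - cy) / fy * z              # downward offset from optical axis
        height = cam_height - y            # height above the floor
        valid = (z > 0) & (height > 0.02) & (height < robot_height)
        x = (us - cx) / fx * z             # lateral offset
        r = np.hypot(x, z)                 # planar (horizontal) range
        ranges[valid] = np.minimum(ranges[valid], r[valid])
    return ranges
```

A naive row-wise minimum would let the floor and overhead structures truncate rays; the height filter is precisely the 3D consideration the paper adds to the conversion.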