2019 European Conference on Mobile Robots (ECMR): Latest Publications

Overview of a Robot for a Neuromuscular Training – RoboTrainer
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-09-01. DOI: 10.1109/ECMR.2019.8870955
Denis Stogl, B. Hein, M. Mende
Abstract: The shortage of caregivers in aging western societies has motivated many research groups to develop smart assistive devices for rehabilitation and gait assistance. Such devices are designed for use by elderly people or persons with disabilities. In our previous work we presented a smart walker that trains persons with mild cognitive impairment to mobilize their cognitive reserves through motor activation. In this work we present the second prototype of the robot-based device for neuromuscular training called RoboTrainer. Compared to the previous device, RoboTrainer is mechanically adaptable to its user regarding the support area and the height of the handles. It uses adaptive control for more intuitive interaction and has additional sensors for observing the state of the user and obstacles in the environment. The presented device is currently in the assembly phase, with all mechatronic modules tested and configured. In this paper we share the design decisions and concepts behind RoboTrainer and discuss its mechatronic and software components in detail by comparing them to similar state-of-the-art devices. The observations in this paper are valuable input for the design of future smart walkers.
Citations: 1
Long-Horizon Active SLAM system for multi-agent coordinated exploration
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-09-01. DOI: 10.1109/ECMR.2019.8870952
M. Ossenkopf, Gastón I. Castro, Facundo Pessacg, K. Geihs, P. Cristóforis
Abstract: Efficiently exploring an unknown environment with several autonomous agents is a challenging task. In this work we propose a multi-agent Active SLAM method that evaluates a long planning horizon of actions and performs exploration while keeping estimation uncertainties bounded. Candidate actions are generated using a variant of the Rapidly-exploring Random Tree approach (RRT*), followed by a joint entropy minimization to select a path. Entropy estimation is performed in two stages: a short-horizon evaluation is carried out using exhaustive filter updates, while entropy over long horizons is approximated by considering reductions from predicted loop closures between robot trajectories. We pursue a fully decentralized exploration approach to cope with typical uncertainties in multi-agent coordination. We performed simulations of decentralized exploration planning that both adapts dynamically to new situations and accounts for long-horizon plans.
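The core selection step, picking the candidate path whose predicted posterior has the lowest entropy, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the function names and the example covariances are hypothetical, and it assumes Gaussian pose uncertainty so that entropy reduces to a log-determinant of the covariance.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (in nats) of a Gaussian with covariance `cov`."""
    n = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

def select_path(candidate_covs):
    """Return the index of the candidate path whose predicted
    posterior covariance has the lowest entropy."""
    entropies = [gaussian_entropy(c) for c in candidate_covs]
    return int(np.argmin(entropies))

# Hypothetical predicted covariances for three candidate paths:
covs = [np.diag([0.5, 0.5, 0.1]),   # path 0: moderate uncertainty
        np.diag([0.1, 0.1, 0.05]),  # path 1: a predicted loop closure shrinks covariance
        np.diag([1.0, 1.2, 0.3])]   # path 2: pure exploration, high uncertainty
best = select_path(covs)  # -> 1
```

The paper approximates long-horizon entropy via predicted loop closures rather than exhaustive filter updates; in this sketch that effect is baked into the hypothetical covariances.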
Citations: 5
6DoF Pose-Estimation Pipeline for Texture-less Industrial Components in Bin Picking Applications
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-09-01. DOI: 10.1109/ECMR.2019.8870920
Andreas Blank, M. Hiller, Siyi Zhang, Alexander Leser, M. Metzner, M. Lieret, J. Thielecke, J. Franke
Abstract: Over the next few years, autonomous robots and functionalities are expected to gain increased importance on the shop floor. Perception and the derivation of autonomous behavior are of crucial importance in this context. We present a combined object recognition and pose estimation pipeline that generates pose estimates with six degrees of freedom (6DoF) for bin picking, specifically targeting challenging scenarios with texture-less, metallic parts in industrial environments. The pipeline is based on open-source algorithms, combining Convolutional Neural Networks (CNNs) and feature-matching methods to create an effective 6DoF pose estimate. We evaluate our approach on several industrial components using an articulated-arm robot to guarantee a high level of comparability between measurement runs. We further quantify the results using known error metrics for pose estimation, compare the results to established approaches, and provide statistical insight into the achieved outcomes to assess robustness and reliability.
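One widely used pose error metric of the kind the abstract refers to is the Average Distance of Model Points (ADD). The abstract does not say which metrics the authors used, so the following is a generic sketch with made-up toy values:

```python
import numpy as np

def add_error(R_est, t_est, R_gt, t_gt, model_points):
    """Average Distance of Model Points (ADD): mean Euclidean distance
    between the model points transformed by the estimated pose and by
    the ground-truth pose."""
    est = model_points @ R_est.T + t_est
    gt = model_points @ R_gt.T + t_gt
    return float(np.mean(np.linalg.norm(est - gt, axis=1)))

# Toy example: identical rotation, translation offset of 2 units along x.
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
err = add_error(np.eye(3), np.array([2.0, 0.0, 0.0]),
                np.eye(3), np.zeros(3), pts)  # -> 2.0
```

A pose estimate is typically counted as correct when the ADD falls below a threshold proportional to the object's diameter.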
Citations: 11
Real-time Person Orientation Estimation using Colored Pointclouds
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-09-01. DOI: 10.1109/ECMR.2019.8870914
Tim Wengefeld, Benjamin Lewandowski, Daniel Seichter, Lennard Pfennig, H. Groß
Abstract: Robustly estimating the orientations of people is a crucial precondition for a wide range of applications. Especially for autonomous systems operating in populated environments, the orientation of a person provides valuable information that can increase their acceptance. Given people's orientations, mobile systems can apply navigation strategies that take proxemics into account or approach people in a human-like manner to perform human-robot interaction (HRI) tasks. In this paper, we present an approach for person orientation estimation based on performant features extracted from colored point clouds, formerly used for a two-class person attribute classification. The classification approach has been extended to the continuous domain while treating orientation estimation in real time. We compare the performance of orientation estimation treated as a multi-class problem and as a regression problem. The proposed approach achieves a mean angular error (MAE) of 15.4° at 14.3 ms execution time and can be tuned further to 12.2° MAE at 79.8 ms execution time. This competes with the accuracy of state-of-the-art and even deep-learning-based skeleton estimation approaches while retaining real-time capability on a standard CPU.
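The mean angular error (MAE) reported above has to account for the 360° wraparound, otherwise predictions near 0°/360° are penalized unfairly. A minimal sketch of that computation (function name and toy values are my own, not from the paper):

```python
import numpy as np

def mean_angular_error(pred_deg, gt_deg):
    """Mean absolute angular error in degrees, accounting for the
    360-degree wraparound: the error between 359 and 1 is 2, not 358."""
    diff = np.abs(np.asarray(pred_deg) - np.asarray(gt_deg)) % 360.0
    return float(np.mean(np.minimum(diff, 360.0 - diff)))

mae = mean_angular_error([359.0, 10.0], [1.0, 350.0])  # -> (2 + 20) / 2 = 11.0
```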
Citations: 10
An Integral-Model Predictive Controller with Finite Memory for Trajectory Tracking
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-09-01. DOI: 10.1109/ECMR.2019.8870911
C. U. Doğruer
Abstract: In this paper, an integral-model predictive control (i-MPC) scheme with finite memory is proposed to track a time-varying signal. It is shown that with i-MPC the persistent steady-state error can be made smaller. To investigate its performance, i-MPC was used to steer a robot along a reference path. The time-varying signal tracking performance and convergence characteristics of the i-MPC scheme are shown to be better than those of a regular model predictive controller and of a regular model predictive controller with an integral action.
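The "finite memory" idea, summing only a sliding window of recent tracking errors rather than the whole history, can be sketched in isolation. The abstract gives no implementation details, so the class below is a generic illustration with hypothetical names and gains, not the paper's controller:

```python
from collections import deque

class FiniteMemoryIntegral:
    """Finite-memory integral term: only the last `memory` tracking
    errors are summed, which bounds the integral contribution and
    avoids wind-up from stale errors."""
    def __init__(self, memory, gain):
        self.errors = deque(maxlen=memory)  # old errors fall out automatically
        self.gain = gain

    def update(self, error):
        self.errors.append(error)
        return self.gain * sum(self.errors)

integ = FiniteMemoryIntegral(memory=3, gain=0.5)
out = [integ.update(e) for e in [1.0, 1.0, 1.0, 1.0]]
# windows: [1], [1,1], [1,1,1], [1,1,1] -> 0.5, 1.0, 1.5, 1.5
```

Note how the output saturates once the window is full, in contrast to an unbounded integral, which would keep growing.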
Citations: 0
Ship Hull Repair Using a Swarm of Autonomous Underwater Robots: A Self-Assembly Algorithm
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-09-01. DOI: 10.1109/ECMR.2019.8870910
Matthew Haire, Xu Xu, L. Alboul, J. Penders, Hongwei Zhang
Abstract: When ships suffer hull damage at sea, quick and effective repairs are vital. In these scenarios, where even minutes make a substantial difference, repair crews need effective solutions suited to modern challenges. In this paper, we propose a self-assembly algorithm for a homogeneous swarm of autonomous underwater robots that aggregate at a hull breach and use their bodies to form a patch of appropriate size to cover the hole. Our approach is inspired by existing modular robot technologies and techniques, which are used to justify the feasibility of the proposed system. We test the ability of the agents to form a patch for various breach sizes and locations and investigate the effect of varying population density. The system is verified in the two-dimensional NetLogo simulation environment, and we show how system performance can be quantified in relation to the sizes of the breach and the swarm. The methodology and simulation results illustrate that the swarm-robot approach presented in this paper is an important contribution to the emergency ship hull repair scenario and compares advantageously against traditional shoring methods. We conclude by suggesting how the approach may be extended to a three-dimensional domain to aid real-time implementation in the future.
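The covering idea, agents positioning their bodies over breach cells, can be illustrated with a centralized greedy assignment. To be clear, this is only a toy sketch under my own assumptions: the paper's algorithm is decentralized and simulated in NetLogo, while the function and coordinates below are invented for illustration.

```python
import numpy as np

def assign_patch(agents, breach_cells):
    """Greedy sketch of patch formation: each breach cell is claimed by
    the nearest still-unassigned agent, so the swarm jointly covers the
    hole. (Illustrative only; the paper's method is decentralized.)"""
    agents = [np.asarray(a, dtype=float) for a in agents]
    free = set(range(len(agents)))
    assignment = {}
    for cell in breach_cells:
        cell = np.asarray(cell, dtype=float)
        best = min(free, key=lambda i: np.linalg.norm(agents[i] - cell))
        assignment[tuple(cell)] = best
        free.remove(best)
    return assignment

# Three agents, a breach spanning two cells:
agents = [(0.0, 0.0), (5.0, 5.0), (9.0, 0.0)]
breach = [(4.0, 5.0), (8.0, 1.0)]
plan = assign_patch(agents, breach)  # cell (4,5) -> agent 1, cell (8,1) -> agent 2
```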
Citations: 4
Towards combining a neocortex model with entorhinal grid cells for mobile robot localization
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-09-01. DOI: 10.1109/ECMR.2019.8870939
Stefan Schubert, Peer Neubert, P. Protzel
Abstract: Motion and navigation are fundamental abilities of all terrestrial animals, essential for foraging, reproduction, and, more generally, survival. Navigation strategies range from simple visual homing in ants to more complex and cognitively demanding techniques in mammals. Many mammal species use several specialized cell types in the hippocampus and the entorhinal cortex to represent space in the brain, such as head direction cells to encode orientation and grid cells to keep track of position. In our recent work, we presented MCN, an algorithm inspired by working principles of the human neocortex for the navigational subtask of visual place recognition. MCN decides, based merely on camera data without odometry, whether a currently visited place has been seen in the past. In this work, we ask whether we can combine our neocortex-inspired model with entorhinal cortex cells for space representation, in order to exploit additional metric data such as odometry. We believe that combining bio-inspired techniques could one day help create a biologically plausible and more robust navigation system, as found in animals. In this paper, we introduce our neocortex-inspired algorithm MCN and two cell types of the entorhinal cortex, show how these concepts can be combined to perform visual place recognition, and provide proof-of-concept experiments with a mobile robot to demonstrate the performance of the proposed system.
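Neocortex-inspired models of this kind commonly represent places as sparse binary vectors and match them by bit overlap. The abstract does not describe MCN's internals, so the following is a generic sketch of overlap-based place matching with made-up descriptors:

```python
import numpy as np

def sdr_overlap(a, b):
    """Overlap (number of shared active bits) between two sparse binary
    vectors, a common similarity measure in neocortex-inspired models."""
    return int(np.sum(np.logical_and(a, b)))

def best_match(query, stored):
    """Return the index of the stored place descriptor with the largest
    overlap with the query, i.e. the most likely revisited place."""
    overlaps = [sdr_overlap(query, s) for s in stored]
    return int(np.argmax(overlaps)), max(overlaps)

# Hypothetical 16-bit place descriptors with ~25% sparsity:
db = [np.array([1,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0]),
      np.array([0,1,0,0,0,1,0,0,1,0,0,0,0,0,1,0])]
q  = np.array([1,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0])
idx, score = best_match(q, db)  # -> (0, 3): three shared active bits
```

A loop closure would be declared only when the best overlap exceeds a threshold; grid-cell-style odometry integration would then constrain which stored places are plausible candidates.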
Citations: 5
Object-RPE: Dense 3D Reconstruction and Pose Estimation with Convolutional Neural Networks for Warehouse Robots
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-08-22. DOI: 10.1109/ECMR.2019.8870927
Dinh-Cuong Hoang, Todor Stoyanov, A. Lilienthal
Abstract: We present a system for accurate 3D instance-aware semantic reconstruction and 6D pose estimation using an RGB-D camera. Our framework couples convolutional neural networks (CNNs) with a state-of-the-art dense Simultaneous Localisation and Mapping (SLAM) system, ElasticFusion, to achieve both high-quality semantic reconstruction and robust 6D pose estimation for relevant objects. The method extends a high-quality instance-aware semantic 3D mapping system from previous work [1] by adding a 6D object pose estimator. While the main trend in CNN-based 6D pose estimation has been to infer an object's position and orientation from single views of the scene, our approach performs pose estimation from multiple viewpoints, under the conjecture that combining multiple predictions can improve the robustness of an object detection system. The resulting system produces high-quality object-aware semantic reconstructions of room-sized environments and accurately detects objects and their 6D poses. The method has been verified experimentally on the YCB-Video dataset and a newly collected warehouse object dataset. The results confirm that the proposed system improves over state-of-the-art methods in terms of surface reconstruction and object pose prediction. Our code and video are available at https://sites.google.com/view/object-rpe.
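Combining pose predictions from multiple viewpoints requires fusing orientations; a standard tool for this (not necessarily the one Object-RPE uses, which the abstract does not specify) is the eigenvector method for averaging unit quaternions:

```python
import numpy as np

def average_quaternions(quats):
    """Average a set of unit quaternions (w, x, y, z): the mean is the
    eigenvector of M = sum(q q^T) with the largest eigenvalue
    (the eigenvector method attributed to Markley et al.)."""
    M = np.zeros((4, 4))
    for q in quats:
        q = np.asarray(q, dtype=float)
        q /= np.linalg.norm(q)
        M += np.outer(q, q)
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    avg = eigvecs[:, -1]                  # eigenvector of largest eigenvalue
    return avg if avg[0] >= 0 else -avg   # fix the sign ambiguity

# Two slightly perturbed estimates of the identity rotation:
qs = [[1.0, 0.0, 0.0, 0.0], [0.999, 0.01, 0.0, 0.0]]
q_avg = average_quaternions(qs)
```

Unlike naive component-wise averaging, this handles the q / -q sign ambiguity correctly and always returns a unit quaternion.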
Citations: 10
Loop Closure Detection in Closed Environments
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-08-13. DOI: 10.1109/ECMR.2019.8870938
Nils Rottmann, R. Bruder, A. Schweikard, Elmar Rueckert
Abstract: Low-cost robots, such as vacuum cleaners or lawn mowers, employ simplistic and often random navigation policies. Although a large number of sophisticated mapping and planning approaches exist, they require additional sensors such as LIDAR, cameras, or time-of-flight sensors. In this work, we propose a loop closure detection method based only on odometry data, which can be generated using low-range or binary signal sensors together with simple wall-following techniques. We show how to include the detected loop closure constraints in a pose graph formulation so that standard pose graph optimization techniques can be used for map estimation. We evaluate our map estimate and loop closure approach using both simulation and a real lawn mower in complex and realistic environments. Our results demonstrate that the approach generates accurate map estimates from odometry data alone. We further show that our assumption about the discriminative nature of neighboring poses in the pose graph holds even under large odometry noise. These improved map estimates provide the basis for smart navigation policies in low-cost robots and extend their abilities to goal-directed behavior, such as pick-and-place or complete coverage path planning, in realistic environments.
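Building the odometry chain of such a pose graph amounts to composing SE(2) poses, and a loop-closure constraint penalizes the mismatch between two poses that should coincide. A minimal sketch (function names and the unit-square trajectory are illustrative, not from the paper):

```python
import numpy as np

def compose(p, u):
    """Compose an SE(2) pose p = (x, y, theta) with a relative odometry
    increment u = (dx, dy, dtheta) expressed in the frame of p."""
    x, y, th = p
    dx, dy, dth = u
    return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                     y + dx * np.sin(th) + dy * np.cos(th),
                     th + dth])

def loop_closure_residual(p_i, p_j):
    """Residual of a loop-closure constraint stating that pose j should
    coincide with pose i (e.g. the robot returned to the same corner
    while wall following). Pose-graph optimization drives this to zero."""
    return p_j - p_i

# Drive a unit square with 90-degree turns; with perfect odometry the
# final pose closes the loop back at the origin (heading 2*pi).
pose = np.zeros(3)
for _ in range(4):
    pose = compose(pose, np.array([1.0, 0.0, np.pi / 2]))
res = loop_closure_residual(np.array([0.0, 0.0, 2 * np.pi]), pose)
```

With noisy odometry the residual is nonzero, and minimizing the sum of squared residuals over all odometry and loop-closure edges yields the map estimate.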
Citations: 6
Vision-based Navigation Using Deep Reinforcement Learning
2019 European Conference on Mobile Robots (ECMR). Pub Date: 2019-08-08. DOI: 10.1109/ECMR.2019.8870964
Jonás Kulhánek, Erik Derner, T. D. Bruin, R. Babuška
Abstract: Deep reinforcement learning (RL) has been successfully applied to a variety of game-like environments. However, applying deep RL to visual navigation in realistic environments is a challenging task. We propose a novel learning architecture capable of navigating an agent, e.g. a mobile robot, to a target given by an image. To achieve this, we extend the batched A2C algorithm with auxiliary tasks designed to improve visual navigation performance. We propose three auxiliary tasks: predicting the segmentation of the observation image, predicting the segmentation of the target image, and predicting the depth map. These tasks enable supervised pre-training of a major part of the network and substantially reduce the number of training steps. Training performance is further improved by gradually increasing the environment complexity over time. An efficient neural network structure is proposed that can learn multiple targets in multiple environments. Our method navigates in continuous state spaces and, on the AI2-THOR environment simulator, surpasses the performance of state-of-the-art goal-oriented visual navigation methods from the literature.
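Auxiliary tasks are typically folded into training as weighted extra loss terms alongside the main RL objective. The abstract does not give the loss formulation, so this is only a schematic sketch; the weights and values are hypothetical:

```python
def total_loss(a2c_loss, aux_losses, weights):
    """Combine the main actor-critic loss with weighted auxiliary-task
    losses (here: observation segmentation, target segmentation, depth
    prediction). The weights are hyperparameters, invented for this sketch."""
    return a2c_loss + sum(w * l for w, l in zip(weights, aux_losses))

loss = total_loss(1.5, aux_losses=[0.2, 0.3, 0.1], weights=[1.0, 1.0, 0.5])
# -> 1.5 + 0.2 + 0.3 + 0.05 = 2.05
```

Because the auxiliary targets (segmentation, depth) come from the simulator, the shared encoder can also be pre-trained on them with plain supervised learning before RL begins, which is what cuts the number of RL training steps.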
Citations: 37