Explorer51 – Indoor Mapping, Discovery, and Navigation for an Autonomous Mobile Robot

Gabriel Argush, William Holincheck, Jessica Krynitsky, Brian McGuire, D. Scott, Charlie Tolleson, Madhur Behl
DOI: 10.1109/SIEDS49339.2020.9106581
Published in: 2020 Systems and Information Engineering Design Symposium (SIEDS), April 2020
Citations: 2

Abstract

The nexus of robotics, autonomous systems, and artificial intelligence (AI) has the potential to change the nature of human-guided exploration of indoor and outdoor spaces. Such autonomous mobile robots can be incorporated into a variety of applications, ranging from logistics and maintenance to intelligence gathering, surveillance, and reconnaissance (ISR). One such example is a tele-operator using the robot to generate a map of the inside of a building while discovering and tagging objects of interest. During this process, the tele-operator can also assign an area for the robot to navigate autonomously, or direct it to return to a previously marked area or object of interest. Search-and-rescue and ISR capabilities could be immensely improved by such abilities. The goal of this research is to prototype and demonstrate the above autonomous capabilities in a mobile ground robot called Explorer51. Objectives include: (i) enabling an operator to drive the robot beyond line of sight to explore a space by incorporating a first-person-view (FPV) system that streams data from the robot to the base station; (ii) implementing automatic collision avoidance to prevent the operator from running the robot into obstacles; (iii) creating and saving 2D and 3D maps of the space in real time using a 2D laser scanner, a tracking camera, and depth/RGB cameras; (iv) locating and tagging objects of interest as waypoints within the map; and (v) autonomously navigating within the map to reach a chosen waypoint. To accomplish these goals, we are using the AION Robotics R1 Unmanned Ground Vehicle (UGV) rover as the platform for Explorer51 to demonstrate the autonomous features. The rover runs the Robot Operating System (ROS) on an onboard NVIDIA Jetson TX2 connected to a Pixhawk controller. Sensors include a 2D scanning LiDAR, a depth camera, a tracking camera, and an IMU. Building on existing ROS packages such as Cartographer and the TEB local planner, we plan to implement ROS nodes for accomplishing these tasks.
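As a minimal, hypothetical illustration of objectives (iv) and (v), tagged objects can be stored as waypoints in the 2D map frame and the nearest one chosen as the next navigation goal. This is a plain-Python sketch under our own naming (the `Waypoint` class and `nearest_waypoint` helper are illustrative, not part of the Explorer51 software); in a real ROS system the robot pose would come from the SLAM/odometry stack and the goal would be handed to the planner.

```python
import math
from dataclasses import dataclass

@dataclass
class Waypoint:
    """An object of interest tagged as a goal in the 2D map frame."""
    label: str
    x: float  # meters, map frame
    y: float  # meters, map frame

def nearest_waypoint(waypoints, robot_x, robot_y):
    """Pick the tagged waypoint closest to the robot's current pose
    by straight-line distance in the map frame."""
    return min(waypoints,
               key=lambda w: math.hypot(w.x - robot_x, w.y - robot_y))

# Example: two tagged objects; robot at the map origin.
tags = [Waypoint("door", 2.0, 1.0), Waypoint("extinguisher", -3.0, 4.5)]
goal = nearest_waypoint(tags, 0.0, 0.0)
print(goal.label)  # -> door (2.24 m away vs. 5.41 m)
```

In the actual stack, `goal` would be converted to a `geometry_msgs/PoseStamped` and sent to the navigation planner rather than printed.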
We plan to extend the mapping ability of the rover with Visual Inertial Odometry (VIO) using the cameras. In addition, we will explore additional features such as autonomous target identification, waypoint marking, collision avoidance, and iterative trajectory optimization. The project will culminate in a series of demonstrations showcasing the autonomous navigation and tele-operation abilities of the robot. Success will be evaluated based on ease of use for the tele-operator, collision-avoidance ability, autonomous waypoint-navigation accuracy, and robust map creation at high driving speeds.
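A simplified sketch of the collision-avoidance idea in objective (ii): gate the operator's forward velocity command on the forward sector of a 2D laser scan. The stop distance, field of view, and function name below are our illustrative assumptions, not values from the paper; real scans arrive as `sensor_msgs/LaserScan` messages with exactly this angle/range layout.

```python
import math

def gate_forward_speed(ranges, angle_min, angle_inc,
                       stop_dist=0.4, fov=math.radians(60)):
    """Return a speed scale for the operator's forward command:
    0.0 if any scan return inside the forward cone (+/- fov/2) is
    closer than stop_dist, else 1.0 (command passes through)."""
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_inc
        if abs(angle) <= fov / 2 and 0.0 < r < stop_dist:
            return 0.0  # obstacle dead ahead: block forward motion
    return 1.0

# Simulated 5-beam scan sweeping -90 deg to +90 deg in 45 deg steps.
scan = [2.0, 1.5, 0.3, 1.8, 2.2]
scale = gate_forward_speed(scan, math.radians(-90), math.radians(45))
print(scale)  # -> 0.0, because the 0.3 m return at 0 deg triggers a stop
```

A production version would scale rather than zero the command and handle invalid returns (NaN/inf) explicitly, but the gating logic is the same.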