Visual-based semantic simultaneous localization and mapping for robotic applications: A review

O. Atoui, H. Husni, R. Mat
{"title":"机器人应用中基于视觉的语义同步定位和映射:综述","authors":"O. Atoui, H. Husni, R. Mat","doi":"10.1063/1.5121082","DOIUrl":null,"url":null,"abstract":"One of most important techniques that plays a key role in elevating a mobile robot’s independence is its ability to construct a map from an unknown surrounding in an unknown initial position, and with the use of onboard sensors, localize itself in this map. This technique is called simultaneous localization and mapping or SLAM. Over the last 30 years, numerous new and interesting inquiries have been raised, with the improvement of new techniques, new computational instruments, and new sensors. However, the big challenges facing mobile robots in the next decade, as in the autonomous urban vehicles, require extended representations that exceed traditional mapping found in classical SLAM systems, i.e. the so-called semantic representation. The main goal of a SLAM system with semantic concepts is to expand mobile robots’ services and strengthen human-robot interaction. Related works reviewed show that the visual-based SLAM or VSLAM has received a great deal of interest in the last decade. This is due to the visual sensors’ capability to provide information of the scene whereas they are low-priced, smaller and lighter than other sensors. Unlike the metric representation, semantic mapping is still immature, and it comes up short on durable formulation. This paper aims to systematically review recent researches related to the semantic VSLAM, including its types, approaches, and challenges. The paper also deals with the classical SLAM system by giving an overview of necessary information before getting into detail. This review also provides new researches in the SLAM domain facilities to further understand the anatomy of modern VSLAM generation, discover recent algorithms, and apprehend some open challenges.","PeriodicalId":325925,"journal":{"name":"THE 4TH INNOVATION AND ANALYTICS CONFERENCE & EXHIBITION (IACE 2019)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Visual-based semantic simultaneous localization and mapping for Robotic applications: A review\",\"authors\":\"O. Atoui, H. Husni, R. Mat\",\"doi\":\"10.1063/1.5121082\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"One of most important techniques that plays a key role in elevating a mobile robot’s independence is its ability to construct a map from an unknown surrounding in an unknown initial position, and with the use of onboard sensors, localize itself in this map. This technique is called simultaneous localization and mapping or SLAM. Over the last 30 years, numerous new and interesting inquiries have been raised, with the improvement of new techniques, new computational instruments, and new sensors. However, the big challenges facing mobile robots in the next decade, as in the autonomous urban vehicles, require extended representations that exceed traditional mapping found in classical SLAM systems, i.e. the so-called semantic representation. The main goal of a SLAM system with semantic concepts is to expand mobile robots’ services and strengthen human-robot interaction. Related works reviewed show that the visual-based SLAM or VSLAM has received a great deal of interest in the last decade. 
This is due to the visual sensors’ capability to provide information of the scene whereas they are low-priced, smaller and lighter than other sensors. Unlike the metric representation, semantic mapping is still immature, and it comes up short on durable formulation. This paper aims to systematically review recent researches related to the semantic VSLAM, including its types, approaches, and challenges. The paper also deals with the classical SLAM system by giving an overview of necessary information before getting into detail. This review also provides new researches in the SLAM domain facilities to further understand the anatomy of modern VSLAM generation, discover recent algorithms, and apprehend some open challenges.\",\"PeriodicalId\":325925,\"journal\":{\"name\":\"THE 4TH INNOVATION AND ANALYTICS CONFERENCE & EXHIBITION (IACE 2019)\",\"volume\":\"7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"THE 4TH INNOVATION AND ANALYTICS CONFERENCE & EXHIBITION (IACE 2019)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1063/1.5121082\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"THE 4TH INNOVATION AND ANALYTICS CONFERENCE & EXHIBITION (IACE 2019)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1063/1.5121082","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

One of the most important techniques for increasing a mobile robot's autonomy is its ability to construct a map of an unknown environment from an unknown initial position and, using onboard sensors, to localize itself within that map. This technique is called simultaneous localization and mapping, or SLAM. Over the last 30 years, numerous new and interesting questions have been raised as new techniques, new computational tools, and new sensors have emerged. However, the big challenges facing mobile robots in the next decade, for example in autonomous urban vehicles, require extended representations that go beyond the traditional maps found in classical SLAM systems, namely the so-called semantic representation. The main goal of a SLAM system with semantic concepts is to expand the services mobile robots can offer and to strengthen human-robot interaction. The related work reviewed shows that visual-based SLAM, or VSLAM, has received a great deal of interest over the last decade, because visual sensors provide rich information about the scene while being cheaper, smaller, and lighter than other sensors. Unlike metric representation, semantic mapping is still immature and lacks a durable formulation. This paper aims to systematically review recent research on semantic VSLAM, including its types, approaches, and challenges. It also covers the classical SLAM system, giving an overview of the necessary background before going into detail. The review thereby helps new researchers in the SLAM domain to understand the anatomy of the modern generation of VSLAM, discover recent algorithms, and grasp some open challenges.
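
To make the core idea concrete, namely jointly estimating the robot's pose and a map of landmarks from noisy onboard measurements, the following is a minimal, illustrative EKF-SLAM sketch in Python. It is not a method from the reviewed literature: it assumes a planar robot, a single range-bearing landmark, and numpy, and all function names, the state layout, and the noise values are chosen purely for illustration.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_slam_step(x, P, u, z, dt, Q, R):
    """One predict/update cycle of a toy 2D EKF-SLAM.

    State x = [rx, ry, rtheta, lx, ly]: robot pose plus one landmark.
    Control u = [v, w]: linear and angular velocity from odometry.
    Measurement z = [range, bearing] to the landmark.
    """
    rx, ry, rth, lx, ly = x
    v, w = u

    # --- Predict: propagate the robot pose with a unicycle motion model.
    x_pred = x.copy()
    x_pred[0] = rx + v * dt * np.cos(rth)
    x_pred[1] = ry + v * dt * np.sin(rth)
    x_pred[2] = wrap(rth + w * dt)

    F = np.eye(5)                        # Jacobian of the motion model
    F[0, 2] = -v * dt * np.sin(rth)
    F[1, 2] = v * dt * np.cos(rth)
    P_pred = F @ P @ F.T + Q             # Q: process noise (landmark rows are zero)

    # --- Update: fuse the range-bearing observation of the landmark.
    dx, dy = x_pred[3] - x_pred[0], x_pred[4] - x_pred[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, wrap(np.arctan2(dy, dx) - x_pred[2])])

    H = np.array([                       # Jacobian of the measurement model
        [-dx / r, -dy / r,  0.0,  dx / r,  dy / r],
        [ dy / q, -dx / q, -1.0, -dy / q,  dx / q],
    ])
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain

    innov = z - z_hat
    innov[1] = wrap(innov[1])
    x_new = x_pred + K @ innov
    x_new[2] = wrap(x_new[2])
    P_new = (np.eye(5) - K @ H) @ P_pred
    return x_new, P_new

# Example: robot at the origin, landmark believed to be near (2, 1).
x0 = np.array([0.0, 0.0, 0.0, 2.0, 1.0])
P0 = np.diag([0.01, 0.01, 0.01, 1.0, 1.0])   # confident pose, uncertain landmark
Q = np.diag([0.05, 0.05, 0.02, 0.0, 0.0])
R = np.diag([0.1, 0.05])
z = np.array([2.3, 0.45])                    # simulated range-bearing reading
x1, P1 = ekf_slam_step(x0, P0, np.array([1.0, 0.1]), z, dt=0.1, Q=Q, R=R)
print(x1)
```

A visual SLAM system follows the same predict/update logic, but its measurements come from image features tracked across camera frames and its map holds many landmarks; modern VSLAM pipelines also commonly replace the filter with keyframe-based graph optimization.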