Navigation for autonomous robots in partially observable facilities

Henry I. Ibekwe, A. Kamrani
{"title":"Navigation for autonomous robots in partially observable facilities","authors":"Henry I. Ibekwe, A. Kamrani","doi":"10.1109/WAC.2014.6936134","DOIUrl":null,"url":null,"abstract":"Designing mobile robots that navigate indoor environments autonomously is known to be a difficult problem. A critical issue is in the formulation of robust motion control algorithms capable of reliably sensing partial or incomplete information from the environment and using this information to choose appropriate actions to achieve its designed goals. As an example, suppose we wish to deploy a mobile robot that autonomously patrols defined locations at a hazardous high-security facility. The robot must maintain accurate knowledge of its location, while using sensory data to recognize objects and obstacles in its immediate vicinity. Its task is to inspect the desired locations within a defined time period and provide real-time data in the event of an incident. The problem is thus to choose appropriate actions that result in accomplishing the patrol in a minimal amount of time in the partially structured environment. To solve this problem we adopt the Partially Observable Markov Decision Processes (POMDP) formalism to find near-optimal and efficient policies that provides a description the robot's motion in environments with incomplete state information. POMDP is a generalization of Markov Decision Processes (MDPs). It models a system as a coupling of an agent/decision maker (robots in our case) and an environment. We also present a methodology called Goal-Specific Representation (GSR) to reduce the size of the state-space for computational efficiency and propose an extension to the methodology.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"154 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 World Automation Congress (WAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WAC.2014.6936134","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Designing mobile robots that navigate indoor environments autonomously is known to be a difficult problem. A critical issue is the formulation of robust motion control algorithms capable of reliably sensing partial or incomplete information from the environment and using this information to choose appropriate actions that achieve the robot's designed goals. As an example, suppose we wish to deploy a mobile robot that autonomously patrols defined locations at a hazardous high-security facility. The robot must maintain accurate knowledge of its location while using sensory data to recognize objects and obstacles in its immediate vicinity. Its task is to inspect the desired locations within a defined time period and provide real-time data in the event of an incident. The problem is thus to choose appropriate actions that accomplish the patrol in a minimal amount of time in the partially structured environment. To solve this problem we adopt the Partially Observable Markov Decision Process (POMDP) formalism to find near-optimal and efficient policies that describe the robot's motion in environments with incomplete state information. The POMDP is a generalization of the Markov Decision Process (MDP). It models a system as a coupling of an agent/decision maker (a robot in our case) and an environment. We also present a methodology called Goal-Specific Representation (GSR) to reduce the size of the state space for computational efficiency, and we propose an extension to the methodology.
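The abstract does not give implementation details of the planner or of GSR, but the core of any POMDP formulation is tracking a belief (a probability distribution over states) from the transition and observation models and conditioning actions on that belief. The following is a minimal illustrative sketch of a Bayesian belief update, not the paper's method; the toy transition/observation matrices and all names are assumptions introduced for illustration.

```python
import numpy as np

# Minimal POMDP belief update, illustrating the agent/environment coupling
# described in the abstract. The toy model below is an assumption made for
# illustration, not a model taken from the paper.

def belief_update(belief, action, observation, T, O):
    """Bayesian belief update: b'(s') ∝ O[o, s', a] * sum_s T[s', s, a] * b(s).

    belief: (S,)   current probability distribution over states
    T:      (S,S,A) transition probabilities, T[s', s, a] = P(s' | s, a)
    O:      (Z,S,A) observation probabilities, O[o, s', a] = P(o | s', a)
    """
    predicted = T[:, :, action] @ belief           # predict next-state distribution
    unnormalized = O[observation, :, action] * predicted
    return unnormalized / unnormalized.sum()       # renormalize to a distribution

# Toy example: 2 grid cells, 2 actions (stay/move), 2 noisy "at-goal" readings.
T = np.zeros((2, 2, 2))
T[:, :, 0] = np.eye(2)                             # action 0: stay put
T[:, :, 1] = np.array([[0.1, 0.1], [0.9, 0.9]])    # action 1: usually end in cell 1
O = np.zeros((2, 2, 2))
O[:, :, 0] = O[:, :, 1] = np.array([[0.8, 0.3],    # sensor reads "not at goal"
                                    [0.2, 0.7]])   # sensor reads "at goal"

b = np.array([0.5, 0.5])                           # uniform prior over location
b = belief_update(b, action=1, observation=1, T=T, O=O)
print(b)  # belief concentrates on cell 1 after a consistent observation
```

In this framing, a policy maps beliefs (rather than states) to actions, which is why reducing the effective state space, as GSR aims to do, matters for computational efficiency.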