{"title":"Navigation for autonomous robots in partially observable facilities","authors":"Henry I. Ibekwe, A. Kamrani","doi":"10.1109/WAC.2014.6936134","DOIUrl":null,"url":null,"abstract":"Designing mobile robots that navigate indoor environments autonomously is known to be a difficult problem. A critical issue is in the formulation of robust motion control algorithms capable of reliably sensing partial or incomplete information from the environment and using this information to choose appropriate actions to achieve its designed goals. As an example, suppose we wish to deploy a mobile robot that autonomously patrols defined locations at a hazardous high-security facility. The robot must maintain accurate knowledge of its location, while using sensory data to recognize objects and obstacles in its immediate vicinity. Its task is to inspect the desired locations within a defined time period and provide real-time data in the event of an incident. The problem is thus to choose appropriate actions that result in accomplishing the patrol in a minimal amount of time in the partially structured environment. To solve this problem we adopt the Partially Observable Markov Decision Processes (POMDP) formalism to find near-optimal and efficient policies that provides a description the robot's motion in environments with incomplete state information. POMDP is a generalization of Markov Decision Processes (MDPs). It models a system as a coupling of an agent/decision maker (robots in our case) and an environment. We also present a methodology called Goal-Specific Representation (GSR) to reduce the size of the state-space for computational efficiency and propose an extension to the methodology.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"154 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 World Automation Congress (WAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WAC.2014.6936134","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Designing mobile robots that navigate indoor environments autonomously is known to be a difficult problem. A critical issue is the formulation of robust motion control algorithms capable of reliably sensing partial or incomplete information from the environment and using this information to choose actions that achieve the robot's designed goals. As an example, suppose we wish to deploy a mobile robot that autonomously patrols defined locations at a hazardous high-security facility. The robot must maintain accurate knowledge of its location while using sensory data to recognize objects and obstacles in its immediate vicinity. Its task is to inspect the desired locations within a defined time period and provide real-time data in the event of an incident. The problem is thus to choose appropriate actions that accomplish the patrol in a minimal amount of time in the partially structured environment. To solve this problem we adopt the Partially Observable Markov Decision Process (POMDP) formalism to find near-optimal and efficient policies that describe the robot's motion in environments with incomplete state information. A POMDP is a generalization of the Markov Decision Process (MDP); it models a system as a coupling of an agent/decision maker (a robot, in our case) and an environment. We also present a methodology called Goal-Specific Representation (GSR) that reduces the size of the state space for computational efficiency, and we propose an extension to this methodology.
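For context, the standard POMDP formulation underlying the approach sketched above is given here in textbook notation; it is not taken verbatim from the paper, and the symbols are generic rather than the authors' own. A POMDP is a tuple $(S, A, T, R, \Omega, O, \gamma)$, where $T(s' \mid s, a)$ is the state-transition model, $R(s, a)$ the reward, $O(o \mid s', a)$ the observation model, and $\gamma$ the discount factor. Because the true state is hidden, the agent maintains a belief $b$ over $S$ and updates it after taking action $a$ and observing $o$:

\[
b'(s') \;=\; \frac{O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s)}{\sum_{s'' \in S} O(o \mid s'', a) \sum_{s \in S} T(s'' \mid s, a)\, b(s)}
\]

A policy $\pi : B \rightarrow A$ then maps beliefs to actions so as to maximize the expected discounted reward; in this setting, the state-space reduction performed by GSR shrinks $S$, and hence the belief space $B$ over which such policies must be computed.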