{"title":"Representing the Plan Monitoring Needs and Resources of Robotic Systems","authors":"M. Schoppers","doi":"10.1109/AIHAS.1992.636884","DOIUrl":null,"url":null,"abstract":"Intelligent robotic systems must obtain information about the state of their environment; that is “sensing to support acting”. Conversely, the flexible use of a sensor may require set-up activity, or “acting t o support sensing”. This paper shows how modal logics of knowledge and belzef extend the expressiveness of declarative domain models, and support automated reasoning about the needs, capabilities, and interactions of sensor and eflector activities. This work is distinguished from previous work an symbolic AI planning b y I ) representing sensing actions that might unpredictably find a given environmental condition to be true or false; 2) using such sensing actions without distinguishing, at planning time, the outcomes possible at execution time (thus containing plan size); and 3) providing for the planning of activities based solely on expectations (e.g. when sensor information is unavailable). The representatzon has been used to synthesize a control and coordznation plan for ihe distributed subsystems of t h e space-faring NASA EVA Retriever robot. 1 Planning for plan monitoring The majority of previous AI approaches to plan execution have separated plan monitoring from plan coiistruction: the planner reasons as if actions are guaranteed to have their desired effects; decisions about how to monitor what, conditions are made after the plan has been completed. This means tha t the monitoring must be entirely passive, for as soon as an attempt to observe soinetahirig changes the state of the world or robot, there is a high probability t,hat the p1a.n itself has been invalidated. Given tha t sensing is necessary a.nd that the flexible use of a sensor will require effector activity (such as moving the sensor plat,fo‘orm relativr t,o the robot body, or moving the whole robot body), the sensory activities and their support,iiig effector activities ha.d better become p x t of t8he p h i , a.nd both had better be represented in such a. way t1ia.t t,lieir effects can be reasoned ahout during plaa coiist,ruction. The notation 1 present, in t,liis pa.per solves that problem by retaining the usual operator notation for actions, wliile using a iiiorlal logic of l;no~ledge and belief to eiilarge t,he doma.in description vocabulary wherever i t is used, whether in preconditions, postconditions, domain axioms or inference rules. Alt,hougli there have been severa.1 planners that used modal logic to reason about the knowledge requirements and knowledge effects of actions, none of those planners used knowledge-generating actions to check whether other actions had worked as desired. When Moore [9, pp.12lffI had a safe-opening action produce the knowledge that the safe was open, he did it not by performing a sensing action to find out at execution time whether the safe was open or closed, but by showing that if the safe were assumed open (or closed) then, after simulating the safe-opening action, the planner (not the executor) would “know” tha t the safe was open (or closed). Of course, the planner could equally well assume that the safe was still closed, and a,ctually had no reason to make either assumption. 
Similarly, Konolige [8] and Drummond [4, sec 5.121 both supposedly built plans for determining whether an oven’s pilot light is on, by doing the experiment of trying to light the burner; but their planners stop with two equally plausible outcomes (the burner lights, or doesn’t), withou t following through by including actions tha t could decide which outcome wa.s real. The same omission occurs in the work by Haas [6] and Morgenstern [lo]. No previous work on representing the knowledge preconditions and postconditions of actjions has seen the need to include actions whose execution would decide whether a condit,ion was true or false. Conversely, no previous work that included execution-time sensing actions (e.g. robots a.nd work on situated agents) has represented such actions in a. way tha.t supports semantically sound aut,oma.ted reasoning. ( T w o near misses are the plan represeiita.tions of Doyle et a.1 [3] and Gervasio [5].) This pa.per closes the gap, and thereby provides the first representation that, can support sound automated reasoning a.bouh actions that can sense the outcomes of ot,her a.ct,ions. Viewed a.s logical formalisms, t3he representations proposed by this pa.per are sti,a.iglitforwarcl (but apparently unobvious) adaptations of existling logics. However, this paper is riot only about the representation, it is also about my view that t,he a.ct,ion descriptions (operat,or sc1iema.s) supplied t,o a. planning syst,em have ,no more proinisory force t1ia.n do t,he lieuristics supplied to other knowledge-lxsed syst,eiiis. This view comes about as follows. In even a. mildly unpredictable world there is no honest way to “prove that a. p1a.n will work”: actions may not have t,lieir iiitended effects, a.nd a.chieved effects may not persist. long enough t,o be useful. Hence the best t1ia.t ca.n rea.lly be done is to prove t,liat a p1a.n can work if the world is coopera.tive enough. A a.ction 0-8186-2675-5/92 $3.00 Q 1992 IEEE 182 description or operator is merely a compact way of saying that zfone of the action’s effects is a goal and the action’s preconditions hold or can be achieved then the action “might turn out t o be useful.” Therefore, I break with the tradition ~ finally made explicit by Drummond [4, p.291 tha t the planner is entitled to assume that actions will work. Instead, to find out whether the world is behaving a,s desired, the plan executor must perform numerous sensing actions; for flexibility, those actions should be built into the plan; the information produced by those a.ctions must be allowed to show tha t the planner’s projections were wrong; and ideally, the planner should build the plan to be somewhat robust against the likelihood of incorrect projections. Thus we come to the reason why i t has been difficult for the majority of planning systems to include real sensing actions. The outcome of any sensing action worth having is unpredictable: maybe the safe will open, maybe it won’t. If one takes seriously the idea that a plan executor must sense the truth of at least s o m e conditions needed by the plan, and tha t each sensory test can come out either true or false, there is no escape from plans containing large numbers of conditional branches and loops, both of which are difficult for modern planners t o construct. 
This problem disappears completely in the context of the classification-based “reaction plans” now controlling some situated agents, because such plan structures are inherently highly conditional and iterative (like production systems) without actually containing any explicit conditionals or loops. Further, such highly conditional plans may be built automatically, as shown by the author’s Ph.D. thesis [12]. Therefore, the time is right t o represent real sensing actions (and other actions having nondeterministic outcomes), for presentation to an automatic planner, and for inclusion into automatically constructed reaction plans.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIHAS.1992.636884","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Intelligent robotic systems must obtain information about the state of their environment; that is "sensing to support acting". Conversely, the flexible use of a sensor may require set-up activity, or "acting to support sensing". This paper shows how modal logics of knowledge and belief extend the expressiveness of declarative domain models, and support automated reasoning about the needs, capabilities, and interactions of sensor and effector activities. This work is distinguished from previous work on symbolic AI planning by 1) representing sensing actions that might unpredictably find a given environmental condition to be true or false; 2) using such sensing actions without distinguishing, at planning time, the outcomes possible at execution time (thus containing plan size); and 3) providing for the planning of activities based solely on expectations (e.g. when sensor information is unavailable). The representation has been used to synthesize a control and coordination plan for the distributed subsystems of the space-faring NASA EVA Retriever robot.

1 Planning for plan monitoring

The majority of previous AI approaches to plan execution have separated plan monitoring from plan construction: the planner reasons as if actions are guaranteed to have their desired effects; decisions about how to monitor which conditions are made after the plan has been completed. This means that the monitoring must be entirely passive, for as soon as an attempt to observe something changes the state of the world or robot, there is a high probability that the plan itself has been invalidated. Given that sensing is necessary and that the flexible use of a sensor will require effector activity (such as moving the sensor platform relative to the robot body, or moving the whole robot body), the sensory activities and their supporting effector activities had better become part of the plan, and both had better be represented in such a way that their effects can be reasoned about during plan construction. The notation I present in this paper solves that problem by retaining the usual operator notation for actions, while using a modal logic of knowledge and belief to enlarge the domain description vocabulary wherever it is used, whether in preconditions, postconditions, domain axioms or inference rules.

Although there have been several planners that used modal logic to reason about the knowledge requirements and knowledge effects of actions, none of those planners used knowledge-generating actions to check whether other actions had worked as desired. When Moore [9, pp. 121ff] had a safe-opening action produce the knowledge that the safe was open, he did it not by performing a sensing action to find out at execution time whether the safe was open or closed, but by showing that if the safe were assumed open (or closed) then, after simulating the safe-opening action, the planner (not the executor) would "know" that the safe was open (or closed). Of course, the planner could equally well assume that the safe was still closed, and actually had no reason to make either assumption. Similarly, Konolige [8] and Drummond [4, sec 5.12] both supposedly built plans for determining whether an oven's pilot light is on, by doing the experiment of trying to light the burner; but their planners stop with two equally plausible outcomes (the burner lights, or doesn't), without following through by including actions that could decide which outcome was real.
The same omission occurs in the work by Haas [6] and Morgenstern [10]. No previous work on representing the knowledge preconditions and postconditions of actions has seen the need to include actions whose execution would decide whether a condition was true or false. Conversely, no previous work that included execution-time sensing actions (e.g. robots and work on situated agents) has represented such actions in a way that supports semantically sound automated reasoning. (Two near misses are the plan representations of Doyle et al. [3] and Gervasio [5].) This paper closes the gap, and thereby provides the first representation that can support sound automated reasoning about actions that can sense the outcomes of other actions.

Viewed as logical formalisms, the representations proposed by this paper are straightforward (but apparently unobvious) adaptations of existing logics. However, this paper is not only about the representation; it is also about my view that the action descriptions (operator schemas) supplied to a planning system have no more promissory force than do the heuristics supplied to other knowledge-based systems. This view comes about as follows. In even a mildly unpredictable world there is no honest way to "prove that a plan will work": actions may not have their intended effects, and achieved effects may not persist long enough to be useful. Hence the best that can really be done is to prove that a plan can work if the world is cooperative enough. An action description or operator is merely a compact way of saying that if one of the action's effects is a goal and the action's preconditions hold or can be achieved, then the action "might turn out to be useful." Therefore, I break with the tradition, finally made explicit by Drummond [4, p.29], that the planner is entitled to assume that actions will work. Instead, to find out whether the world is behaving as desired, the plan executor must perform numerous sensing actions; for flexibility, those actions should be built into the plan; the information produced by those actions must be allowed to show that the planner's projections were wrong; and ideally, the planner should build the plan to be somewhat robust against the likelihood of incorrect projections.

Thus we come to the reason why it has been difficult for the majority of planning systems to include real sensing actions. The outcome of any sensing action worth having is unpredictable: maybe the safe will open, maybe it won't. If one takes seriously the idea that a plan executor must sense the truth of at least some conditions needed by the plan, and that each sensory test can come out either true or false, there is no escape from plans containing large numbers of conditional branches and loops, both of which are difficult for modern planners to construct. This problem disappears completely in the context of the classification-based "reaction plans" now controlling some situated agents, because such plan structures are inherently highly conditional and iterative (like production systems) without actually containing any explicit conditionals or loops. Further, such highly conditional plans may be built automatically, as shown by the author's Ph.D. thesis [12].
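To make the contrast with branching plans concrete, the following is a minimal sketch of a classification-based reaction loop in the spirit described above; the rule set, predicate names, and action names are hypothetical illustrations, not the plan structure actually produced in [12].

```python
# Illustrative sketch (not the paper's actual plan representation): a
# classification-based "reaction plan" is an ordered set of
# condition -> action rules re-evaluated on every control cycle.
# Sensing is folded into the classification step, so every execution-time
# outcome is handled without any explicit if/else branches or loops in the plan.

class WorldModel:
    """Hypothetical belief store; names are assumptions for illustration."""
    def __init__(self):
        self.beliefs = {}                    # condition name -> True/False/None

    def holds(self, condition):
        return self.beliefs.get(condition)   # None means "unknown"

# Each rule pairs a classification test with the action to emit when it fires.
REACTION_RULES = [
    (lambda w: w.holds("safe_open") is None,   "look_at_safe"),      # sense first
    (lambda w: w.holds("safe_open") is False,  "dial_combination"),
    (lambda w: w.holds("safe_open") is True,   "retrieve_contents"),
]

def react(world):
    """One control cycle: classify the current situation, return one action."""
    for test, action in REACTION_RULES:
        if test(world):
            return action
    return "idle"   # nothing applicable this cycle
```

Because the same rule set is consulted on every cycle, repeated sensing and retrying come for free, which is the sense in which such plans are conditional and iterative without containing explicit branches or loops.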
Therefore, the time is right to represent real sensing actions (and other actions having nondeterministic outcomes), for presentation to an automatic planner, and for inclusion into automatically constructed reaction plans.
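As a closing illustration of the kind of action description argued for earlier, here is one plausible reading of an operator schema whose postcondition uses a knowledge modality; the syntax, helper names, and domain predicates are hypothetical, not the paper's actual notation.

```python
# Illustrative sketch only: an operator schema augmented with knowledge modalities.
# The key point is that a sensing action's effect is "the executor will know
# WHETHER the condition holds", so the planner need not split the plan on the
# two possible execution-time outcomes (which is what contains plan size).

from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    preconditions: list = field(default_factory=list)
    effects: list = field(default_factory=list)

def K(condition):        return ("K", condition)         # executor knows condition is true
def KWhether(condition): return ("KWhether", condition)  # executor knows whether it is true or false

# Hypothetical domain: checking whether the safe is open.
look_at_safe = Operator(
    name="look_at_safe",
    preconditions=[K("camera_pointed_at(safe)")],   # acting to support sensing
    effects=[KWhether("open(safe)")],                # sensing to support acting
)

# The set-up action that achieves the sensing action's knowledge precondition.
point_camera = Operator(
    name="point_camera_at_safe",
    preconditions=[],
    effects=[K("camera_pointed_at(safe)")],
)
```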