Representing the Plan Monitoring Needs and Resources of Robotic Systems

M. Schoppers
{"title":"Representing the Plan Monitoring Needs and Resources of Robotic Systems","authors":"M. Schoppers","doi":"10.1109/AIHAS.1992.636884","DOIUrl":null,"url":null,"abstract":"Intelligent robotic systems must obtain information about the state of their environment; that is “sensing to support acting”. Conversely, the flexible use of a sensor may require set-up activity, or “acting t o support sensing”. This paper shows how modal logics of knowledge and belzef extend the expressiveness of declarative domain models, and support automated reasoning about the needs, capabilities, and interactions of sensor and eflector activities. This work is distinguished from previous work an symbolic AI planning b y I ) representing sensing actions that might unpredictably find a given environmental condition to be true or false; 2) using such sensing actions without distinguishing, at planning time, the outcomes possible at execution time (thus containing plan size); and 3) providing for the planning of activities based solely on expectations (e.g. when sensor information is unavailable). The representatzon has been used to synthesize a control and coordznation plan for ihe distributed subsystems of t h e space-faring NASA EVA Retriever robot. 1 Planning for plan monitoring The majority of previous AI approaches to plan execution have separated plan monitoring from plan coiistruction: the planner reasons as if actions are guaranteed to have their desired effects; decisions about how to monitor what, conditions are made after the plan has been completed. This means tha t the monitoring must be entirely passive, for as soon as an attempt to observe soinetahirig changes the state of the world or robot, there is a high probability t,hat the p1a.n itself has been invalidated. Given tha t sensing is necessary a.nd that the flexible use of a sensor will require effector activity (such as moving the sensor plat,fo‘orm relativr t,o the robot body, or moving the whole robot body), the sensory activities and their support,iiig effector activities ha.d better become p x t of t8he p h i , a.nd both had better be represented in such a. way t1ia.t t,lieir effects can be reasoned ahout during plaa coiist,ruction. The notation 1 present, in t,liis pa.per solves that problem by retaining the usual operator notation for actions, wliile using a iiiorlal logic of l;no~ledge and belief to eiilarge t,he doma.in description vocabulary wherever i t is used, whether in preconditions, postconditions, domain axioms or inference rules. Alt,hougli there have been severa.1 planners that used modal logic to reason about the knowledge requirements and knowledge effects of actions, none of those planners used knowledge-generating actions to check whether other actions had worked as desired. When Moore [9, pp.12lffI had a safe-opening action produce the knowledge that the safe was open, he did it not by performing a sensing action to find out at execution time whether the safe was open or closed, but by showing that if the safe were assumed open (or closed) then, after simulating the safe-opening action, the planner (not the executor) would “know” tha t the safe was open (or closed). Of course, the planner could equally well assume that the safe was still closed, and a,ctually had no reason to make either assumption. 
Similarly, Konolige [8] and Drummond [4, sec 5.121 both supposedly built plans for determining whether an oven’s pilot light is on, by doing the experiment of trying to light the burner; but their planners stop with two equally plausible outcomes (the burner lights, or doesn’t), withou t following through by including actions tha t could decide which outcome wa.s real. The same omission occurs in the work by Haas [6] and Morgenstern [lo]. No previous work on representing the knowledge preconditions and postconditions of actjions has seen the need to include actions whose execution would decide whether a condit,ion was true or false. Conversely, no previous work that included execution-time sensing actions (e.g. robots a.nd work on situated agents) has represented such actions in a. way tha.t supports semantically sound aut,oma.ted reasoning. ( T w o near misses are the plan represeiita.tions of Doyle et a.1 [3] and Gervasio [5].) This pa.per closes the gap, and thereby provides the first representation that, can support sound automated reasoning a.bouh actions that can sense the outcomes of ot,her a.ct,ions. Viewed a.s logical formalisms, t3he representations proposed by this pa.per are sti,a.iglitforwarcl (but apparently unobvious) adaptations of existling logics. However, this paper is riot only about the representation, it is also about my view that t,he a.ct,ion descriptions (operat,or sc1iema.s) supplied t,o a. planning syst,em have ,no more proinisory force t1ia.n do t,he lieuristics supplied to other knowledge-lxsed syst,eiiis. This view comes about as follows. In even a. mildly unpredictable world there is no honest way to “prove that a. p1a.n will work”: actions may not have t,lieir iiitended effects, a.nd a.chieved effects may not persist. long enough t,o be useful. Hence the best t1ia.t ca.n rea.lly be done is to prove t,liat a p1a.n can work if the world is coopera.tive enough. A a.ction 0-8186-2675-5/92 $3.00 Q 1992 IEEE 182 description or operator is merely a compact way of saying that zfone of the action’s effects is a goal and the action’s preconditions hold or can be achieved then the action “might turn out t o be useful.” Therefore, I break with the tradition ~ finally made explicit by Drummond [4, p.291 tha t the planner is entitled to assume that actions will work. Instead, to find out whether the world is behaving a,s desired, the plan executor must perform numerous sensing actions; for flexibility, those actions should be built into the plan; the information produced by those a.ctions must be allowed to show tha t the planner’s projections were wrong; and ideally, the planner should build the plan to be somewhat robust against the likelihood of incorrect projections. Thus we come to the reason why i t has been difficult for the majority of planning systems to include real sensing actions. The outcome of any sensing action worth having is unpredictable: maybe the safe will open, maybe it won’t. If one takes seriously the idea that a plan executor must sense the truth of at least s o m e conditions needed by the plan, and tha t each sensory test can come out either true or false, there is no escape from plans containing large numbers of conditional branches and loops, both of which are difficult for modern planners t o construct. 
This problem disappears completely in the context of the classification-based “reaction plans” now controlling some situated agents, because such plan structures are inherently highly conditional and iterative (like production systems) without actually containing any explicit conditionals or loops. Further, such highly conditional plans may be built automatically, as shown by the author’s Ph.D. thesis [12]. Therefore, the time is right t o represent real sensing actions (and other actions having nondeterministic outcomes), for presentation to an automatic planner, and for inclusion into automatically constructed reaction plans.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIHAS.1992.636884","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Intelligent robotic systems must obtain information about the state of their environment; that is "sensing to support acting". Conversely, the flexible use of a sensor may require set-up activity, or "acting to support sensing". This paper shows how modal logics of knowledge and belief extend the expressiveness of declarative domain models, and support automated reasoning about the needs, capabilities, and interactions of sensor and effector activities. This work is distinguished from previous work on symbolic AI planning by 1) representing sensing actions that might unpredictably find a given environmental condition to be true or false; 2) using such sensing actions without distinguishing, at planning time, the outcomes possible at execution time (thus containing plan size); and 3) providing for the planning of activities based solely on expectations (e.g. when sensor information is unavailable). The representation has been used to synthesize a control and coordination plan for the distributed subsystems of the space-faring NASA EVA Retriever robot.

1 Planning for plan monitoring

The majority of previous AI approaches to plan execution have separated plan monitoring from plan construction: the planner reasons as if actions are guaranteed to have their desired effects, and decisions about how to monitor which conditions are made after the plan has been completed. This means that the monitoring must be entirely passive, for as soon as an attempt to observe something changes the state of the world or robot, there is a high probability that the plan itself has been invalidated. Given that sensing is necessary and that the flexible use of a sensor will require effector activity (such as moving the sensor platform relative to the robot body, or moving the whole robot body), the sensory activities and their supporting effector activities had better become part of the plan, and both had better be represented in such a way that their effects can be reasoned about during plan construction. The notation I present in this paper solves that problem by retaining the usual operator notation for actions, while using a modal logic of knowledge and belief to enlarge the domain description vocabulary wherever it is used, whether in preconditions, postconditions, domain axioms or inference rules.

Although there have been several planners that used modal logic to reason about the knowledge requirements and knowledge effects of actions, none of those planners used knowledge-generating actions to check whether other actions had worked as desired. When Moore [9, pp. 121ff] had a safe-opening action produce the knowledge that the safe was open, he did it not by performing a sensing action to find out at execution time whether the safe was open or closed, but by showing that if the safe were assumed open (or closed) then, after simulating the safe-opening action, the planner (not the executor) would "know" that the safe was open (or closed). Of course, the planner could equally well assume that the safe was still closed, and actually had no reason to make either assumption. Similarly, Konolige [8] and Drummond [4, sec 5.12] both supposedly built plans for determining whether an oven's pilot light is on, by doing the experiment of trying to light the burner; but their planners stop with two equally plausible outcomes (the burner lights, or doesn't), without following through by including actions that could decide which outcome was real.
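To make the flavor of such action descriptions concrete, the following is a minimal sketch in a STRIPS-like notation augmented with an epistemic operator K ("the executor knows that ..."). It is not the paper's actual schema language, and the action and predicate names (observe-pilot, aimed-at, lit) are hypothetical. The sensing action's precondition is an effector-achievable set-up condition (acting to support sensing), and its postcondition is a disjunction of knowledge: execution will settle whether the pilot light is lit, without the planner having to branch on the outcome.

```latex
% Hypothetical sensing-action schema (illustrative only);
% K is an epistemic "knows" modality on the plan executor.
\begin{align*}
&\textsc{observe-pilot}(c,\, p): \\
&\qquad \text{precondition: }\; \textit{aimed-at}(c,\, p) \\
&\qquad \text{postcondition: }\; K\,\textit{lit}(p) \;\lor\; K\,\lnot \textit{lit}(p)
\end{align*}
```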
The same omission occurs in the work by Haas [6] and Morgenstern [10]. No previous work on representing the knowledge preconditions and postconditions of actions has seen the need to include actions whose execution would decide whether a condition was true or false. Conversely, no previous work that included execution-time sensing actions (e.g. robots and work on situated agents) has represented such actions in a way that supports semantically sound automated reasoning. (Two near misses are the plan representations of Doyle et al [3] and Gervasio [5].) This paper closes the gap, and thereby provides the first representation that can support sound automated reasoning about actions that can sense the outcomes of other actions.

Viewed as logical formalisms, the representations proposed by this paper are straightforward (but apparently unobvious) adaptations of existing logics. However, this paper is not only about the representation; it is also about my view that the action descriptions (operator schemas) supplied to a planning system have no more promissory force than do the heuristics supplied to other knowledge-based systems. This view comes about as follows. In even a mildly unpredictable world there is no honest way to "prove that a plan will work": actions may not have their intended effects, and achieved effects may not persist long enough to be useful. Hence the best that can really be done is to prove that a plan can work if the world is cooperative enough. An action description or operator is merely a compact way of saying that if one of the action's effects is a goal and the action's preconditions hold or can be achieved, then the action "might turn out to be useful." Therefore, I break with the tradition, finally made explicit by Drummond [4, p. 29], that the planner is entitled to assume that actions will work. Instead, to find out whether the world is behaving as desired, the plan executor must perform numerous sensing actions; for flexibility, those actions should be built into the plan; the information produced by those actions must be allowed to show that the planner's projections were wrong; and ideally, the planner should build the plan to be somewhat robust against the likelihood of incorrect projections.

Thus we come to the reason why it has been difficult for the majority of planning systems to include real sensing actions. The outcome of any sensing action worth having is unpredictable: maybe the safe will open, maybe it won't. If one takes seriously the idea that a plan executor must sense the truth of at least some conditions needed by the plan, and that each sensory test can come out either true or false, there is no escape from plans containing large numbers of conditional branches and loops, both of which are difficult for modern planners to construct. This problem disappears completely in the context of the classification-based "reaction plans" now controlling some situated agents, because such plan structures are inherently highly conditional and iterative (like production systems) without actually containing any explicit conditionals or loops. Further, such highly conditional plans may be built automatically, as shown by the author's Ph.D. thesis [12].
Therefore, the time is right to represent real sensing actions (and other actions having nondeterministic outcomes), for presentation to an automatic planner, and for inclusion into automatically constructed reaction plans.
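As an illustration of why such plan structures sidestep explicit branching, here is a minimal sketch assuming a hypothetical oven/pilot-light domain and placeholder sense/act primitives. It is not the plan representation of [12]; it only indicates how a flat rule set can behave conditionally and iteratively while the plan itself contains no branches or loops.

```python
# Hypothetical sketch of a classification-based "reaction plan": the plan is
# just an ordered list of (situation classifier, action) rules; all branching
# and iteration live in the fixed execution loop, not in the plan structure.
# The predicates and actions are placeholders for the pilot-light example.

def pilot_known(w):  return w["pilot"] is not None
def pilot_lit(w):    return w["pilot"] is True
def burner_lit(w):   return w["burner"] is True

REACTION_PLAN = [
    (lambda w: not pilot_known(w),                   "observe-pilot"),  # sensing action
    (lambda w: pilot_known(w) and not pilot_lit(w),  "light-pilot"),
    (lambda w: pilot_lit(w) and not burner_lit(w),   "light-burner"),
]

def execute(plan, sense, act, goal, max_cycles=100):
    """Each cycle: sense, classify the situation, act on the first applicable
    rule.  A sensing action's unpredictable outcome simply feeds back into the
    next cycle's classification; no branch had to be planned in advance."""
    for _ in range(max_cycles):
        world = sense()
        if goal(world):
            return True
        for applicable, action in plan:
            if applicable(world):
                act(action)
                break
    return False

# Toy simulated environment to exercise the loop.
state = {"pilot": None, "burner": None}

def sense():
    return dict(state)

def act(action):
    if action == "observe-pilot":
        state["pilot"] = False      # the sensor settles the question at run time
    elif action == "light-pilot":
        state["pilot"] = True
    elif action == "light-burner":
        state["burner"] = True

assert execute(REACTION_PLAN, sense, act, goal=burner_lit)
```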