{"title":"Issues, Architectures and Techniques in Real-Time Vision","authors":"J. Cooper, L. Kitchen","doi":"10.1109/AIHAS.1992.636889","DOIUrl":"https://doi.org/10.1109/AIHAS.1992.636889","url":null,"abstract":"We discuss some issues in real-time vision for robots, presenting an integrated collection of techniques, namely model-based prediction, speculative computing, foveation, and multiresolution processing. We describe an asynchronous, parallel distributed architecture of autonomous agents to support these techniques, in which the agents can be regarded as expert translators between languages at different levels of representation. These techniques have been implemented in several partial, prototype systems, which demonstrate quite impressive real-time performance using only limited computing resources.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130543058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Abstracting and Explaining Simulation Model Behaviour","authors":"L. Travers, S. Sevinc","doi":"10.1109/AIHAS.1992.636880","DOIUrl":"https://doi.org/10.1109/AIHAS.1992.636880","url":null,"abstract":"In this article we propose a number of approaches that could be used to extract information from simulations of a system, summarising its behaviour. This information could be extracted to provide a general knowledge base for the system or could be provided in response to specific queries by a user. Three techniques are discussed in detail.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126104733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Integrated Method for Planning Smooth Collision-Free Trajectories for Robot Arms","authors":"Jianwei Zhang","doi":"10.1109/AIHAS.1992.636861","DOIUrl":"https://doi.org/10.1109/AIHAS.1992.636861","url":null,"abstract":"A method for planning motions of robot arms is presented which considers not only the collision-avoidance but also robot's dynamic aspects. Based on the concept of configuration space, robot motions are planned at the topological and geometric level consecutively. At the topological level, the boundaries of the Configurationspace obstacles (C-obstacles) are computed and approximately represented while the complement of the Cobstacles, i.e. the free-space is divided into Empty Blocks {EBs). Linking all the connected EBs, we get a net called characteristic net. Given the initial and goal configuration of the robot, routes consisting of EBs (EB-routes) are searched in the characteristic net. At the geometric level, smooth trajectories are generated along EB-routes using Bsplines while avoiding collisions with local C-obstacles.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114259965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Functional/Declarative Dichotomy for Characterizing Simulation Models","authors":"P. Fishwick","doi":"10.1109/AIHAS.1992.636871","DOIUrl":"https://doi.org/10.1109/AIHAS.1992.636871","url":null,"abstract":"Traditional computer simulation terminology includes taxonomic divisions with terms such as “discrete event,” “continuous,” and ‘$‘process oriented.” Even though such terms have become familiar to simulation researchers, the terminology is distinct from other disciplines -such as artificial intelligence and software enganeering- which have similar goals to our own relating specifically to modelling dynamic systems. We present a perspective that serves to characterize simulation models in terms of their procedural versus declarative orientations. In teaching simulation students using this perspective, we have had success in relating the field of modelling within computer simulation to other sub-disciplines within computer science.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133108582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Modular Agent/deliverable Modality for Mobile Robot Development","authors":"R. Albrecht","doi":"10.1109/AIHAS.1992.636893","DOIUrl":"https://doi.org/10.1109/AIHAS.1992.636893","url":null,"abstract":"An agent/deliverable modality has been formulated for the purpose of enabling the development of mobile robots in an environment where the personnel consists of a large number of neophytes (undergraduate and beginning graduate students) together with a small number of dedicated but transient researchers (MS and PhD) thesis students) and a still smaller number of perananent researchers (faculty). The objective is to create a structured work environment in which projects that are implemented by neophytes can be easily and systematically integrated into an ongoing broader program of mobile robot development. In addition, the environment must provide a learning experience for the neophytes.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130855954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Representing the Plan Monitoring Needs and Resources of Robotic Systems","authors":"M. Schoppers","doi":"10.1109/AIHAS.1992.636884","DOIUrl":"https://doi.org/10.1109/AIHAS.1992.636884","url":null,"abstract":"Intelligent robotic systems must obtain information about the state of their environment; that is “sensing to support acting”. Conversely, the flexible use of a sensor may require set-up activity, or “acting t o support sensing”. This paper shows how modal logics of knowledge and belzef extend the expressiveness of declarative domain models, and support automated reasoning about the needs, capabilities, and interactions of sensor and eflector activities. This work is distinguished from previous work an symbolic AI planning b y I ) representing sensing actions that might unpredictably find a given environmental condition to be true or false; 2) using such sensing actions without distinguishing, at planning time, the outcomes possible at execution time (thus containing plan size); and 3) providing for the planning of activities based solely on expectations (e.g. when sensor information is unavailable). The representatzon has been used to synthesize a control and coordznation plan for ihe distributed subsystems of t h e space-faring NASA EVA Retriever robot. 1 Planning for plan monitoring The majority of previous AI approaches to plan execution have separated plan monitoring from plan coiistruction: the planner reasons as if actions are guaranteed to have their desired effects; decisions about how to monitor what, conditions are made after the plan has been completed. This means tha t the monitoring must be entirely passive, for as soon as an attempt to observe soinetahirig changes the state of the world or robot, there is a high probability t,hat the p1a.n itself has been invalidated. 
Given that sensing is necessary and that the flexible use of a sensor will require effector activity (such as moving the sensor platform relative to the robot body, or moving the whole robot body), the sensory activities and their supporting effector activities had better become part of the plan, and both had better be represented in such a way that their effects can be reasoned about during plan construction. The notation I present in this paper solves that problem by retaining the usual operator notation for actions, while using a modal logic of knowledge and belief to enlarge the domain description vocabulary wherever it is used, whether in preconditions, postconditions, domain axioms or inference rules. Although there have been several planners that used modal logic to reason about the knowledge requirements and knowledge effects of actions, none of those planners used knowledge-generating actions to check whether other actions had worked as desired. 
When Moore [9, pp. 121ff] had a safe-opening action produce the knowledge that the safe was open, he did it not by performing a sensing action to find out at execution time whether the safe was open or closed, but by showing that if the safe were assumed open (or closed) then, after simulating the safe-opening action, the planner (not the executor) would “know” that the safe was open","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114274326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surface Following and Modelling for Planar N-Link Manipulator Arms Equipped with Proximity Sensors","authors":"C. Pudney","doi":"10.1109/AIHAS.1992.636862","DOIUrl":"https://doi.org/10.1109/AIHAS.1992.636862","url":null,"abstract":"This paper presents a surface following algorithm and a surface modelling technique for planar n-link articulated manipulator arms equipped with proximity sensors. The surface following algorithm moves the arm along the surface of an object i n such a way that the arm remains within sensing range of the surface but does not collide with it. The surface model is constructed iteratively f rom sensor data obtained during this motion, and is used by the surface following algorithm t o compute the surface following arm motions.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115663210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reactive Planning with Uncertainty of a Plan","authors":"Seiji Yamada","doi":"10.1109/AIHAS.1992.636887","DOIUrl":"https://doi.org/10.1109/AIHAS.1992.636887","url":null,"abstract":"An autonomous agent in the real world needs to do planning depending on the dynamic world. For making a planning system adaptive to the dynamic world, various methods on reactive planning have been proposed. Interleaving planning with execution is a general and natural approach. However, it has a significant problem: when planning should be switched to execution, and few solutions has been proposed. In this paper, we propose a theoretical framework in which the switching timing is determined with the success probability. The success probability represents uncertainty in the execution of a plan, and depends on a change of the real world. Furthermore, since the success probability is obtained based on causality between literals, it is smciently precise. In our method, planning is switched to executionlobservation only when the success probability decreases less than thresholds. 
Since the success probability is used as an evaluation function for searching for a plan, we present a new interleaved planning algorithm based on the success probability, and obtain experimental results that differ from those of conventional planning.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126562427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flexible Task-Specific Control Using Active Vision","authors":"R. Firby, M. Swain","doi":"10.1109/AIHAS.1992.636877","DOIUrl":"https://doi.org/10.1109/AIHAS.1992.636877","url":null,"abstract":"This paper is about the interface between continuous and discrete robot control. We advocate encapsulating continuous actions and their related sensing strategies into behaviors called situation specific activities, which can be constructed by a symbolic reactive planner. Task- specific, real-time perception is a fundamental part of these activities. While researchers have successfully used primitive touch and sonar sensors in such situations, it is more problematic to achieve reasonable performance with complex signals such as those from a video camera. Active vision routines are suggested as a means of incorporating visual data into real time control and as one mechanism for designating aspects of the world in an indexical-functional manner. Active vision routines are a particularly flexible sensing methodology because different routines extract different functional attributes from the world using the same sensor. In fact, there will often be different active vision routines for extracting the same functional attribute using different processing techniques. This allows an agent substantial leeway to instantiate its activities in different ways under different circumstances using different active vision routines. We demonstrate the utility of this architecture with an object tracking example. A control system is presented that can be reconfigured by a reactive planner to achieve different tasks. 
We show how this system allows us to build interchangeable tracking activities that use either color histogram or motion based active vision routines.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128911637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}