{"title":"Flexible Task-Specific Control Using Active Vision","authors":"R. Firby, M. Swain","doi":"10.1109/AIHAS.1992.636877","DOIUrl":null,"url":null,"abstract":"This paper is about the interface between continuous and discrete robot control. We advocate encapsulating continuous actions and their related sensing strategies into behaviors called situation specific activities, which can be constructed by a symbolic reactive planner. Task- specific, real-time perception is a fundamental part of these activities. While researchers have successfully used primitive touch and sonar sensors in such situations, it is more problematic to achieve reasonable performance with complex signals such as those from a video camera. Active vision routines are suggested as a means of incorporating visual data into real time control and as one mechanism for designating aspects of the world in an indexical-functional manner. Active vision routines are a particularly flexible sensing methodology because different routines extract different functional attributes from the world using the same sensor. In fact, there will often be different active vision routines for extracting the same functional attribute using different processing techniques. This allows an agent substantial leeway to instantiate its activities in different ways under different circumstances using different active vision routines. We demonstrate the utility of this architecture with an object tracking example. A control system is presented that can be reconfigured by a reactive planner to achieve different tasks. We show how this system allows us to build interchangeable tracking activities that use either color histogram or motion based active vision routines.","PeriodicalId":442147,"journal":{"name":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIHAS.1992.636877","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper is about the interface between continuous and discrete robot control. We advocate encapsulating continuous actions and their related sensing strategies into behaviors called situation-specific activities, which can be constructed by a symbolic reactive planner. Task-specific, real-time perception is a fundamental part of these activities. While researchers have successfully used primitive touch and sonar sensors in such situations, it is more problematic to achieve reasonable performance with complex signals such as those from a video camera. Active vision routines are suggested as a means of incorporating visual data into real-time control and as one mechanism for designating aspects of the world in an indexical-functional manner. Active vision routines are a particularly flexible sensing methodology because different routines extract different functional attributes from the world using the same sensor. In fact, there will often be different active vision routines for extracting the same functional attribute using different processing techniques. This gives an agent substantial leeway to instantiate its activities in different ways under different circumstances using different active vision routines. We demonstrate the utility of this architecture with an object tracking example. A control system is presented that can be reconfigured by a reactive planner to achieve different tasks. We show how this system allows us to build interchangeable tracking activities that use either color-histogram-based or motion-based active vision routines.
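To make the architectural idea concrete, the following is a minimal sketch (not the paper's code; all names are hypothetical) of interchangeable active vision routines that report the same functional attribute, the image location of a designated target, through a common interface, and of a toy planner-style function that picks a routine to instantiate a tracking activity under different circumstances:

```python
# Illustrative sketch only: two interchangeable active vision routines that
# extract the same functional attribute (target location in the image) using
# different processing techniques, plus a toy stand-in for the reactive
# planner's choice of which routine to use. All names are hypothetical.

from abc import ABC, abstractmethod
from typing import Optional, Tuple

Frame = list  # placeholder for an image frame; a real system would use image arrays


class ActiveVisionRoutine(ABC):
    """Common interface: every routine reports the same functional attribute."""

    @abstractmethod
    def target_location(self, frame: Frame) -> Optional[Tuple[int, int]]:
        """Return (row, col) of the designated target, or None if it is lost."""


class ColorHistogramRoutine(ActiveVisionRoutine):
    """Stand-in for a color-histogram-based tracker."""

    def __init__(self, reference_histogram):
        self.reference_histogram = reference_histogram

    def target_location(self, frame):
        # A real implementation would backproject the reference histogram
        # onto the frame and return the peak response; stubbed out here.
        return (0, 0)


class MotionRoutine(ActiveVisionRoutine):
    """Stand-in for a motion-based (frame-differencing) tracker."""

    def __init__(self):
        self.previous_frame: Optional[Frame] = None

    def target_location(self, frame):
        # A real implementation would difference consecutive frames and
        # return the centroid of the moving region; stubbed out here.
        if self.previous_frame is None:
            self.previous_frame = frame
            return None
        self.previous_frame = frame
        return (0, 0)


class TrackingActivity:
    """A situation-specific activity parameterized by any vision routine."""

    def __init__(self, routine: ActiveVisionRoutine):
        self.routine = routine

    def step(self, frame: Frame):
        location = self.routine.target_location(frame)
        # Continuous control would use `location` to servo the camera or robot.
        return location


def choose_routine(target_is_moving: bool, target_color_known: bool) -> ActiveVisionRoutine:
    """Toy stand-in for the reactive planner's reconfiguration decision."""
    if target_color_known:
        return ColorHistogramRoutine(reference_histogram=[0.0] * 16)  # placeholder bins
    if target_is_moving:
        return MotionRoutine()
    raise ValueError("no applicable routine for these circumstances")


# Example: the same activity is instantiated with different routines
# depending on what the planner knows about the current situation.
activity = TrackingActivity(choose_routine(target_is_moving=True, target_color_known=False))
```

The point of the sketch is the interchangeability: because both routines expose the same functional attribute, the planner can swap one for the other without changing the tracking activity itself.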