Visual Search and Recognition for Robot Task Execution and Monitoring
L. Mauro, Francesco Puja, S. Grazioso, Valsamis Ntouskos, Marta Sanzari, Edoardo Alati, L. Freda, F. Pirri
Applications of Intelligent Systems, 2019-02-01. DOI: 10.3233/978-1-61499-929-4-94
Citations: 9
Abstract
Visual search for relevant targets in the environment is a crucial robot skill. We propose a preliminary framework for the execution monitoring of a robot task, which manages the robot's behavior of visually searching the environment for the targets involved in the task. Visual search is also relevant for recovering from failures. The framework exploits deep reinforcement learning to acquire a "common sense" scene structure, and it takes advantage of a deep convolutional network to detect objects and the relevant relations holding between them. Building on these methods, the framework introduces vision-based execution monitoring, which uses classical planning as the backbone for task execution. Experiments show that, with the proposed vision-based execution monitor, the robot can complete simple tasks and recover from failures autonomously.
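To make the architecture described in the abstract concrete, the following is a minimal sketch of how such a vision-based execution-monitor loop could be organized, assuming a classical planner, a convolutional object/relation detector, and a learned visual-search policy are available as components. All class and method names (Planner, Detector, SearchPolicy, Robot, etc.) are hypothetical placeholders for illustration, not the authors' actual API.

```python
# Hypothetical sketch of a vision-based execution monitor: a classical plan is
# executed step by step, each step's targets are located by active visual search,
# and preconditions are verified from detections before acting.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Step:
    action: str                 # e.g. "grasp"
    target: str                 # e.g. "cup"
    preconditions: List[str]    # relations that must hold, e.g. ["on(cup, table)"]


class ExecutionMonitor:
    def __init__(self, planner, detector, search_policy, robot):
        self.planner = planner              # classical planner: goal -> list of Steps
        self.detector = detector            # CNN detector: image -> objects + relations
        self.search_policy = search_policy  # learned policy proposing the next view to inspect
        self.robot = robot                  # robot interface: camera motion, action execution

    def visual_search(self, target: str, max_views: int = 10) -> Optional[dict]:
        """Actively move the camera until the target object is detected."""
        for _ in range(max_views):
            image = self.robot.capture_image()
            detections = self.detector.detect(image)
            if target in detections["objects"]:
                return detections
            # Use the learned "common sense" scene structure to decide where to look next.
            next_view = self.search_policy.propose_view(detections, target)
            self.robot.move_camera(next_view)
        return None

    def run(self, goal: str) -> bool:
        plan = self.planner.plan(goal)
        for step in plan:
            detections = self.visual_search(step.target)
            # Monitoring: verify the step's preconditions against the detected relations.
            if detections is None or not all(
                rel in detections["relations"] for rel in step.preconditions
            ):
                # Failure detected: replan from the currently observed state and retry.
                return self.run(goal)
            self.robot.execute(step.action, step.target)
        return True
```

In this sketch, failure recovery is simply a replanning call triggered when visual search fails or a precondition does not hold in the detected relations; the paper's actual monitor and recovery strategy may differ.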