Cutting Events: Towards Autonomous Plan Adaption by Robotic Agents through Image-Schematic Event Segmentation

Kaviya Dhanabalachandran, Vanessa Hassouna, Maria M. Hedblom, Michaela Küempel, Nils Leusmann, M. Beetz

Proceedings of the 11th Knowledge Capture Conference (K-CAP '21), December 2, 2021. DOI: 10.1145/3460210.3493585

Abstract: Autonomous robots struggle with plan adaptation in uncertain and changing environments. Although modern robots can make popcorn and pancakes, they cannot perform such tasks in unknown settings, nor can they adapt their action plans when ingredients or tools are missing. Humans are continuously aware of their surroundings; for robotic agents, however, real-time state updating is time-consuming, so other methods for failure handling are required. Taking inspiration from human cognition, we propose a plan-adaptation method based on event segmentation of the image-schematic states of subtasks within action descriptors. For this, we reuse action plans from the robotic architecture CRAM and ontologically model the involved objects and image-schematic states of the action descriptor cutting. Our evaluation uses a robot simulation of the task of cutting bread and demonstrates that the system can reason about possible solutions to unexpected failures regarding tool use.
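The abstract's core idea, segmenting an action descriptor into subtasks characterized by image-schematic goal states and substituting tools via an ontology when the planned tool is missing, can be illustrated with a minimal sketch. Everything below is a hypothetical toy (the tool ontology, the `Subtask` class, the schema names, and `adapt_plan`); it is not the paper's actual CRAM or ontology implementation.

```python
# Hypothetical sketch of image-schematic plan adaptation for a cutting task.
# The tool "ontology", subtask segmentation, and binding logic are illustrative
# assumptions, not the CRAM system described in the paper.
from dataclasses import dataclass
from typing import Optional

# Toy ontology: tools annotated with the affordances they provide.
TOOL_ONTOLOGY = {
    "bread_knife": {"cutting"},
    "kitchen_knife": {"cutting"},
    "spatula": {"flipping"},
}

@dataclass
class Subtask:
    name: str
    # Image-schematic state the subtask must establish (e.g. CONTACT, SPLITTING).
    goal_schema: str
    required_affordance: Optional[str] = None

def segment_cutting_plan() -> list:
    """Segment 'cutting bread' into subtasks by their image-schematic goal states."""
    return [
        Subtask("approach", "SOURCE_PATH_GOAL"),
        Subtask("position_tool", "CONTACT", required_affordance="cutting"),
        Subtask("slice", "SPLITTING", required_affordance="cutting"),
    ]

def adapt_plan(plan, available_tools):
    """Bind each affordance-requiring subtask to an available tool that
    provides that affordance; fail if no tool in the ontology can substitute."""
    bindings = {}
    for task in plan:
        if task.required_affordance is None:
            continue
        candidates = [t for t in available_tools
                      if task.required_affordance in TOOL_ONTOLOGY.get(t, set())]
        if not candidates:
            raise RuntimeError(f"no available tool affords {task.required_affordance}")
        bindings[task.name] = candidates[0]
    return bindings

# Example failure handling: the bread knife is missing, so the ontology
# lets the planner substitute the kitchen knife for both cutting subtasks.
bindings = adapt_plan(segment_cutting_plan(), ["spatula", "kitchen_knife"])
```

In this toy version, plan adaptation reduces to re-binding tools by shared affordance rather than replanning from scratch, which mirrors (in a greatly simplified way) the abstract's claim that the system can reason about substitutes when tool use fails unexpectedly.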