{"title":"基于人机交互的一次性学习自定义目标识别与分割","authors":"Ping Guo, Lidan Zhang, Lu Cao, Yingzhe Shen, Xuesong Shi, Haibing Ren, Yimin Zhang","doi":"10.1109/ICRA.2019.8793845","DOIUrl":null,"url":null,"abstract":"There are two difficulties to utilize state-of-the-art object recognition/detection/segmentation methods to robotic applications. First, most of the deep learning models heavily depend on large amounts of labeled training data, which are expensive to obtain for each individual application. Second, the object categories must be pre-defined in the dataset, thus not practical to scenarios with varying object categories. To alleviate the reliance on pre-defined big data, this paper proposes a customized object recognition and segmentation method. It aims to recognize and segment any object defined by the user, given only one annotation. There are three steps in the proposed method. First, the user takes an exemplar video of the target object with the robot, defines its name, and mask its boundary on only one frame. Then the robot automatically propagates the annotation through the exemplar video based on a proposed data generation method. In the meantime, a segmentation model continuously updates itself on the generated data. Finally, only a lightweight segmentation net is required at testing stage, to recognize and segment the user-defined object in any scenes.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"15 1","pages":"4356-4361"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Customized Object Recognition and Segmentation by One Shot Learning with Human Robot Interaction\",\"authors\":\"Ping Guo, Lidan Zhang, Lu Cao, Yingzhe Shen, Xuesong Shi, Haibing Ren, Yimin Zhang\",\"doi\":\"10.1109/ICRA.2019.8793845\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There are two difficulties to utilize state-of-the-art object recognition/detection/segmentation methods to robotic applications. First, most of the deep learning models heavily depend on large amounts of labeled training data, which are expensive to obtain for each individual application. Second, the object categories must be pre-defined in the dataset, thus not practical to scenarios with varying object categories. To alleviate the reliance on pre-defined big data, this paper proposes a customized object recognition and segmentation method. It aims to recognize and segment any object defined by the user, given only one annotation. There are three steps in the proposed method. First, the user takes an exemplar video of the target object with the robot, defines its name, and mask its boundary on only one frame. Then the robot automatically propagates the annotation through the exemplar video based on a proposed data generation method. In the meantime, a segmentation model continuously updates itself on the generated data. 
Finally, only a lightweight segmentation net is required at testing stage, to recognize and segment the user-defined object in any scenes.\",\"PeriodicalId\":6730,\"journal\":{\"name\":\"2019 International Conference on Robotics and Automation (ICRA)\",\"volume\":\"15 1\",\"pages\":\"4356-4361\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on Robotics and Automation (ICRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICRA.2019.8793845\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRA.2019.8793845","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Customized Object Recognition and Segmentation by One Shot Learning with Human Robot Interaction
There are two difficulties in applying state-of-the-art object recognition/detection/segmentation methods to robotic applications. First, most deep learning models depend heavily on large amounts of labeled training data, which are expensive to obtain for each individual application. Second, the object categories must be pre-defined in the dataset, which is impractical for scenarios with varying object categories. To alleviate the reliance on pre-defined big data, this paper proposes a customized object recognition and segmentation method that aims to recognize and segment any object defined by the user, given only one annotation. The proposed method has three steps. First, the user takes an exemplar video of the target object with the robot, defines its name, and masks its boundary on only one frame. Then the robot automatically propagates the annotation through the exemplar video using a proposed data generation method; meanwhile, a segmentation model continuously updates itself on the generated data. Finally, only a lightweight segmentation net is required at the testing stage to recognize and segment the user-defined object in any scene.
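The following is a minimal sketch of the three-step pipeline outlined in the abstract. All names and the trivial stand-ins for mask propagation and the lightweight segmentation net are illustrative assumptions, not the authors' actual method or code; the sketch only shows how the single annotation, the generated training data, and the online model update fit together.

```python
"""Illustrative sketch of the three-step pipeline from the abstract.
Every class/function here is a hypothetical stand-in, not the paper's API."""

import numpy as np


def propagate_annotation(frames, first_mask):
    """Step 2 (data generation): propagate the single user-drawn mask
    through the exemplar video. This naive stand-in just reuses the
    previous mask; the paper proposes a dedicated propagation method."""
    masks = [first_mask]
    for _ in frames[1:]:
        masks.append(masks[-1].copy())  # placeholder: no motion model
    return masks


class LightweightSegmenter:
    """Stand-in for the lightweight segmentation net used at test time."""

    def __init__(self):
        self.prototype = None  # mean foreground color learned online

    def update(self, frame, mask):
        # Step 2 (online update): refit on each generated (frame, mask) pair.
        fg = frame[mask.astype(bool)]
        if fg.size:
            self.prototype = fg.mean(axis=0)

    def segment(self, frame):
        # Step 3 (testing): segment the user-defined object in a new scene
        # by thresholding the distance to the learned color prototype.
        dist = np.linalg.norm(frame - self.prototype, axis=-1)
        return dist < 30.0  # threshold chosen arbitrarily for this sketch


if __name__ == "__main__":
    # Step 1: the user records an exemplar video and masks ONE frame.
    video = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(5)]
    user_mask = np.zeros((64, 64), dtype=np.uint8)
    user_mask[20:40, 20:40] = 1  # hand-drawn boundary on frame 0

    model = LightweightSegmenter()
    for frame, mask in zip(video, propagate_annotation(video, user_mask)):
        model.update(frame, mask)

    # Step 3: only the lightweight segmenter is needed on a new scene.
    new_scene = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
    prediction = model.segment(new_scene)
    print("predicted foreground pixels:", int(prediction.sum()))
```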