Title: Enhancing Feature Selection in Single Shot Robot Learning by Using Multi-Modal Inputs
Author: Christian Groth
DOI: 10.1109/AI4I51902.2021.00010
Published in: 2021 4th International Conference on Artificial Intelligence for Industries (AI4I), September 2021
Citations: 0
Abstract
To make robots accessible to a wide range of users, there needs to be an easy and intuitive way to program them. This issue is addressed by the robot programming-by-demonstration (or imitation learning) paradigm, in which the user demonstrates the task to the robot via teleoperation. Although single-shot approaches could save considerable time and effort, they remain a niche due to drawbacks such as ambiguity in selecting the relevant features. In this work we enhance a single-shot programming-by-demonstration approach at the sub-symbolic level by extending it to multi-modal input. While most approaches focus mainly on trajectories and the visual detection of objects, we combine speech and kinesthetic teaching in order to resolve ambiguities and to raise the level of transferred information.
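The core idea of the abstract — using a second modality (speech) to resolve ambiguity in which visually detected object the demonstration refers to — can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's method; the object representation and the token-matching scheme are illustrative assumptions.

```python
# Hypothetical sketch: fusing a speech cue with visually detected objects
# to resolve feature-selection ambiguity in a single demonstration.
# Object structure and matching scheme are illustrative, not from the paper.

def resolve_target(detected_objects, speech_tokens):
    """Return the detected object best matching the spoken instruction.

    detected_objects: list of dicts with attribute values (e.g. name, color)
    speech_tokens: lowercase words from the user's spoken instruction
    """
    tokens = set(speech_tokens)

    def score(obj):
        # Count how many of the object's attributes were mentioned in speech.
        return sum(1 for value in obj.values() if value in tokens)

    best = max(detected_objects, key=score)
    return best if score(best) > 0 else None


objects = [
    {"name": "cup", "color": "red"},
    {"name": "cup", "color": "blue"},
    {"name": "box", "color": "red"},
]

# Vision alone sees two cups -> ambiguous; the speech cue disambiguates.
target = resolve_target(objects, ["pick", "up", "the", "red", "cup"])
print(target)  # the red cup scores highest (name and color both match)
```

In a full system the speech channel would of course be transcribed and parsed rather than pre-tokenized, but the sketch shows how even a simple cross-modal match can prune candidates that trajectory and vision data alone cannot distinguish.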