Deep Learning of Augmented Reality based Human Interactions for Automating a Robot Team

Adhitha Dias, Hasitha Wellaboda, Yasod Rasanka, M. Munasinghe, R. Rodrigo, P. Jayasekara

2020 6th International Conference on Control, Automation and Robotics (ICCAR), April 2020. DOI: 10.1109/ICCAR49639.2020.9108004
Getting a team of robots to accomplish a relatively complex task through manual manipulation in augmented reality (AR) is interesting in its own right; however, the true potential of such an approach emerges when the system can learn from humans. We propose a system in which a team of robots performs a previously unseen variant of a task by learning, with deep learning (DL), from the action sequences of multiple humans who carry out the task in different ways. Training input can be provided either by actually manipulating the robot team through an AR tablet or through a simulator. Results indicate that the system fulfills the specified task variant more than 80% of the time, with the remaining failures mainly owing to unrealistic task specifications. This opens up an avenue for training a team of robots rather than hand-crafting a rule base.
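As a rough illustration of the learning setup the abstract describes (not the authors' implementation), the sketch below trains a small sequence model on action sequences demonstrated by several humans performing the same task in different orders. The action vocabulary, the LSTM architecture, and the toy demonstration data are all assumptions made for this example; the paper does not specify them here.

# Minimal sketch (assumptions throughout): learn a task from demonstrated
# action sequences with a next-action prediction model, in the spirit of the
# abstract. Action names, architecture, and data are hypothetical.
import torch
import torch.nn as nn

# Hypothetical discrete action vocabulary for a small robot team.
ACTIONS = ["r1_pick", "r1_place", "r2_pick", "r2_place", "r1_move", "done"]
PAD = len(ACTIONS)  # padding index for variable-length demonstrations

class DemoSequenceModel(nn.Module):
    """Predicts the next team action from the actions demonstrated so far."""
    def __init__(self, n_actions: int, embed_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_actions + 1, embed_dim, padding_idx=n_actions)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time) action indices -> (batch, time, n_actions) logits
        h, _ = self.lstm(self.embed(seq))
        return self.head(h)

# Toy demonstrations of the same task done in different orders
# (stand-ins for AR-tablet or simulator logs).
demos = [
    [0, 1, 2, 3, 5],        # human A: robot 1 first, then robot 2
    [2, 3, 0, 1, 5],        # human B: robot 2 first, then robot 1
    [0, 2, 1, 3, 4, 5],     # human C: interleaved, with an extra move
]
max_len = max(len(d) for d in demos)
batch = torch.full((len(demos), max_len), PAD, dtype=torch.long)
for i, d in enumerate(demos):
    batch[i, :len(d)] = torch.tensor(d)

model = DemoSequenceModel(len(ACTIONS))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

for _ in range(200):
    logits = model(batch[:, :-1])                       # predict action t+1 from prefix
    loss = loss_fn(logits.reshape(-1, len(ACTIONS)), batch[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

At run time, a model of this kind could be rolled out autoregressively to propose the next action for the robot team; in the described system, the demonstration logs would come from the AR tablet or the simulator.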