{"title":"Generalized Robot Learning Framework","authors":"Jiahuan Yan, Zhouyang Hong, Yu Zhao, Yu Tian, Yunxin Liu, Travis Davies, Luhui Hu","doi":"arxiv-2409.12061","DOIUrl":null,"url":null,"abstract":"Imitation based robot learning has recently gained significant attention in\nthe robotics field due to its theoretical potential for transferability and\ngeneralizability. However, it remains notoriously costly, both in terms of\nhardware and data collection, and deploying it in real-world environments\ndemands meticulous setup of robots and precise experimental conditions. In this\npaper, we present a low-cost robot learning framework that is both easily\nreproducible and transferable to various robots and environments. We\ndemonstrate that deployable imitation learning can be successfully applied even\nto industrial-grade robots, not just expensive collaborative robotic arms.\nFurthermore, our results show that multi-task robot learning is achievable with\nsimple network architectures and fewer demonstrations than previously thought\nnecessary. As the current evaluating method is almost subjective when it comes\nto real-world manipulation tasks, we propose Voting Positive Rate (VPR) - a\nnovel evaluation strategy that provides a more objective assessment of\nperformance. We conduct an extensive comparison of success rates across various\nself-designed tasks to validate our approach. 
To foster collaboration and\nsupport the robot learning community, we have open-sourced all relevant\ndatasets and model checkpoints, available at huggingface.co/ZhiChengAI.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.12061","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Imitation-based robot learning has recently gained significant attention in the robotics field due to its theoretical potential for transferability and generalizability. However, it remains notoriously costly in terms of both hardware and data collection, and deploying it in real-world environments demands meticulous robot setup and precise experimental conditions. In this paper, we present a low-cost robot learning framework that is both easily reproducible and transferable to various robots and environments. We demonstrate that deployable imitation learning can be successfully applied even to industrial-grade robots, not just expensive collaborative robotic arms. Furthermore, our results show that multi-task robot learning is achievable with simple network architectures and fewer demonstrations than previously thought necessary. Because the current evaluation of real-world manipulation tasks is largely subjective, we propose the Voting Positive Rate (VPR), a novel evaluation strategy that provides a more objective assessment of performance. We conduct an extensive comparison of success rates across various self-designed tasks to validate our approach. To foster collaboration and support the robot learning community, we have open-sourced all relevant datasets and model checkpoints, available at huggingface.co/ZhiChengAI.
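The abstract names the Voting Positive Rate (VPR) but does not define it; the following is a hypothetical sketch of how a voting-based positive rate could be computed, assuming each task rollout is judged by several independent human voters who each cast a positive or negative vote, with VPR taken as the fraction of positive votes. The function name and vote format are illustrative assumptions, not the paper's definition.

```python
def voting_positive_rate(votes_per_rollout):
    """Hypothetical VPR-style metric: fraction of positive votes.

    votes_per_rollout: list of per-rollout vote lists, where each vote
    is True (judged success) or False (judged failure).
    """
    # Flatten all voters' votes across all rollouts.
    all_votes = [v for rollout in votes_per_rollout for v in rollout]
    if not all_votes:
        return 0.0
    # Aggregate over every individual vote rather than per-rollout
    # majorities, so disagreement between voters is reflected.
    return sum(all_votes) / len(all_votes)


# Example: 3 rollouts, each judged by 3 voters.
votes = [
    [True, True, False],
    [True, True, True],
    [False, False, True],
]
print(voting_positive_rate(votes))  # 6 positive votes out of 9
```

Aggregating over individual votes (rather than taking a per-rollout majority first) would smooth out borderline trials where annotators disagree, which is one plausible way such a metric could reduce subjectivity.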