Perceptive Pedipulation with Local Obstacle Avoidance
Jonas Stolle, Philip Arm, Mayank Mittal, Marco Hutter
arXiv:2409.07195 (arXiv - CS - Robotics), published 2024-09-11
Abstract
Pedipulation leverages the feet of legged robots for mobile manipulation, eliminating the need for dedicated robotic arms. While previous works have showcased blind and task-specific pedipulation skills, they fail to account for static and dynamic obstacles in the environment. To address this limitation, we introduce a reinforcement learning-based approach to train a whole-body obstacle-aware policy that tracks foot position commands while simultaneously avoiding obstacles. Despite training the policy in only five different static scenarios in simulation, we show that it generalizes to unknown environments with different numbers and types of obstacles. We analyze the performance of our method through a set of simulation experiments and successfully deploy the learned policy on the ANYmal quadruped, demonstrating its capability to follow foot commands while navigating around static and dynamic obstacles.
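
The abstract describes a policy rewarded for tracking foot position commands while avoiding obstacles. The sketch below is a minimal, hypothetical illustration of how such a combined reward could look; the exponential tracking kernel, safety margin, and weights are illustrative assumptions, not the paper's actual reward formulation.

```python
import numpy as np

def pedipulation_reward(foot_pos, foot_target, obstacle_dists,
                        sigma=0.25, safety_margin=0.15,
                        w_track=1.0, w_avoid=0.5):
    """Hypothetical reward mixing foot-position tracking and obstacle avoidance.

    foot_pos, foot_target: 3D positions (m) of the commanded foot and its target.
    obstacle_dists: distances (m) from body points to the nearest obstacle,
        e.g. derived from exteroceptive observations (assumed here).
    """
    # Tracking term: exponential kernel on the foot-to-target distance.
    track_err = np.linalg.norm(foot_target - foot_pos)
    r_track = np.exp(-(track_err ** 2) / sigma ** 2)

    # Avoidance term: penalize any body point closer than the safety margin.
    penetration = np.clip(safety_margin - np.asarray(obstacle_dists), 0.0, None)
    r_avoid = -np.sum(penetration)

    return w_track * r_track + w_avoid * r_avoid


# Example: foot 0.3 m from its target, one obstacle slightly inside the margin.
print(pedipulation_reward(
    foot_pos=np.array([0.5, 0.0, 0.4]),
    foot_target=np.array([0.8, 0.0, 0.4]),
    obstacle_dists=[0.10, 0.40, 0.85],
))
```

In this kind of formulation the weights trade off command tracking against clearance, which matches the abstract's description of tracking foot commands "while simultaneously avoiding obstacles"; the specific terms used in the paper may differ.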