{"title":"示范在公园散步:用无模型强化学习在20分钟内学会走路","authors":"Laura M. Smith, Ilya Kostrikov, S. Levine","doi":"10.15607/RSS.2023.XIX.056","DOIUrl":null,"url":null,"abstract":"—Deep reinforcement learning is a promising ap- proach to learning policies in unstructured environments. Due to its sample inefficiency, though, deep RL applications have primarily focused on simulated environments. In this work, we demonstrate that the recent advancements in machine learning algorithms and libraries combined with careful MDP formulation lead to learning quadruped locomotion in only 20 minutes in the real world. We evaluate our approach on several indoor and outdoor terrains that are known to be challenging for classical, model-based controllers and observe that the robot consistently learns a walking gait on all of these terrains. Finally, we evaluate our design decisions in a simulated environment. We provide videos of all real-world training and code to reproduce our results on our website: https://sites.google.com/berkeley.","PeriodicalId":248720,"journal":{"name":"Robotics: Science and Systems XIX","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"26","resultStr":"{\"title\":\"Demonstrating A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning\",\"authors\":\"Laura M. Smith, Ilya Kostrikov, S. Levine\",\"doi\":\"10.15607/RSS.2023.XIX.056\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"—Deep reinforcement learning is a promising ap- proach to learning policies in unstructured environments. Due to its sample inefficiency, though, deep RL applications have primarily focused on simulated environments. In this work, we demonstrate that the recent advancements in machine learning algorithms and libraries combined with careful MDP formulation lead to learning quadruped locomotion in only 20 minutes in the real world. We evaluate our approach on several indoor and outdoor terrains that are known to be challenging for classical, model-based controllers and observe that the robot consistently learns a walking gait on all of these terrains. Finally, we evaluate our design decisions in a simulated environment. We provide videos of all real-world training and code to reproduce our results on our website: https://sites.google.com/berkeley.\",\"PeriodicalId\":248720,\"journal\":{\"name\":\"Robotics: Science and Systems XIX\",\"volume\":\"48 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"26\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Robotics: Science and Systems XIX\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.15607/RSS.2023.XIX.056\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotics: Science and Systems XIX","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15607/RSS.2023.XIX.056","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Demonstrating A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning
Abstract: Deep reinforcement learning is a promising approach to learning policies in unstructured environments. Due to its sample inefficiency, however, deep RL applications have primarily focused on simulated environments. In this work, we demonstrate that recent advances in machine learning algorithms and libraries, combined with careful MDP formulation, make it possible to learn quadrupedal locomotion in only 20 minutes in the real world. We evaluate our approach on several indoor and outdoor terrains that are known to be challenging for classical, model-based controllers, and observe that the robot consistently learns a walking gait on all of them. Finally, we evaluate our design decisions in a simulated environment. We provide videos of all real-world training, along with code to reproduce our results, on our website: https://sites.google.com/berkeley.
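As a rough illustration of the model-free, off-policy training setup the abstract describes, the sketch below interleaves real-world data collection with many gradient updates per collected transition, one common way such methods reduce the amount of real-world data they need. The environment and agent interfaces (`env.step`, `agent.update`, etc.), the hyperparameters, and the update-to-data ratio are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity store of (obs, action, reward, next_obs, done) tuples."""

    def __init__(self, capacity=100_000):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size):
        return random.sample(self.storage, batch_size)


def train(env, agent, total_steps=20_000, start_steps=1_000,
          updates_per_step=20, batch_size=256):
    """Collect one environment step at a time, then run several off-policy
    gradient updates on replayed data (a high update-to-data ratio)."""
    buffer = ReplayBuffer()
    obs = env.reset()
    for step in range(total_steps):
        # Random actions early on to seed the buffer, then the learned policy.
        action = env.sample_action() if step < start_steps else agent.act(obs)
        next_obs, reward, done = env.step(action)
        buffer.add((obs, action, reward, next_obs, done))
        obs = env.reset() if done else next_obs

        # Several updates per real-world step: placeholder ratio, chosen
        # here only to illustrate the sample-efficiency idea.
        if len(buffer.storage) >= batch_size:
            for _ in range(updates_per_step):
                agent.update(buffer.sample(batch_size))
```

This skeleton only shows the structure of the loop; the learning itself lives in `agent.update`, which for an off-policy actor-critic method would perform critic and policy gradient steps on the sampled batch.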