Science Robotics. Pub Date: 2024-04-17. DOI: 10.1126/scirobotics.adi9762
Tony G. Chen, Stephanie Newdick, Julia Di, Carlo Bosio, Nitin Ongole, Mathieu Lapôtre, Marco Pavone, Mark R. Cutkosky
Locomotion as manipulation with ReachBot
Caves and lava tubes on the Moon and Mars are sites of geological and astrobiological interest but consist of terrain that is inaccessible with traditional robot locomotion. To support the exploration of these sites, we present ReachBot, a robot that uses extendable booms as appendages to manipulate itself with respect to irregular rock surfaces. The booms terminate in grippers equipped with microspines and provide ReachBot with a large workspace, allowing it to achieve force closure in enclosed spaces, such as the walls of a lava tube. To propel ReachBot, we present a contact-before-motion planner for nongaited legged locomotion that uses internal force control, similar to a multifingered hand, to keep its long, slender booms in tension. Motion planning also depends on finding and executing secure grips on rock features. We used a Monte Carlo simulation to inform gripper design and predict grasp strength and variability. In addition, we used a two-step perception system to identify possible grasp locations. To validate our approach and mechanisms under realistic conditions, we deployed a single ReachBot arm and gripper in a lava tube in the Mojave Desert. The field test confirmed that ReachBot can find many targets for secure grasps with the proposed kinematic design.
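The Monte Carlo idea in the ReachBot abstract can be illustrated with a minimal sketch: sample many randomized grasp attempts and report the mean and spread of the resulting grasp strength. All numbers here (spine count, engagement probability, per-spine force) are hypothetical placeholders, not values from the paper.

```python
import random
import statistics

def sample_grasp_force(rng, n_spines=24, engage_prob=0.6, force_per_spine=5.0):
    # Each microspine independently catches a rock asperity with some
    # probability; total grasp strength is the sum over engaged spines.
    engaged = sum(1 for _ in range(n_spines) if rng.random() < engage_prob)
    return engaged * force_per_spine

def monte_carlo_grasp(trials=10_000, seed=0):
    # Repeated random trials give both an expected strength and its
    # variability, which is what informs gripper design margins.
    rng = random.Random(seed)
    forces = [sample_grasp_force(rng) for _ in range(trials)]
    return statistics.mean(forces), statistics.stdev(forces)

mean_f, std_f = monte_carlo_grasp()
print(f"predicted grasp strength: {mean_f:.1f} N, spread: {std_f:.1f} N")
```

With these toy parameters the expected strength is 24 × 0.6 × 5 = 72 N; the spread quantifies how unreliable any single grasp may be.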
Science Robotics. Pub Date: 2024-04-17. DOI: 10.1126/scirobotics.adi9579
Ilija Radosavovic, Tete Xiao, Bike Zhang, Trevor Darrell, Jitendra Malik, Koushil Sreenath
Real-world humanoid locomotion with reinforcement learning
Humanoid robots that can autonomously operate in diverse environments have the potential to help address labor shortages in factories, assist the elderly at home, and colonize new planets. Although classical controllers for humanoid robots have shown impressive results in a number of settings, they are challenging to generalize and adapt to new environments. Here, we present a fully learning-based approach for real-world humanoid locomotion. Our controller is a causal transformer that takes the history of proprioceptive observations and actions as input and predicts the next action. We hypothesized that the observation-action history contains useful information about the world that a powerful transformer model can use to adapt its behavior in context, without updating its weights. We trained our model with large-scale model-free reinforcement learning on an ensemble of randomized environments in simulation and deployed it to the real world zero-shot. Our controller could walk over various outdoor terrains, was robust to external disturbances, and could adapt in context.
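The key architectural idea in the abstract above, a policy that conditions on a rolling window of past observations and actions rather than a single state, can be sketched with a toy stand-in. The "model" here is a placeholder average, not the paper's causal transformer; the class name and dimensions are invented for illustration.

```python
from collections import deque

class HistoryPolicy:
    """Toy stand-in for a history-conditioned controller: it keeps a
    rolling window of past (observation, action) pairs and emits the
    next action from that context, so adaptation can happen in context
    without weight updates."""

    def __init__(self, context_len=16, act_dim=2):
        self.history = deque(maxlen=context_len)  # drops oldest pairs
        self.act_dim = act_dim

    def act(self, obs):
        # A real controller would run a causal transformer over the
        # history; this placeholder just averages scalar observations.
        context = [past_obs for past_obs, _ in self.history] + [obs]
        obs_mean = sum(context) / len(context)
        action = [obs_mean] * self.act_dim
        self.history.append((obs, action))
        return action

policy = HistoryPolicy()
a1 = policy.act(1.0)  # context is [1.0], so action is [1.0, 1.0]
a2 = policy.act(3.0)  # context is [1.0, 3.0], so action is [2.0, 2.0]
```

The bounded deque is what makes the behavior "in context": once the window fills, old experience is forgotten and the policy tracks the recent regime.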
Science Robotics. Pub Date: 2024-04-17. DOI: 10.1126/scirobotics.adp7392
Melisa Yashinski
Using machine learning and robotics to discover plastic substitutes
Discovery of all-natural thin films with tunable properties was partially automated using robotics and machine learning.
Science Robotics. Pub Date: 2024-04-10. DOI: 10.1126/scirobotics.adi8912
Songnan Bai, Qiqi Pan, Runze Ding, Huaiyuan Jia, Zhengbao Yang, Pakpong Chirarattananon
An agile monopedal hopping quadcopter with synergistic hybrid locomotion
Nature abounds with examples of superior mobility through the fusion of aerial and ground movement. Drawing inspiration from such multimodal locomotion, we introduce a high-performance hybrid hopping and flying robot. The proposed robot seamlessly integrates a nano quadcopter with a passive telescopic leg, overcoming limitations of previous jumping mechanisms that rely on stance phase leg actuation. Based on the identified dynamics, a thrust-based control method and detachable active aerodynamic surfaces were devised for the robot to perform continuous jumps with and without position feedback. This unique design and actuation strategy enable tuning of jump height and reduced stance phase duration, leading to agile hopping locomotion. The robot recorded an average vertical hopping speed of 2.38 meters per second at a jump height of 1.63 meters. By harnessing multimodal locomotion, the robot is capable of intermittent midflight jumps that result in substantial instantaneous accelerations and rapid changes in flight direction, offering enhanced agility and versatility in complex environments. The passive leg design holds potential for direct integration with conventional rotorcraft, unlocking seamless hybrid hopping and flying locomotion.
Science Robotics. Pub Date: 2024-04-10. DOI: 10.1126/scirobotics.adi8022
Tuomas Haarnoja, Ben Moran, Guy Lever, Sandy H. Huang, Dhruva Tirumala, Jan Humplik, Markus Wulfmeier, Saran Tunyasuvunakool, Noah Y. Siegel, Roland Hafner, Michael Bloesch, Kristian Hartikainen, Arunkumar Byravan, Leonard Hasenclever, Yuval Tassa, Fereshteh Sadeghi, Nathan Batchelor, Federico Casarini, Stefano Saliceti, Charles Game, Neil Sreendra, Kushal Patel, Marlon Gwira, Andrea Huber, Nicole Hurley, Francesco Nori, Raia Hadsell, Nicolas Heess
Learning agile soccer skills for a bipedal robot with deep reinforcement learning
We investigated whether deep reinforcement learning (deep RL) is able to synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot that can be composed into complex behavioral strategies. We used deep RL to train a humanoid robot to play a simplified one-versus-one soccer game. The resulting agent exhibits robust and dynamic movement skills, such as rapid fall recovery, walking, turning, and kicking, and it transitions between them in a smooth and efficient manner. It also learned to anticipate ball movements and block opponent shots. The agent's tactical behavior adapts to specific game contexts in a way that would be impractical to manually design. Our agent was trained in simulation and transferred to real robots zero-shot. A combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training enabled good-quality transfer. In experiments, the agent walked 181% faster, turned 302% faster, took 63% less time to get up, and kicked a ball 34% faster than a scripted baseline.
Science Robotics. Pub Date: 2024-03-27. DOI: 10.1126/scirobotics.adp3707
Amos Matsiko
Restoration of motor function using magnetoelectric metamaterials
Implantable magnetic materials can be used for wireless neural stimulation and restoration of motor function.
Science Robotics. Pub Date: 2024-03-27. DOI: 10.1126/scirobotics.ado5755
Rachael E. Jack
Teaching robots the art of human social synchrony
Humanoid robots can now learn the art of social synchrony using neural networks.
Science Robotics. Pub Date: 2024-03-27. DOI: 10.1126/scirobotics.ado7982
Robin R. Murphy
A fictional history of robotics features forgotten real-world robots
The science-fiction movie "The Creator" uses six real-world robots from the 1950s and 1960s to show progress in AI.
Science Robotics. Pub Date: 2024-03-27. DOI: 10.1126/scirobotics.adi4724
Yuhang Hu, Boyuan Chen, Jiong Lin, Yunzhe Wang, Yingke Wang, Cameron Mehlman, Hod Lipson
Human-robot facial coexpression
Large language models are enabling rapid progress in robotic verbal communication, but nonverbal communication is not keeping pace. Physical humanoid robots struggle to express and communicate using facial movement, relying primarily on voice. The challenge is twofold: First, the actuation of an expressively versatile robotic face is mechanically challenging. A second challenge is knowing what expression to generate so that the robot appears natural, timely, and genuine. Here, we propose that both barriers can be alleviated by training a robot to anticipate future facial expressions and execute them simultaneously with a human. Whereas delayed facial mimicry looks disingenuous, facial coexpression feels more genuine because it requires correct inference of the human's emotional state for timely execution. We found that a robot can learn to predict a forthcoming smile about 839 milliseconds before the human smiles and, using a learned inverse kinematic facial self-model, coexpress the smile simultaneously with the human. We demonstrated this ability using a robot face comprising 26 degrees of freedom. We believe that the ability to coexpress simultaneous facial expressions could improve human-robot interaction.
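The timing logic behind coexpression, as described in the abstract, is a lead-time compensation: if the robot can predict a smile some hundreds of milliseconds before it happens, it can start its slower mechanical actuation early enough that both faces peak together. A minimal sketch of that scheduling arithmetic, with the actuation latency being a hypothetical number (only the ~839 ms prediction horizon comes from the paper):

```python
def coexpression_delay(predicted_onset_s, actuation_latency_s):
    """Return how long to wait before commanding the expression so that
    the robot's expression lands at the predicted human onset time.
    If the face is too slow to catch the onset, command it immediately."""
    delay = predicted_onset_s - actuation_latency_s
    return max(0.0, delay)

# With a 0.839 s prediction horizon and a hypothetical 0.5 s face
# latency, the robot waits 0.339 s and then actuates.
wait = coexpression_delay(0.839, 0.5)
```

The `max(0.0, ...)` clamp is the degenerate case: when the predicted onset is closer than the actuation latency, the best the robot can do is start at once, which is exactly the delayed-mimicry regime the paper argues looks disingenuous.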