Science Robotics: Latest Articles

Locomotion as manipulation with ReachBot
IF 25.0 | CAS Q1 | Computer Science
Science Robotics Pub Date : 2024-04-17 DOI: 10.1126/scirobotics.adi9762
Tony G. Chen, Stephanie Newdick, Julia Di, Carlo Bosio, Nitin Ongole, Mathieu Lapôtre, Marco Pavone, Mark R. Cutkosky
Caves and lava tubes on the Moon and Mars are sites of geological and astrobiological interest but consist of terrain that is inaccessible with traditional robot locomotion. To support the exploration of these sites, we present ReachBot, a robot that uses extendable booms as appendages to manipulate itself with respect to irregular rock surfaces. The booms terminate in grippers equipped with microspines and provide ReachBot with a large workspace, allowing it to achieve force closure in enclosed spaces, such as the walls of a lava tube. To propel ReachBot, we present a contact-before-motion planner for nongaited legged locomotion that uses internal force control, similar to a multifingered hand, to keep its long, slender booms in tension. Motion planning also depends on finding and executing secure grips on rock features. We used a Monte Carlo simulation to inform gripper design and predict grasp strength and variability. In addition, we used a two-step perception system to identify possible grasp locations. To validate our approach and mechanisms under realistic conditions, we deployed a single ReachBot arm and gripper in a lava tube in the Mojave Desert. The field test confirmed that ReachBot will find many targets for secure grasps with the proposed kinematic design.
Citations: 0
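The abstract mentions a Monte Carlo simulation used to predict microspine grasp strength and variability. A minimal sketch of that idea is below; the spine count, engagement probability, and per-spine force distribution are illustrative assumptions, not the paper's actual gripper model.

```python
import random
import statistics

def grasp_strength(n_spines=24, engage_prob=0.6, mean_force=2.0,
                   sd_force=0.5, rng=random):
    """One Monte Carlo trial: total holding force (N) from the subset of
    microspines that successfully engage rock asperities."""
    total = 0.0
    for _ in range(n_spines):
        if rng.random() < engage_prob:
            # Per-spine holding force drawn from a Gaussian, floored at zero.
            total += max(0.0, rng.gauss(mean_force, sd_force))
    return total

def simulate(trials=10_000, seed=42, **kw):
    """Repeat many trials to estimate mean grip strength and its spread."""
    rng = random.Random(seed)
    samples = [grasp_strength(rng=rng, **kw) for _ in range(trials)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, sd = simulate()
print(f"predicted grip strength: {mean:.1f} +/- {sd:.1f} N")
```

Sweeping the hypothetical parameters (spine count, engagement probability) over such a simulation is one way design choices could be compared against the variability of natural rock surfaces.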
Real-world humanoid locomotion with reinforcement learning
IF 25.0 | CAS Q1 | Computer Science
Science Robotics Pub Date : 2024-04-17 DOI: 10.1126/scirobotics.adi9579
Ilija Radosavovic, Tete Xiao, Bike Zhang, Trevor Darrell, Jitendra Malik, Koushil Sreenath
Humanoid robots that can autonomously operate in diverse environments have the potential to help address labor shortages in factories, assist elderly at home, and colonize new planets. Although classical controllers for humanoid robots have shown impressive results in a number of settings, they are challenging to generalize and adapt to new environments. Here, we present a fully learning-based approach for real-world humanoid locomotion. Our controller is a causal transformer that takes the history of proprioceptive observations and actions as input and predicts the next action. We hypothesized that the observation-action history contains useful information about the world that a powerful transformer model can use to adapt its behavior in context, without updating its weights. We trained our model with large-scale model-free reinforcement learning on an ensemble of randomized environments in simulation and deployed it to the real world zero-shot. Our controller could walk over various outdoor terrains, was robust to external disturbances, and could adapt in context.
Citations: 0
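The controller described above is a causal transformer over the observation-action history: each timestep may attend only to earlier timesteps, and the final token predicts the next action. A hypothetical minimal sketch follows; the single attention head, layer count, dimensions, and tanh output head are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def causal_attention(x, Wq, Wk, Wv):
    """Single-head self-attention with a causal mask: token t may only
    attend to tokens <= t, so the policy conditions on the past only."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9  # block attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

class HistoryPolicy:
    """Hypothetical sketch: maps a history of (observation, action) pairs
    to the next action via one causal-attention layer."""
    def __init__(self, obs_dim, act_dim, d_model=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (obs_dim + act_dim, d_model))
        self.Wq = rng.normal(0, 0.1, (d_model, d_model))
        self.Wk = rng.normal(0, 0.1, (d_model, d_model))
        self.Wv = rng.normal(0, 0.1, (d_model, d_model))
        self.W_out = rng.normal(0, 0.1, (d_model, act_dim))

    def act(self, obs_history, act_history):
        # One token per timestep: the observation and the previous action.
        tokens = np.concatenate([obs_history, act_history], axis=-1) @ self.W_in
        h = causal_attention(tokens, self.Wq, self.Wk, self.Wv)
        return np.tanh(h[-1] @ self.W_out)  # next action, from the last token
```

Because the mask hides future tokens, the same weights can be trained on full trajectories yet deployed step by step, which is what allows the in-context adaptation the abstract hypothesizes.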
Using machine learning and robotics to discover plastic substitutes
IF 25.0 | CAS Q1 | Computer Science
Science Robotics Pub Date : 2024-04-17 DOI: 10.1126/scirobotics.adp7392
Melisa Yashinski
Discovery of all-natural thin films with tunable properties was partially automated using robotics and machine learning.
Citations: 0
An agile monopedal hopping quadcopter with synergistic hybrid locomotion
IF 25.0 | CAS Q1 | Computer Science
Science Robotics Pub Date : 2024-04-10 DOI: 10.1126/scirobotics.adi8912
Songnan Bai, Qiqi Pan, Runze Ding, Huaiyuan Jia, Zhengbao Yang, Pakpong Chirarattananon
Nature abounds with examples of superior mobility through the fusion of aerial and ground movement. Drawing inspiration from such multimodal locomotion, we introduce a high-performance hybrid hopping and flying robot. The proposed robot seamlessly integrates a nano quadcopter with a passive telescopic leg, overcoming limitations of previous jumping mechanisms that rely on stance phase leg actuation. Based on the identified dynamics, a thrust-based control method and detachable active aerodynamic surfaces were devised for the robot to perform continuous jumps with and without position feedback. This unique design and actuation strategy enable tuning of jump height and reduced stance phase duration, leading to agile hopping locomotion. The robot recorded an average vertical hopping speed of 2.38 meters per second at a jump height of 1.63 meters. By harnessing multimodal locomotion, the robot is capable of intermittent midflight jumps that result in substantial instantaneous accelerations and rapid changes in flight direction, offering enhanced agility and versatility in complex environments. The passive leg design holds potential for direct integration with conventional rotorcraft, unlocking seamless hybrid hopping and flying locomotion.
Citations: 0
Learning agile soccer skills for a bipedal robot with deep reinforcement learning
IF 25.0 | CAS Q1 | Computer Science
Science Robotics Pub Date : 2024-04-10 DOI: 10.1126/scirobotics.adi8022
Tuomas Haarnoja, Ben Moran, Guy Lever, Sandy H. Huang, Dhruva Tirumala, Jan Humplik, Markus Wulfmeier, Saran Tunyasuvunakool, Noah Y. Siegel, Roland Hafner, Michael Bloesch, Kristian Hartikainen, Arunkumar Byravan, Leonard Hasenclever, Yuval Tassa, Fereshteh Sadeghi, Nathan Batchelor, Federico Casarini, Stefano Saliceti, Charles Game, Neil Sreendra, Kushal Patel, Marlon Gwira, Andrea Huber, Nicole Hurley, Francesco Nori, Raia Hadsell, Nicolas Heess
We investigated whether deep reinforcement learning (deep RL) is able to synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot that can be composed into complex behavioral strategies. We used deep RL to train a humanoid robot to play a simplified one-versus-one soccer game. The resulting agent exhibits robust and dynamic movement skills, such as rapid fall recovery, walking, turning, and kicking, and it transitions between them in a smooth and efficient manner. It also learned to anticipate ball movements and block opponent shots. The agent's tactical behavior adapts to specific game contexts in a way that would be impractical to manually design. Our agent was trained in simulation and transferred to real robots zero-shot. A combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training enabled good-quality transfer. In experiments, the agent walked 181% faster, turned 302% faster, took 63% less time to get up, and kicked a ball 34% faster than a scripted baseline.
Citations: 0
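Both this abstract and the humanoid-locomotion one above credit dynamics randomization and training-time perturbations for zero-shot sim-to-real transfer. A minimal sketch of the idea is below; the parameter names, ranges, and push magnitude are invented for illustration and are not the paper's actual randomization targets.

```python
import random

# Hypothetical randomization ranges; the paper does not specify these here.
RANDOMIZATION = {
    "ground_friction": (0.4, 1.2),
    "joint_damping_scale": (0.8, 1.2),
    "link_mass_scale": (0.9, 1.1),
    "control_latency_s": (0.0, 0.03),
}

def sample_env_params(rng):
    """Draw one randomized simulator configuration for an RL episode, so
    the policy never overfits to a single idealized dynamics model."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION.items()}

def apply_push(state_vel, rng, max_push=0.5):
    """Training-time perturbation: a random velocity impulse that forces
    the policy to learn recovery behaviors."""
    return [v + rng.uniform(-max_push, max_push) for v in state_vel]

rng = random.Random(7)
params = sample_env_params(rng)
print(params)
```

Resampling `params` every episode and injecting occasional pushes mid-episode is the standard recipe: the real robot then looks like just one more sample from the training distribution.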
Restoration of motor function using magnetoelectric metamaterials
IF 25.0 | CAS Q1 | Computer Science
Science Robotics Pub Date : 2024-03-27 DOI: 10.1126/scirobotics.adp3707
Amos Matsiko
Implantable magnetic materials can be used for wireless neural stimulation and restoration of motor function.
Citations: 0
Teaching robots the art of human social synchrony
IF 25.0 | CAS Q1 | Computer Science
Science Robotics Pub Date : 2024-03-27 DOI: 10.1126/scirobotics.ado5755
Rachael E. Jack
Humanoid robots can now learn the art of social synchrony using neural networks.
Citations: 0
A fictional history of robotics features forgotten real-world robots
IF 25.0 | CAS Q1 | Computer Science
Science Robotics Pub Date : 2024-03-27 DOI: 10.1126/scirobotics.ado7982
Robin R. Murphy
The science-fiction movie The Creator uses six real-world robots from the 1950s and 1960s to show progress in AI.
Citations: 0
Human-robot facial coexpression
IF 25.0 | CAS Q1 | Computer Science
Science Robotics Pub Date : 2024-03-27 DOI: 10.1126/scirobotics.adi4724
Yuhang Hu, Boyuan Chen, Jiong Lin, Yunzhe Wang, Yingke Wang, Cameron Mehlman, Hod Lipson
Large language models are enabling rapid progress in robotic verbal communication, but nonverbal communication is not keeping pace. Physical humanoid robots struggle to express and communicate using facial movement, relying primarily on voice. The challenge is twofold: First, the actuation of an expressively versatile robotic face is mechanically challenging. A second challenge is knowing what expression to generate so that the robot appears natural, timely, and genuine. Here, we propose that both barriers can be alleviated by training a robot to anticipate future facial expressions and execute them simultaneously with a human. Whereas delayed facial mimicry looks disingenuous, facial coexpression feels more genuine because it requires correct inference of the human's emotional state for timely execution. We found that a robot can learn to predict a forthcoming smile about 839 milliseconds before the human smiles and, using a learned inverse kinematic facial self-model, coexpress the smile simultaneously with the human. We demonstrated this ability using a robot face comprising 26 degrees of freedom. We believe that the ability to coexpress simultaneous facial expressions could improve human-robot interaction.
Citations: 0