Latest Publications in Science Robotics

An agile monopedal hopping quadcopter with synergistic hybrid locomotion
IF 25 · Tier 1 · Computer Science
Science Robotics · Pub Date: 2024-04-10 · DOI: 10.1126/scirobotics.adi8912
Songnan Bai, Qiqi Pan, Runze Ding, Huaiyuan Jia, Zhengbao Yang, Pakpong Chirarattananon
Abstract: Nature abounds with examples of superior mobility through the fusion of aerial and ground movement. Drawing inspiration from such multimodal locomotion, we introduce a high-performance hybrid hopping and flying robot. The proposed robot seamlessly integrates a nano quadcopter with a passive telescopic leg, overcoming limitations of previous jumping mechanisms that rely on stance-phase leg actuation. Based on the identified dynamics, a thrust-based control method and detachable active aerodynamic surfaces were devised for the robot to perform continuous jumps with and without position feedback. This unique design and actuation strategy enable tuning of jump height and reduced stance-phase duration, leading to agile hopping locomotion. The robot recorded an average vertical hopping speed of 2.38 meters per second at a jump height of 1.63 meters. By harnessing multimodal locomotion, the robot is capable of intermittent midflight jumps that result in substantial instantaneous accelerations and rapid changes in flight direction, offering enhanced agility and versatility in complex environments. The passive leg design holds potential for direct integration with conventional rotorcraft, unlocking seamless hybrid hopping and flying locomotion.
Citations: 0
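As a rough sanity check on the reported numbers, simple vacuum ballistics relate jump height to takeoff speed and flight time. This is a back-of-the-envelope sketch, not the paper's dynamic model: drag, thrust during flight, and stance dynamics are all ignored.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def takeoff_speed(height_m: float) -> float:
    """Vertical takeoff speed needed to reach a given apex height (vacuum ballistics)."""
    return math.sqrt(2 * G * height_m)

def flight_time(height_m: float) -> float:
    """Time from takeoff until returning to takeoff height."""
    return 2 * math.sqrt(2 * height_m / G)

h = 1.63                      # reported jump height, m
v0 = takeoff_speed(h)         # ~5.66 m/s at takeoff
t = flight_time(h)            # ~1.15 s airborne per hop
# The reported 2.38 m/s "average vertical hopping speed" is well below v0,
# presumably because it averages height gained over the whole hop cycle
# (stance plus flight); over the ascent alone the average would be:
avg_speed_ascent = h / (t / 2)  # ~2.83 m/s
```

The gap between the 2.83 m/s ascent-only average and the reported 2.38 m/s is consistent with a short but nonzero stance phase, which is exactly what the passive telescopic leg is designed to minimize.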
Learning agile soccer skills for a bipedal robot with deep reinforcement learning
IF 25 · Tier 1 · Computer Science
Science Robotics · Pub Date: 2024-04-10 · DOI: 10.1126/scirobotics.adi8022
Tuomas Haarnoja, Ben Moran, Guy Lever, Sandy H. Huang, Dhruva Tirumala, Jan Humplik, Markus Wulfmeier, Saran Tunyasuvunakool, Noah Y. Siegel, Roland Hafner, Michael Bloesch, Kristian Hartikainen, Arunkumar Byravan, Leonard Hasenclever, Yuval Tassa, Fereshteh Sadeghi, Nathan Batchelor, Federico Casarini, Stefano Saliceti, Charles Game, Neil Sreendra, Kushal Patel, Marlon Gwira, Andrea Huber, Nicole Hurley, Francesco Nori, Raia Hadsell, Nicolas Heess
Abstract: We investigated whether deep reinforcement learning (deep RL) is able to synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot that can be composed into complex behavioral strategies. We used deep RL to train a humanoid robot to play a simplified one-versus-one soccer game. The resulting agent exhibits robust and dynamic movement skills, such as rapid fall recovery, walking, turning, and kicking, and it transitions between them in a smooth and efficient manner. It also learned to anticipate ball movements and block opponent shots. The agent's tactical behavior adapts to specific game contexts in a way that would be impractical to manually design. Our agent was trained in simulation and transferred to real robots zero-shot. A combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training enabled good-quality transfer. In experiments, the agent walked 181% faster, turned 302% faster, took 63% less time to get up, and kicked a ball 34% faster than a scripted baseline.
Citations: 0
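The abstract credits zero-shot sim-to-real transfer partly to "targeted dynamics randomization." The general pattern is to resample physics parameters every training episode so the policy cannot overfit one simulator configuration. The parameter names and ranges below are illustrative assumptions, not the paper's actual randomization targets:

```python
import random

# Hypothetical randomization ranges -- the paper's actual targets and
# bounds are not given here; these names and numbers are illustrative.
RANDOMIZATION = {
    "torso_mass_scale":   (0.9, 1.1),    # +/-10% body mass
    "joint_friction":     (0.01, 0.10),
    "motor_torque_scale": (0.85, 1.15),
    "floor_friction":     (0.5, 1.0),
}

def sample_dynamics(rng: random.Random) -> dict:
    """Draw one randomized physics configuration for a training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION.items()}

rng = random.Random(0)
params = sample_dynamics(rng)  # a fresh draw each episode widens the training distribution
```

A policy trained across many such draws must succeed under any plausible physical variation, which is what lets it tolerate the (unknown) dynamics of the real robot.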
Restoration of motor function using magnetoelectric metamaterials
IF 25 · Tier 1 · Computer Science
Science Robotics · Pub Date: 2024-03-27 · DOI: 10.1126/scirobotics.adp3707
Amos Matsiko
Abstract: Implantable magnetic materials can be used for wireless neural stimulation and restoration of motor function.
Citations: 0
Teaching robots the art of human social synchrony
IF 25 · Tier 1 · Computer Science
Science Robotics · Pub Date: 2024-03-27 · DOI: 10.1126/scirobotics.ado5755
Rachael E. Jack
Abstract: Humanoid robots can now learn the art of social synchrony using neural networks.
Citations: 0
A fictional history of robotics features forgotten real-world robots
IF 25 · Tier 1 · Computer Science
Science Robotics · Pub Date: 2024-03-27 · DOI: 10.1126/scirobotics.ado7982
Robin R. Murphy
Abstract: The science-fiction movie The Creator uses six real-world robots from the 1950s and 1960s to show progress in AI.
Citations: 0
Human-robot facial coexpression
IF 25 · Tier 1 · Computer Science
Science Robotics · Pub Date: 2024-03-27 · DOI: 10.1126/scirobotics.adi4724
Yuhang Hu, Boyuan Chen, Jiong Lin, Yunzhe Wang, Yingke Wang, Cameron Mehlman, Hod Lipson
Abstract: Large language models are enabling rapid progress in robotic verbal communication, but nonverbal communication is not keeping pace. Physical humanoid robots struggle to express and communicate using facial movement, relying primarily on voice. The challenge is twofold: First, the actuation of an expressively versatile robotic face is mechanically challenging. A second challenge is knowing what expression to generate so that the robot appears natural, timely, and genuine. Here, we propose that both barriers can be alleviated by training a robot to anticipate future facial expressions and execute them simultaneously with a human. Whereas delayed facial mimicry looks disingenuous, facial coexpression feels more genuine because it requires correct inference of the human's emotional state for timely execution. We found that a robot can learn to predict a forthcoming smile about 839 milliseconds before the human smiles and, using a learned inverse kinematic facial self-model, coexpress the smile simultaneously with the human. We demonstrated this ability using a robot face comprising 26 degrees of freedom. We believe that the ability to coexpress simultaneous facial expressions could improve human-robot interaction.
Citations: 0
Estimating human joint moments unifies exoskeleton control, reducing user effort
IF 25 · Tier 1 · Computer Science
Science Robotics · Pub Date: 2024-03-20 · DOI: 10.1126/scirobotics.adi8852
Dean D. Molinaro, Inseung Kang, Aaron J. Young
Abstract: Robotic lower-limb exoskeletons can augment human mobility, but current systems require extensive, context-specific considerations, limiting their real-world viability. Here, we present a unified exoskeleton control framework that autonomously adapts assistance on the basis of instantaneous user joint moment estimates from a temporal convolutional network (TCN). When deployed on our hip exoskeleton, the TCN achieved an average root mean square error of 0.142 newton-meters per kilogram across 35 ambulatory conditions without any user-specific calibration. Further, the unified controller significantly reduced user metabolic cost and lower-limb positive work during level-ground and incline walking compared with walking without wearing the exoskeleton. This advancement bridges the gap between in-lab exoskeleton technology and real-world human ambulation, making exoskeleton control technology viable for a broad community.
Citations: 0
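The headline metric here is a root mean square error of 0.142 N·m/kg on body-mass-normalized joint moments. For readers unfamiliar with the metric, this is how such an RMSE is computed; the sample values below are made up for illustration and are not data from the paper:

```python
import math

def rmse(estimates, targets):
    """Root mean square error between estimated and ground-truth joint moments."""
    assert len(estimates) == len(targets) and estimates
    se = sum((e, t) and (e - t) ** 2 for e, t in zip(estimates, targets))
    return math.sqrt(se / len(estimates))

# Illustrative (made-up) hip-moment samples, in newton-meters per kilogram:
est = [0.50, -0.20, 0.90, 0.10]
ref = [0.55, -0.30, 0.80, 0.05]
err = rmse(est, ref)  # ~0.079 N*m/kg for these toy values
```

Normalizing moments by body mass (N·m/kg) is what lets a single reported error figure be compared across users of different sizes, which matters for the paper's no-per-user-calibration claim.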
Elastic energy-recycling actuators for efficient robots
IF 25 · Tier 1 · Computer Science
Science Robotics · Pub Date: 2024-03-20 · DOI: 10.1126/scirobotics.adj7246
Erez Krimsky, Steven H. Collins
Abstract: Electric motors are widely used in robots but waste energy in many applications. We introduce an elastic energy-recycling actuator that maintains the versatility of motors while improving energy efficiency in cyclic tasks. The actuator comprises a motor in parallel with an array of springs that can be individually engaged and disengaged, while retaining stored energy, by pairs of low-power electroadhesive clutches. We developed a prototype actuator and tested it in five repetitive tasks with features common in robotic applications but difficult to perform efficiently. The actuator reduced power consumption by at least 50% in all cases and by 97% in the best case. Elastic energy recovery, controlled by low-power clutches, can improve the efficiency of mobile robots, assistive devices, and other engineered systems.
Citations: 0
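The actuator pairs a motor with springs that clutches can engage individually. One way such an array can approximate many different parallel stiffnesses is binary weighting, where engaged stiffnesses sum. The greedy engagement logic and the spring values below are a hypothetical sketch, not the paper's controller:

```python
def engage_pattern(target_stiffness, spring_stiffnesses):
    """Greedily pick which parallel springs to clutch in so that their
    summed stiffness approximates the target (largest-first selection)."""
    engaged = []
    remaining = target_stiffness
    for k in sorted(spring_stiffnesses, reverse=True):
        if k <= remaining + 1e-9:  # engage this spring if it still fits
            engaged.append(k)
            remaining -= k
    return engaged

def stored_energy(stiffness, deflection):
    """Elastic energy held by an engaged spring set: E = 1/2 * k * x^2."""
    return 0.5 * stiffness * deflection ** 2

# A binary-weighted array reaches any integer stiffness from 0 to 15 N/m:
springs = [8.0, 4.0, 2.0, 1.0]
pattern = engage_pattern(11.0, springs)     # engages the 8, 2, and 1 N/m springs
energy = stored_energy(sum(pattern), 0.05)  # joules held at 5 cm deflection
```

Because electroadhesive clutches hold force with very little electrical power, energy parked in a disengaged spring costs almost nothing to retain until the next cycle, which is the core of the reported 50 to 97% power savings.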