Science Robotics, Vol. 10, Issue 106 | Pub date: 2025-09-03 | DOI: 10.1126/scirobotics.adu5830
Title: Optogenetic neuromuscular actuation of a miniature electronic biohybrid robot
Authors: Hyegi Min, Yue Wang, Jiaojiao Wang, Xiuyuan Li, Woong Kim, Onur Aydin, Sehong Kang, Jae-Sung You, Jongwon Lim, Katy Wolhaupter, Yikang Xu, Zhengguang Zhu, Jianyu Gu, Xinming Li, Yongdeok Kim, Tarun Rao, Hyun Joon Kong, Taher A. Saif, Yonggang Huang, John A. Rogers, Rashid Bashir
Abstract: Neuronal control of skeletal muscle function is ubiquitous across species for locomotion and doing work. In particular, emergent behaviors of neurons in biohybrid neuromuscular systems can advance bioinspired locomotion research. Although recent studies have demonstrated that chemical or optogenetic stimulation of neurons can control muscular actuation through the neuromuscular junction (NMJ), the correlation between neuronal activities and the resulting modulation in muscle responses is less understood, hindering the engineering of high-level functional biohybrid systems. Here, we developed NMJ-based biohybrid crawling robots with optogenetic mouse motor neurons, skeletal muscles, 3D-printed hydrogel scaffolds, and integrated onboard wireless micro–light-emitting diode (μLED)–based optoelectronics. We investigated the coupling of the light stimulation and neuromuscular actuation through power spectral density (PSD) analysis. We verified the modulation of the mechanical functionality of the robot depending on the frequency of the optical stimulation to the neural tissue. We demonstrated continued muscle contraction up to 20 minutes after a 1-minute-long pulsed 2-hertz optical stimulation of the neural tissue. Furthermore, the robots were shown to maintain their mechanical functionality for more than 2 weeks. This study provides insights into reliable neuronal control with optoelectronics, supporting advancements in neuronal modulation, biohybrid intelligence, and automation.
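The abstract above verifies stimulation–actuation coupling via power spectral density analysis. As a rough illustration of that idea (not the authors' pipeline), the sketch below finds the dominant frequency in a 1-D muscle-displacement trace with a plain periodogram; the sampling rate and synthetic signal are assumptions for the example.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) carrying the most power in the signal.

    A minimal periodogram-style PSD estimate: if the muscle follows a 2 Hz
    optical stimulus, the spectrum of its displacement trace should peak
    near 2 Hz.
    """
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                      # remove DC offset
    power = np.abs(np.fft.rfft(sig)) ** 2       # one-sided power spectrum
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    return freqs[np.argmax(power)]

# Synthetic example: a 2 Hz contraction signal sampled at 100 Hz with noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 2.0 * t) + 0.2 * rng.standard_normal(t.size)
peak_hz = dominant_frequency(trace, fs)  # close to 2.0
```

A 10-second window gives 0.1 Hz frequency resolution, enough to resolve the 2 Hz stimulation frequency cleanly.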
Science Robotics, Vol. 10, Issue 106 | Pub date: 2025-09-03 | DOI: 10.1126/scirobotics.ads1204
Title: RoboBallet: Planning for multirobot reaching with graph neural networks and reinforcement learning
Authors: Matthew Lai, Keegan Go, Zhibin Li, Torsten Kröger, Stefan Schaal, Kelsey Allen, Jonathan Scholz
Abstract: Modern robotic manufacturing requires collision-free coordination of multiple robots to complete numerous tasks in shared, obstacle-rich workspaces. Although individual tasks may be simple in isolation, automated joint task allocation, scheduling, and motion planning under spatiotemporal constraints remain computationally intractable for classical methods at real-world scales. Existing multiarm systems deployed in industry rely on human intuition and experience to design feasible trajectories manually in a labor-intensive process. To address this challenge, we propose a reinforcement learning (RL) framework to achieve automated task and motion planning, tested in an obstacle-rich environment with eight robots performing 40 reaching tasks in a shared workspace, where any robot can perform any task in any order. Our approach uses a graph representation of scenes and a graph neural network (GNN) policy trained via RL on procedurally generated environments with diverse obstacle layouts, robot configurations, and task distributions, generating trajectories for multiple robots and jointly solving the subproblems of task allocation, scheduling, and motion planning. Trained on large randomly generated task sets in simulation, our policy generalizes zero-shot to unseen settings with varying robot placements, obstacle geometries, and task poses. We further demonstrate that the high-speed capability of our solution enables its use in workcell layout optimization, improving solution times. The speed and scalability of our planner also open the door to capabilities such as fault-tolerant planning and online perception-based replanning, where rapid adaptation to dynamic task sets is required.
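The abstract describes encoding a multi-robot scene as a graph and running a GNN policy over it. The toy sketch below shows only the core structural trick: permutation-invariant message passing from task nodes to robot nodes, so the same weights handle any number of tasks. Feature sizes, weight shapes, and the single mean-aggregation round are all illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scene graph matching the paper's setting: 8 robots, 40 reaching tasks.
robot_feats = rng.standard_normal((8, 4))   # e.g. base pose features (assumed 4-D)
task_feats = rng.standard_normal((40, 4))   # e.g. target pose features (assumed 4-D)

W_msg = rng.standard_normal((4, 16))        # hypothetical message weights
W_upd = rng.standard_normal((20, 16))       # hypothetical update weights (4 + 16 in)

# One round of message passing: every task node sends a message; robots
# aggregate by mean, which is invariant to task ordering and count.
messages = np.tanh(task_feats @ W_msg)               # (40, 16)
aggregated = messages.mean(axis=0)                   # (16,) pooled task context
inp = np.concatenate([robot_feats,
                      np.tile(aggregated, (8, 1))], axis=1)  # (8, 20)
robot_embed = np.tanh(inp @ W_upd)                   # (8, 16) per-robot embeddings
```

Each robot embedding would then feed a policy head producing that robot's joint-space action; because aggregation is a mean, the same network generalizes across task-set sizes.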
Science Robotics, Vol. 10, Issue 105 | Pub date: 2025-08-27 | DOI: 10.1126/scirobotics.aea7390
Title: Good old-fashioned engineering can close the 100,000-year “data gap” in robotics
Author: Ken Goldberg
Open access PDF: https://www.science.org/doi/reader/10.1126/scirobotics.aea7390
Science Robotics, Vol. 10, Issue 105 | Pub date: 2025-08-27 | DOI: 10.1126/scirobotics.aea7897
Title: “Data will solve robotics and automation: True or false?”: A debate
Authors: Nancy M. Amato, Seth Hutchinson, Animesh Garg, Aude Billard, Daniela Rus, Russ Tedrake, Frank Park, Ken Goldberg
Abstract: Leading researchers debate the long-term influence of model-free methods that use large sets of demonstration data to train numerical generative models to control robots.
Open access PDF: https://www.science.org/doi/reader/10.1126/scirobotics.aea7897
Science Robotics, Vol. 10, Issue 105 | Pub date: 2025-08-27 | DOI: 10.1126/scirobotics.adu2381
Title: Explosion-powered eversible tactile displays
Authors: Ronald H. Heisser, Khoi D. Ly, Ofek Peretz, Young S. Kim, Carlos A. Diaz-Ruiz, Rachel M. Miller, Cameron A. Aubin, Sadaf Sobhani, Nikolaos Bouklas, Robert F. Shepherd
Abstract: High-resolution electronic tactile displays stand to transform haptics for remote machine operation, virtual reality, and digital information access for people who are blind or visually impaired. Yet increasing the resolution of these displays requires increasing the number of individually addressable actuators while simultaneously reducing their total surface area, power consumption, and weight — challenges most evidently reflected in the dearth of affordable multiline braille displays. Blending principles from soft robotics, microfluidics, and nonlinear mechanics, we introduce a 10-dot–by–10-dot array of 2-millimeter-diameter, combustion-powered, eversible soft actuators that individually rise in 0.24 milliseconds to repeatably produce display patterns. Our rubber architecture is hermetically sealed and demonstrates resistance to liquid and dirt ingress. We demonstrate complete actuation cycles in an untethered tactile display prototype. Our platform technology extends the capabilities of tactile displays to environments that are inaccessible to traditional actuation modalities.
Science Robotics, Vol. 10, Issue 105 | Pub date: 2025-08-27 | DOI: 10.1126/scirobotics.adv3604
Title: Attention-based map encoding for learning generalized legged locomotion
Authors: Junzhe He, Chong Zhang, Fabian Jenelten, Ruben Grandia, Moritz Bächer, Marco Hutter
Abstract: Dynamic locomotion of legged robots is a critical yet challenging topic in expanding the operational range of mobile robots. It requires precise planning when possible footholds are sparse, robustness against uncertainties and disturbances, and generalizability across diverse terrains. Although traditional model-based controllers excel at planning on complex terrains, they struggle with real-world uncertainties. Learning-based controllers offer robustness to such uncertainties but often lack precision on terrains with sparse steppable areas. Hybrid methods achieve enhanced robustness on sparse terrains by combining both approaches but are computationally demanding and constrained by the inherent limitations of model-based planners. To achieve generalized legged locomotion on diverse terrains while preserving the robustness of learning-based controllers, this paper proposes an attention-based map encoding conditioned on robot proprioception, trained as part of the controller using reinforcement learning. We show that the network learns to focus on steppable areas for future footholds when the robot dynamically navigates diverse and challenging terrains. We synthesized behaviors that exhibited robustness against uncertainties while enabling precise and agile traversal of sparse terrains. In addition, our method offers a way to interpret the topographical perception of a neural network. We trained two controllers, for a 12-degrees-of-freedom quadrupedal robot and a 23-degrees-of-freedom humanoid robot, and tested the resulting controllers in the real world under various challenging indoor and outdoor scenarios, including ones unseen during training.
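The key mechanism above is attention over a terrain map, with the query conditioned on proprioception so that foothold focus shifts with the robot's state, and with attention weights that double as an interpretability signal. The sketch below is a single-head scaled dot-product attention in that spirit; the feature sizes, patch count, and weight matrices are hypothetical, not the paper's network.

```python
import numpy as np

def attend_to_map(proprio, map_patches, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over terrain-map patches.

    The query comes from the proprioceptive state, so the encoding can
    weight steppable regions differently as the gait state changes. The
    returned weights are what makes the perception inspectable.
    """
    q = proprio @ Wq                        # (d,) query from robot state
    k = map_patches @ Wk                    # (n_patches, d) keys
    v = map_patches @ Wv                    # (n_patches, d) values
    scores = k @ q / np.sqrt(q.size)        # scaled similarity scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax attention weights
    return weights @ v, weights             # encoded map + per-patch weights

rng = np.random.default_rng(2)
proprio = rng.standard_normal(12)           # e.g. joint states (assumed 12-D)
patches = rng.standard_normal((64, 8))      # 64 height-map patches, 8-D each
Wq = rng.standard_normal((12, 16))
Wk = rng.standard_normal((8, 16))
Wv = rng.standard_normal((8, 16))
encoded, attn = attend_to_map(proprio, patches, Wq, Wk, Wv)
```

Plotting `attn` over the map would show which patches the controller attends to — the kind of topographical interpretation the abstract mentions.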
Science Robotics, Vol. 10, Issue 105 | Pub date: 2025-08-20 | DOI: 10.1126/scirobotics.ads8528
Title: Embodied intelligence paradigm for human-robot communication
Authors: Nana Obayashi, Arsen Abdulali, Fumiya Iida, Josie Hughes
Abstract: Animals leverage their full embodiment to achieve multimodal, redundant, and subtle communication. For robots to do the same, they must similarly exploit their brain-body-environment interactions, that is, their embodied intelligence. To advance this approach, we propose a framework building on Shannon’s information channel theory of communication to provide key principles and benchmarks for advancing human-robot communication.
Science Robotics, Vol. 10, Issue 105 | Pub date: 2025-08-20 | DOI: 10.1126/scirobotics.adu6007
Title: Si chiplet–controlled 3D modular microrobots with smart communication in natural aqueous environments
Authors: Yeji Lee, Vineeth K. Bandari, John S. McCaskill, Pranathi Adluri, Daniil Karnaushenko, Dmitriy D. Karnaushenko, Oliver G. Schmidt
Abstract: Modular microrobotics can potentially address many information-intensive microtasks in medicine, manufacturing, and the environment. However, surface area has limited the natural powering, communication, functional integration, and self-assembly of smart mass-fabricated modular robotic devices at small scales. We demonstrate the integrated self-folding and self-rolling of functionalized patterned interior and exterior membrane surfaces, resulting in programmable, self-assembling, intercommunicating, and self-locomoting micromodules (smartlets ≤ 1 cubic millimeter) with interior chambers for onboard buoyancy control. The microrobotic divers, with 360° solar harvesting rolls, functioned with sufficient ambient power for communication and programmed locomotion in water via electrolysis. The interior folding faces carried rigid microcomponents, including silicon chiplets (Si chiplets) as microprocessors and micro–light-emitting diodes (LEDs) for communication. The exterior faces were able to engage in specific patterned docking interactions between smartlets. The heterogeneous integration is mass producible and affordable through two-dimensional (2D) automated lithography and microchiplet bump-bonding processes, shown here to be compatible with subsequent autonomous 3D folding and rolling. The robotic modules functioned in natural aqueous environments, and the technology was analyzed as scalable down to microscopic dimensions. Selectively addressed communication with individual smartlets was achieved via frequency-specific optical signals and enabled precise control, allowing each smartlet to be activated independently within a collective system. The work moves modular microrobotics closer to the surface-rich modular autonomy of biological cells and provides an economical platform for microscopic applications.
Science Robotics, Vol. 10, Issue 105 | Pub date: 2025-08-20 | DOI: 10.1126/scirobotics.ads6790
Title: Learning contact-rich whole-body manipulation with example-guided reinforcement learning
Authors: Jose A. Barreiros, Aykut Özgün Önol, Mengchao Zhang, Sam Creasey, Aimee Goncalves, Andrew Beaulieu, Aditya Bhat, Kate M. Tsui, Alex Alspach
Abstract: Humans use diverse skills and strategies to effectively manipulate various objects, ranging from dexterous in-hand manipulation (fine motor skills) to complex whole-body manipulation (gross motor skills). The latter involves full-body engagement and extensive contact with various body parts beyond just the hands, where the compliance of our skin and muscles plays a crucial role in increasing contact stability and mitigating uncertainty. For robots, synthesizing these contact-rich behaviors poses fundamental challenges because of the rapidly growing combinatorics inherent to this amount of contact, making explicit reasoning about all contact interactions intractable. We explore the use of example-guided reinforcement learning to generate robust whole-body skills for the manipulation of large and unwieldy objects. Our method’s effectiveness is demonstrated on Toyota Research Institute’s Punyo robot, a humanoid upper body with highly deformable, pressure-sensing skin. Training was conducted in simulation with only a single example motion per object manipulation task, and policies were easily transferred to hardware owing to domain randomization and the robot’s compliance. The resulting agent can manipulate various everyday objects, such as a water jug and large boxes, in a similar fashion to the example motion. In addition, we show blind dexterous whole-body manipulation, relying solely on proprioceptive and tactile feedback without object pose tracking. Our analysis highlights the critical role of compliance in facilitating whole-body manipulation with humanoid robots.
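The abstract credits domain randomization for the easy sim-to-real transfer. The pattern itself is simple: jitter the simulator's physical parameters each training episode so the policy never overfits one set of dynamics. The sketch below shows that pattern; the parameter names and spread values are illustrative assumptions, not the paper's actual randomization ranges.

```python
import random

def sample_randomized_params(base, spreads, rng):
    """Domain randomization: multiplicatively jitter each simulator parameter.

    Called once per training episode, so the policy must succeed across the
    whole range of dynamics rather than one nominal model.
    """
    return {name: base[name] * rng.uniform(1 - spreads[name], 1 + spreads[name])
            for name in base}

# Hypothetical parameters for a deformable-skin manipulation simulator.
base = {"object_mass_kg": 4.0, "skin_stiffness": 1.0, "friction": 0.8}
spreads = {"object_mass_kg": 0.3, "skin_stiffness": 0.5, "friction": 0.25}

rng = random.Random(7)
params = sample_randomized_params(base, spreads, rng)  # one episode's dynamics
```

Each sampled dictionary would configure one simulated episode; the real-robot dynamics then fall inside the distribution the policy was trained on.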
Science Robotics, Vol. 10, Issue 105 | Pub date: 2025-08-20 | DOI: 10.1126/scirobotics.ads5033
Title: Precise and dexterous robotic manipulation via human-in-the-loop reinforcement learning
Authors: Jianlan Luo, Charles Xu, Jeffrey Wu, Sergey Levine
Abstract: Robotic manipulation remains one of the most difficult challenges in robotics, with approaches ranging from classical model-based control to modern imitation learning. Although these methods have enabled substantial progress, they often require extensive manual design, struggle with performance, and demand large-scale data collection. These limitations hinder their real-world deployment at scale, where reliability, speed, and robustness are essential. Reinforcement learning (RL) offers a powerful alternative by enabling robots to autonomously acquire complex manipulation skills through interaction. However, realizing the full potential of RL in the real world remains challenging because of issues of sample efficiency and safety. We present a human-in-the-loop, vision-based RL system that achieved strong performance on a wide range of dexterous manipulation tasks, including precise assembly, dynamic manipulation, and dual-arm coordination. These tasks reflect realistic industrial tolerances, with small but critical variations in initial object placements that demand sophisticated reactive control. Our method integrates demonstrations, human corrections, sample-efficient RL algorithms, and system-level design to learn RL policies directly in the real world. Within 1 to 2.5 hours of real-world training, our approach outperformed other baselines by improving task success by 2×, achieving near-perfect success rates, and executing 1.8× faster on average. Through extensive experiments and analysis, our results suggest that RL can learn a wide range of complex vision-based manipulation policies directly in the real world within practical training times. We hope that this work will inspire a new generation of learned robotic manipulation techniques, benefiting both industrial applications and research advancements.
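The human-in-the-loop ingredient above can be reduced to one structural idea: during data collection, a human may override the policy's action, and overridden transitions are flagged so the learner can treat them specially (e.g., as corrections or demonstrations). The sketch below shows that loop on a toy 1-D task; the `policy`, `human`, and `env_step` callables are hypothetical stand-ins, and the paper's actual sample-efficient RL algorithm is not reproduced here.

```python
import random

def rollout_with_interventions(policy, human, env_step, state, horizon):
    """Collect one episode in which a human may override the policy's action.

    Each stored transition records whether it came from an intervention,
    a common human-in-the-loop RL pattern for steering early training.
    """
    buffer = []
    for _ in range(horizon):
        action = policy(state)
        correction = human(state, action)   # None means no intervention
        executed = correction if correction is not None else action
        next_state, reward = env_step(state, executed)
        buffer.append((state, executed, reward, next_state,
                       correction is not None))
        state = next_state
    return buffer

# Toy 1-D task: the state is a position, the goal is 0, and the "human"
# intervenes whenever the random policy moves away from the goal.
policy = lambda s: random.choice([-1, 1])
human = lambda s, a: -a if s != 0 and (s > 0) == (a > 0) else None
env_step = lambda s, a: (s + a, -abs(s + a))
data = rollout_with_interventions(policy, human, env_step, state=3, horizon=5)
```

With corrections applied, every executed action from a nonzero state moves toward the goal, regardless of what the random policy proposed, which is exactly the kind of early-training safety and efficiency the abstract attributes to human oversight.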