Science Robotics | Pub Date: 2024-05-29 | DOI: 10.1126/scirobotics.adl0085
Patrick G. Sagastegui Alva, Anna Boesendorfer, Oskar C. Aszmann, Jaime Ibáñez, Dario Farina
{"title":"Excitation of natural spinal reflex loops in the sensory-motor control of hand prostheses","authors":"Patrick G. Sagastegui Alva, Anna Boesendorfer, Oskar C. Aszmann, Jaime Ibáñez, Dario Farina","doi":"10.1126/scirobotics.adl0085","DOIUrl":"10.1126/scirobotics.adl0085","url":null,"abstract":"<div >Sensory feedback for prosthesis control is typically based on encoding sensory information in specific types of sensory stimuli that the users interpret to adjust the control of the prosthesis. However, in physiological conditions, the afferent feedback received from peripheral nerves is not only processed consciously but also modulates spinal reflex loops that contribute to the neural information driving muscles. Spinal pathways are relevant for sensory-motor integration, but they are commonly not leveraged for prosthesis control. We propose an approach to improve sensory-motor integration for prosthesis control based on modulating the excitability of spinal circuits through the vibration of tendons in a closed loop with muscle activity. We measured muscle signals in healthy participants and amputees during different motor tasks, and we closed the loop by applying vibration on tendons connected to the muscles, which modulated the excitability of motor neurons. The control signals to the prosthesis were thus the combination of voluntary control and additional spinal reflex inputs induced by tendon vibration. Results showed that closed-loop tendon vibration was able to modulate the neural drive to the muscles. When closed-loop tendon vibration was used, participants could achieve similar or better control performance in interfaces using muscle activation than without stimulation. Stimulation could even improve prosthetic grasping in amputees. Overall, our results indicate that closed-loop tendon vibration can integrate spinal reflex pathways in the myocontrol system and open the possibility of incorporating natural feedback loops in prosthesis control.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"9 90","pages":""},"PeriodicalIF":25.0,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141176870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Microsaccade-inspired event camera for robotics","authors":"Botao He, Ze Wang, Yuan Zhou, Jingxi Chen, Chahat Deep Singh, Haojia Li, Yuman Gao, Shaojie Shen, Kaiwei Wang, Yanjun Cao, Chao Xu, Yiannis Aloimonos, Fei Gao, Cornelia Fermüller","doi":"10.1126/scirobotics.adj8124","DOIUrl":"10.1126/scirobotics.adj8124","url":null,"abstract":"<div >Neuromorphic vision sensors or event cameras have made the visual perception of extremely low reaction time possible, opening new avenues for high-dynamic robotics applications. These event cameras’ output is dependent on both motion and texture. However, the event camera fails to capture object edges that are parallel to the camera motion. This is a problem intrinsic to the sensor and therefore challenging to solve algorithmically. Human vision deals with perceptual fading using the active mechanism of small involuntary eye movements, the most prominent ones called microsaccades. By moving the eyes constantly and slightly during fixation, microsaccades can substantially maintain texture stability and persistence. Inspired by microsaccades, we designed an event-based perception system capable of simultaneously maintaining low reaction time and stable texture. In this design, a rotating wedge prism was mounted in front of the aperture of an event camera to redirect light and trigger events. The geometrical optics of the rotating wedge prism allows for algorithmic compensation of the additional rotational motion, resulting in a stable texture appearance and high informational output independent of external motion. The hardware device and software solution are integrated into a system, which we call artificial microsaccade–enhanced event camera (AMI-EV). Benchmark comparisons validated the superior data quality of AMI-EV recordings in scenarios where both standard cameras and event cameras fail to deliver. Various real-world experiments demonstrated the potential of the system to facilitate robotics perception both for low-level and high-level vision tasks.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"9 90","pages":""},"PeriodicalIF":25.0,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141176872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Closed-loop optogenetic neuromodulation enables high-fidelity fatigue-resistant muscle control","authors":"Guillermo Herrera-Arcos, Hyungeun Song, Seong Ho Yeon, Omkar Ghenand, Samantha Gutierrez-Arango, Sapna Sinha, Hugh Herr","doi":"10.1126/scirobotics.adi8995","DOIUrl":"10.1126/scirobotics.adi8995","url":null,"abstract":"<div >Closed-loop neuroprostheses show promise in restoring motion in individuals with neurological conditions. However, conventional activation strategies based on functional electrical stimulation (FES) fail to accurately modulate muscle force and exhibit rapid fatigue because of their unphysiological recruitment mechanism. Here, we present a closed-loop control framework that leverages physiological force modulation under functional optogenetic stimulation (FOS) to enable high-fidelity muscle control for extended periods of time (>60 minutes) in vivo. We first uncovered the force modulation characteristic of FOS, showing more physiological recruitment and significantly higher modulation ranges (>320%) compared with FES. Second, we developed a neuromuscular model that accurately describes the highly nonlinear dynamics of optogenetically stimulated muscle. Third, on the basis of the optogenetic model, we demonstrated real-time control of muscle force with improved performance and fatigue resistance compared with FES. This work lays the foundation for fatigue-resistant neuroprostheses and optogenetically controlled biohybrid robots with high-fidelity force modulation.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"9 90","pages":""},"PeriodicalIF":25.0,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.science.org/doi/reader/10.1126/scirobotics.adi8995","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141080376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Science Robotics | Pub Date: 2024-05-22 | DOI: 10.1126/scirobotics.adj8812
Letizia Gionfrida, Daekyum Kim, Davide Scaramuzza, Dario Farina, Robert D. Howe
{"title":"Wearable robots for the real world need vision","authors":"Letizia Gionfrida, Daekyum Kim, Davide Scaramuzza, Dario Farina, Robert D. Howe","doi":"10.1126/scirobotics.adj8812","DOIUrl":"10.1126/scirobotics.adj8812","url":null,"abstract":"<div >To enhance wearable robots, understanding user intent and environmental perception with novel vision approaches is needed.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"9 90","pages":""},"PeriodicalIF":25.0,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141080484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Science Robotics | Pub Date: 2024-05-15 | DOI: 10.1126/scirobotics.adi0591
F. Paredes-Vallés, J. J. Hagenaars, J. Dupeyroux, S. Stroobants, Y. Xu, G. C. H. E. de Croon
{"title":"Fully neuromorphic vision and control for autonomous drone flight","authors":"F. Paredes-Vallés, J. J. Hagenaars, J. Dupeyroux, S. Stroobants, Y. Xu, G. C. H. E. de Croon","doi":"10.1126/scirobotics.adi0591","DOIUrl":"10.1126/scirobotics.adi0591","url":null,"abstract":"<div >Biological sensing and processing is asynchronous and sparse, leading to low-latency and energy-efficient perception and action. In robotics, neuromorphic hardware for event-based vision and spiking neural networks promises to exhibit similar characteristics. However, robotic implementations have been limited to basic tasks with low-dimensional sensory inputs and motor actions because of the restricted network size in current embedded neuromorphic processors and the difficulties of training spiking neural networks. Here, we present a fully neuromorphic vision-to-control pipeline for controlling a flying drone. Specifically, we trained a spiking neural network that accepts raw event-based camera data and outputs low-level control actions for performing autonomous vision-based flight. The vision part of the network, consisting of five layers and 28,800 neurons, maps incoming raw events to ego-motion estimates and was trained with self-supervised learning on real event data. The control part consists of a single decoding layer and was learned with an evolutionary algorithm in a drone simulator. Robotic experiments show a successful sim-to-real transfer of the fully learned neuromorphic pipeline. The drone could accurately control its ego-motion, allowing for hovering, landing, and maneuvering sideways—even while yawing at the same time. The neuromorphic pipeline runs on board on Intel’s Loihi neuromorphic processor with an execution frequency of 200 hertz, consuming 0.94 watt of idle power and a mere additional 7 to 12 milliwatts when running the network. These results illustrate the potential of neuromorphic sensing and processing for enabling insect-sized intelligent robots.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"9 90","pages":""},"PeriodicalIF":25.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140946765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Science Robotics | Pub Date: 2024-05-15 | DOI: 10.1126/scirobotics.adl3606
Byungjoon Bae, Doeon Lee, Minseong Park, Yujia Mu, Yongmin Baek, Inbo Sim, Cong Shen, Kyusang Lee
{"title":"Stereoscopic artificial compound eyes for spatiotemporal perception in three-dimensional space","authors":"Byungjoon Bae, Doeon Lee, Minseong Park, Yujia Mu, Yongmin Baek, Inbo Sim, Cong Shen, Kyusang Lee","doi":"10.1126/scirobotics.adl3606","DOIUrl":"10.1126/scirobotics.adl3606","url":null,"abstract":"<div >Arthropods’ eyes are effective biological vision systems for object tracking and wide field of view because of their structural uniqueness; however, unlike mammalian eyes, they can hardly acquire the depth information of a static object because of their monocular cues. Therefore, most arthropods rely on motion parallax to track the object in three-dimensional (3D) space. Uniquely, the praying mantis (Mantodea) uses both compound structured eyes and a form of stereopsis and is capable of achieving object recognition in 3D space. Here, by mimicking the vision system of the praying mantis using stereoscopically coupled artificial compound eyes, we demonstrated spatiotemporal object sensing and tracking in 3D space with a wide field of view. Furthermore, to achieve a fast response with minimal latency, data storage/transportation, and power consumption, we processed the visual information at the edge of the system using a synaptic device and a federated split learning algorithm. The designed and fabricated stereoscopic artificial compound eye provides energy-efficient and accurate spatiotemporal object sensing and optical flow tracking. It exhibits a root mean square error of 0.3 centimeter, consuming only approximately 4 millijoules for sensing and tracking. These results are more than 400 times lower than conventional complementary metal-oxide semiconductor–based imaging systems. Our biomimetic imager shows the potential of integrating nature’s unique design using hardware and software codesigned technology toward capabilities of edge computing and sensing.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"9 90","pages":""},"PeriodicalIF":25.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140946772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Science Robotics | Pub Date: 2024-05-15 | DOI: 10.1126/scirobotics.adi8666
Yu Zhou, Zhibo Sun, Yucheng Ding, Zhengnan Yuan, Xiao Qiu, Yang Bryan Cao, Zhu’an Wan, Zhenghao Long, Swapnadeep Poddar, Shivam Kumar, Wenhao Ye, Chak Lam Jonathan Chan, Daquan Zhang, Beitao Ren, Qianpeng Zhang, Hoi-Sing Kwok, Mitch Guijun Li, Zhiyong Fan
{"title":"An ultrawide field-of-view pinhole compound eye using hemispherical nanowire array for robot vision","authors":"Yu Zhou, Zhibo Sun, Yucheng Ding, Zhengnan Yuan, Xiao Qiu, Yang Bryan Cao, Zhu’an Wan, Zhenghao Long, Swapnadeep Poddar, Shivam Kumar, Wenhao Ye, Chak Lam Jonathan Chan, Daquan Zhang, Beitao Ren, Qianpeng Zhang, Hoi-Sing Kwok, Mitch Guijun Li, Zhiyong Fan","doi":"10.1126/scirobotics.adi8666","DOIUrl":"10.1126/scirobotics.adi8666","url":null,"abstract":"<div >Garnering inspiration from biological compound eyes, artificial vision systems boasting a vivid range of diverse visual functional traits have come to the fore recently. However, most of these artificial systems rely on transformable electronics, which suffer from the complexity and constrained geometry of global deformation, as well as potential mismatches between optical and detector units. Here, we present a unique pinhole compound eye that combines a three-dimensionally printed honeycomb optical structure with a hemispherical, all-solid-state, high-density perovskite nanowire photodetector array. The lens-free pinhole structure can be designed and fabricated with an arbitrary layout to match the underlying image sensor. Optical simulations and imaging results matched well with each other and substantiated the key characteristics and capabilities of our system, which include an ultrawide field of view, accurate target positioning, and motion tracking function. We further demonstrate the potential of our unique compound eye for advanced robotic vision by successfully completing a moving target tracking mission.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"9 90","pages":""},"PeriodicalIF":25.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.science.org/doi/reader/10.1126/scirobotics.adi8666","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140946763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Science Robotics | Pub Date: 2024-05-15 | DOI: 10.1126/scirobotics.adp8107
Robin R. Murphy
{"title":"What to give your robot mother on Mother’s Day","authors":"Robin R. Murphy","doi":"10.1126/scirobotics.adp8107","DOIUrl":"10.1126/scirobotics.adp8107","url":null,"abstract":"<div ><i>Jung_E</i>, the 2023 science-fiction movie from South Korea, suggests that a novel leg-wheel hybrid for robot locomotion might be appreciated.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"9 90","pages":""},"PeriodicalIF":25.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140946773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Science Robotics | Pub Date: 2024-04-24 | DOI: 10.1126/scirobotics.adi9641
Joonho Lee, Marko Bjelonic, Alexander Reske, Lorenz Wellhausen, Takahiro Miki, Marco Hutter
{"title":"Learning robust autonomous navigation and locomotion for wheeled-legged robots","authors":"Joonho Lee, Marko Bjelonic, Alexander Reske, Lorenz Wellhausen, Takahiro Miki, Marco Hutter","doi":"10.1126/scirobotics.adi9641","DOIUrl":"https://doi.org/10.1126/scirobotics.adi9641","url":null,"abstract":"Autonomous wheeled-legged robots have the potential to transform logistics systems, improving operational efficiency and adaptability in urban environments. Navigating urban environments, however, poses unique challenges for robots, necessitating innovative solutions for locomotion and navigation. These challenges include the need for adaptive locomotion across varied terrains and the ability to navigate efficiently around complex dynamic obstacles. This work introduces a fully integrated system comprising adaptive locomotion control, mobility-aware local navigation planning, and large-scale path planning within the city. Using model-free reinforcement learning (RL) techniques and privileged learning, we developed a versatile locomotion controller. This controller achieves efficient and robust locomotion over various rough terrains, facilitated by smooth transitions between walking and driving modes. It is tightly integrated with a learned navigation controller through a hierarchical RL framework, enabling effective navigation through challenging terrain and various obstacles at high speed. Our controllers are integrated into a large-scale urban navigation system and validated by autonomous, kilometer-scale navigation missions conducted in Zurich, Switzerland, and Seville, Spain. These missions demonstrate the system’s robustness and adaptability, underscoring the importance of integrated control systems in achieving seamless navigation in complex environments. Our findings support the feasibility of wheeled-legged robots and hierarchical RL for autonomous navigation, with implications for last-mile delivery and beyond.","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"292 1","pages":""},"PeriodicalIF":25.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140642540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}