{"title":"Structure-Preserving Model Order Reduction of Slender Soft Robots via Autoencoder-Parameterized Strain","authors":"Abdulaziz Y. Alkayas;Anup Teejo Mathew;Daniel Feliu-Talegon;Yahya Zweiri;Thomas George Thuruthel;Federico Renda","doi":"10.1109/LRA.2025.3606389","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606389","url":null,"abstract":"While soft robots offer advantages in adaptability and safe interaction, their modeling remains challenging. This letter presents a novel, data-driven approach for model order reduction of slender soft robots using autoencoder-parameterized strain within the Geometric Variable Strain (GVS) framework. We employ autoencoders (AEs) to learn low-dimensional strain parameterizations from data to construct reduced-order models (ROMs), preserving the Lagrangian structure of the system while significantly reducing the degrees of freedom. Our comparative analysis demonstrates that AE-based ROMs consistently outperform proper orthogonal decomposition (POD) approaches, achieving lower errors for equivalent degrees of freedom across multiple test cases. Additionally, we demonstrate that our proposed approach achieves computational speed-ups over the high-order models (HOMs) in all cases, and outperforms the POD-based ROM in scenarios where accuracy is matched. We highlight the intrinsic dimensionality discovery capabilities of autoencoders, revealing that HOM often operate in lower-dimensional nonlinear manifolds. Through both simulation and experimental validation on a cable-actuated soft manipulator, we demonstrate the effectiveness of our approach, achieving near-identical behavior with just a single degree of freedom. This structure-preserving method offers significant reductions in the system degrees of freedom and computational effort while maintaining physical model interpretability, offering a promising direction for soft robot modeling and control.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"11006-11013"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11150703","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Previous Knowledge Utilization in Online Anytime Belief Space Planning","authors":"Michael Novitsky;Moran Barenboim;Vadim Indelman","doi":"10.1109/LRA.2025.3606381","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606381","url":null,"abstract":"Online planning under uncertainty remains a critical challenge in robotics and autonomous systems. While tree search techniques are commonly employed to construct partial future trajectories within computational constraints, most existing methods discard information from previous planning sessions considering continuous spaces. This study presents a novel, computationally efficient approach that leverages historical planning data in current decision-making processes. We provide theoretical foundations for our information reuse strategy and introduce an algorithm based on Monte Carlo Tree Search (MCTS) that implements this approach. Experimental results demonstrate that our method significantly reduces computation time while maintaining high performance levels. Our findings suggest that integrating historical planning information can substantially improve the efficiency of online decision-making in uncertain environments, paving the way for more responsive and adaptive autonomous systems.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10950-10957"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Blockchain Framework for Equitable and Secure Task Allocation in Robot Swarms","authors":"Hanqing Zhao;Alexandre Pacheco;Giovanni Beltrame;Xue Liu;Marco Dorigo;Gregory Dudek","doi":"10.1109/LRA.2025.3606349","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606349","url":null,"abstract":"Recent studies demonstrate the potential of blockchain to enable robots in a swarm to achieve secure consensus about the environment, particularly when robots are homogeneous and perform identical tasks. Typically, robots receive rewards for their contributions to consensus achievement, but no studies have yet targeted heterogeneous swarms, in which the robots have distinct physical capabilities suited to different tasks. We present a novel framework that leverages domain knowledge to decompose the swarm mission into a hierarchy of tasks within smart contracts. This allows the robots to reach a consensus about both the environment and the action plan, allocating tasks among robots with diverse capabilities to improve their performance while maintaining security against faults and malicious behaviors. We refer to this concept as <italic>equitable and secure</i> task allocation. Validated in Simultaneous Localization and Mapping missions, our approach not only achieves equitable task allocation among robots with varying capabilities, improving mapping accuracy and efficiency, but also shows resilience against malicious attacks.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10862-10869"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145036746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Receding Horizon Control for Signal Temporal Logic Using Robustness-Conserving Partial Formula Evaluation","authors":"Roland Ilyes;Lara Brudermüller;Nick Hawes;Bruno Lacerda","doi":"10.1109/LRA.2025.3606350","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606350","url":null,"abstract":"We present a bounded-memory receding horizon approach to robot control for complex specifications in dynamic environments. We use Signal Temporal Logic, a logic that quantifies how robustly trajectories satisfy the specification, to specify robot behavior. To handle unbounded specifications, we consider a short planning horizon, only searching for nonviolating trajectories. We identify the subset of Signal Temporal Logic for which this approach needs only a bounded memory of the past, and leverage syntactic separation to summarize the robust satisfaction of the trajectory as it evolves. We implement our approach using receding horizon control in dynamic environments. We demonstrate the effectiveness and scalability of our approach compared to the state-of-the-art approach in several case studies.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10775-10782"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11150694","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145036915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spin Swimmer : A Fast, Efficient and Agile Fish-Like Robot","authors":"Prashanth Chivkula;Phanindra Tallapragada","doi":"10.1109/LRA.2025.3606357","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606357","url":null,"abstract":"Engineers and scientists designing underwater robots have sought to emulate the speed, efficiency, and agility of fish. Much of the engineering of fish-like robotics reduces to the design of soft or articulated multi-body tails that can oscillate or undulate at frequencies and amplitudes similar to those of the fish they seek to mimic in the hope of achieving their efficiency and speed. Such kinematic approaches do not account for the dynamic interaction between power efficient actuation, response of flexible appendages and hydrodynamic forces. This letter presents a fundamentally novel means of mechanical actuation: a fast spinning unbalanced rotor internal to the body of the robot, that transfers a periodic axial force to an otherwise passive flexible tail. The net result is that the tail acts as a parametric oscillator that undergoes a <inline-formula><tex-math>$2:1$</tex-math></inline-formula> subharmonic resonance. High tail-beat frequencies are achieved with minimal input power due to this parametric resonance. The resulting robot has the lowest cost of transport amongst free swimming robots while also being fast, extremely agile and gyroscopically roll and pitch stable. The results demonstrate the importance of exploiting parametric resonances in designing efficient fish-like robots.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10942-10949"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unveiling $SO(3)$ Parallel Robot Variants: Application of the Optimal Robot to a Humanoid Eye","authors":"Hassen Nigatu;Jihao Li;Gaokun Shi;Jianguo Wang;Guodong Lu;Howard Li;Huixu Dong","doi":"10.1109/LRA.2025.3606380","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606380","url":null,"abstract":"This study presents a systematic motion analysis and classification of <inline-formula><tex-math>$SO(3)$</tex-math></inline-formula>-type parallel robot variants using an analytical Lie algebra approach. These robots are known for their ability to perform arbitrary rotations around a fixed point, making them suitable for various applications. Despite their architectural diversity, existing research has largely treated them on a case-by-case basis, limiting the exploration of all potential variants and the benefits derived from this diversity. By applying a generalized analytical approach through the reciprocal screw method, we systematically examine the kinematic conditions for limbs that generate <inline-formula><tex-math>$SO(3)$</tex-math></inline-formula> motion. As a result, we identify 73 distinct non-redundant limb types capable of producing the desired <inline-formula><tex-math>$SO(3)$</tex-math></inline-formula> motion. Our approach includes an in-depth algebraic motion-constraint analysis, uncovering common characteristics across different variants. This leads us to identify 73 symmetric and 5,256 asymmetric variants, for a total of 5,329, each with unique capabilities. Finally, we selected a computationally optimized, miniaturized robot from this set for use in a humanoid eye system.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"11227-11234"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145090005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Light Reflection-Guided RRT$^{*}$: Efficient Path Planning in Narrow Passages","authors":"Xiaotong Xun;Runda Zhang;Senchun Chai;Runqi Chai;Yuanqing Xia","doi":"10.1109/LRA.2025.3606385","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606385","url":null,"abstract":"In complex and constrained environments, robot path planning faces the dual challenges of efficiency and solution quality. This letter presents a Light Reflection Heuristic RRT<inline-formula><tex-math>$^{*}$</tex-math></inline-formula> algorithm (LRH-RRT<inline-formula><tex-math>$^{*}$</tex-math></inline-formula>), which generates the reference path by simulating light reflections along obstacle boundaries and adaptively adjusts the sampling distribution. A dynamic path pruning strategy is introduced to eliminate redundant nodes, and third-order Bézier curve interpolation is applied to smooth the path while satisfying the dynamic constraints of mobile robots. Experimental results demonstrate that LRH-RRT<inline-formula><tex-math>$^{*}$</tex-math></inline-formula> improves planning efficiency and path quality in various narrow passage scenarios.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"11474-11481"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145210183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VIRAA-SLAM: Flexible Robust Visual-Inertial-Range-AOA Tightly-Coupled Localization","authors":"Xingyu Ma;Ningyan Guo;Rui Xin;Zhigang Cen;Zhiyong Feng","doi":"10.1109/LRA.2025.3606384","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606384","url":null,"abstract":"In this letter, we propose a novel tightly-coupled fusion framework for robust and accurate long-term localization in fast-motion scenarios, integrating a monocular camera, a 6-DoF inertial measurement unit (IMU), and multiple position-unknown ultra-wideband (UWB) anchors. Unlike existing UWB fusion methods that rely on pre-calibrated anchors' positions, our approach leverages the relative UWB-derived angle and ranging measurements to constrain relative frame-to-frame relationships within a sliding window. These constraints are converted into priors through marginalization, significantly simplifying system complexity and the fusion process. Crucially, our method eliminates the need for the anchors' location estimations, supports an arbitrary number of anchors, and maintains robustness even under prolonged visual degradation. Experimental validation includes a challenging scenario where visual data is discarded between 15–60 seconds, demonstrating sustained operation without vision. Accuracy evaluations confirm that our method achieves superior performance compared to VINS-Mono, highlighting its precision and resilience in dynamic environments.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10658-10665"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Reusability of Learned Skills for Robot Manipulation via Gaze Information and Motion Bottlenecks","authors":"Ryo Takizawa;Izumi Karino;Koki Nakagawa;Yoshiyuki Ohmura;Yasuo Kuniyoshi","doi":"10.1109/LRA.2025.3606390","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606390","url":null,"abstract":"Autonomous agents capable of diverse object manipulations should be able to acquire a wide range of manipulation skills with high reusability. Although advances in deep learning have made it increasingly feasible to replicate the dexterity of human teleoperation in robots, generalizing these acquired skills to previously unseen scenarios remains a significant challenge. In this study, we propose a novel algorithm, Gaze-based Bottleneck-aware Robot Manipulation (GazeBot), which enables high reusability of learned motions without sacrificing dexterity or reactivity. By leveraging gaze information and motion bottlenecks—both crucial features for object manipulation—GazeBot achieves high success rates compared with state-of-the-art imitation learning methods, particularly when the object positions and end-effector poses differ from those in the provided demonstrations. Furthermore, the training process of GazeBot is entirely data-driven once a demonstration dataset with gaze data is provided.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10737-10744"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Push-Grasp Policy Learning Using Equivariant Models and Grasp Score Optimization","authors":"Boce Hu;Heng Tian;Dian Wang;Haojie Huang;Xupeng Zhu;Robin Walters;Robert Platt","doi":"10.1109/LRA.2025.3606392","DOIUrl":"https://doi.org/10.1109/LRA.2025.3606392","url":null,"abstract":"Goal-conditioned robotic grasping in cluttered environments remains a challenging problem due to occlusions caused by surrounding objects, which prevent direct access to the target object. A promising solution to mitigate this issue is combining pushing and grasping policies, enabling active rearrangement of the scene to facilitate target retrieval. However, existing methods often overlook the rich geometric structures inherent in such tasks, thus limiting their effectiveness in complex, heavily cluttered scenarios. To address this, we propose the Equivariant Push-Grasp Network, a novel framework for joint pushing and grasping policy learning. Our contributions are twofold: (1) leveraging <inline-formula><tex-math>$text{SE}(2)$</tex-math></inline-formula>-equivariance to improve both pushing and grasping performance and (2) a grasp score optimization-based training strategy that simplifies the joint learning process. Experimental results show that our method improves grasp success rates by 45% in simulation and by 35% in real-world scenarios compared to strong baselines, representing a significant advancement in push-grasp policy learning.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"11180-11187"},"PeriodicalIF":5.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145090037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}