{"title":"High-Efficiency Vector Field by Time-Optimal Spatial Iterative Learning","authors":"Shuli Lv;Yan Gao;Quan Quan","doi":"10.1109/TRO.2025.3610174","DOIUrl":"10.1109/TRO.2025.3610174","url":null,"abstract":"This article presents a novel model-free spatial iterative learning (IL) framework to enhance the efficiency of vector field (VF) navigation for mobile robots. By integrating the idea of iterative learning control (ILC) control with VF, this framework utilizes historical data to enhance navigation efficiency significantly, reducing traversal time and expanding the applicability of IL to rapid navigation. Importantly, it has low-time complexity with <inline-formula><tex-math>$O(n)$</tex-math></inline-formula> per iteration, where <inline-formula><tex-math>$n$</tex-math></inline-formula> denotes the waypoints number, preventing the significant computational overhead caused by the increasing waypoints in existing methods, which often exceeds <inline-formula><tex-math>$O(n^{2})$</tex-math></inline-formula>, making it well-suited for real-time planning. Moreover, the approach is inherently model-free, leaning on historical data, thus enabling agile navigation with limited reliance on intricate model details. This article presents a comprehensive theoretical analysis of the stability, time optimality, time complexity, parameter insensitivity, robustness, and usage. Extensive simulations and experiments highlight its efficiency, promising a transformative impact on mobile robot navigation through the proposed IL.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5624-5644"},"PeriodicalIF":10.5,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Accurate, Efficient, and Robust RGB-D Simultaneous Localization and Mapping in Challenging Environments","authors":"Hui Zhao;Fuqiang Gu;Jianga Shang;Xianlei Long;Jiarui Dou;Chao Chen;Huayan Pu;Jun Luo","doi":"10.1109/TRO.2025.3610173","DOIUrl":"10.1109/TRO.2025.3610173","url":null,"abstract":"Visual simultaneous localization and mapping (SLAM) is crucial to many applications such as self-driving vehicles and robot tasks. However, it is still challenging for existing visual SLAM approaches to achieve good performance in low-texture or illumination-changing scenes. In recent years, some researchers have turned to edge-based SLAM approaches to deal with the challenging scenes, which are more robust than feature-based and direct SLAM methods. Nevertheless, existing edge-based methods are computationally expensive and inferior than other visual SLAM systems in terms of accuracy. In this study, we propose EdgeSLAM, a novel RGB-D edge-based SLAM approach to deal with challenging scenarios that is efficient, accurate, and robust. EdgeSLAM is built on two innovative modules: efficient edge selection and adaptive robust motion estimation. The edge selection module can efficiently select a small set of edge pixels, which significantly improves the computational efficiency without sacrificing the accuracy. The motion estimation module improves the system’s accuracy and robustness by adaptively handling outliers in motion estimation. Extensive experiments were conducted on technical university of munich (TUM) RGBD, imperial college london (ICL)-National University of Ireland Maynooth (NUIM), and ETH zurich 3D reconstruction (ETH3D) datasets, and experimental results show that EdgeSLAM significantly outperforms five state-of-the-art methods in terms of efficiency, accuracy, and robustness, which achieves 29.17% accuracy improvements with a high processing speed of up to 120 frames/s and a high positioning success rate of 97.06%.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5720-5739"},"PeriodicalIF":10.5,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predictive Body Awareness in Soft Robots: A Bayesian Variational Autoencoder Fusing Multimodal Sensory Data","authors":"Shuyu Wang;Dongling Liu;Changzeng Fu;Xiaoming Yuan;Peng Shan;Victor C.M. Leung","doi":"10.1109/TRO.2025.3610170","DOIUrl":"10.1109/TRO.2025.3610170","url":null,"abstract":"Predicting the causal flow by fusing multimodal perception is fundamental for constructing the bodily awareness of soft robots. However, forming such a predictive model while fusing the multimodal sensory data of soft robots remains challenging and less explored. In this study, we leverage the free energy principle within a Bayesian probabilistic deep learning framework to merge visual, pressure, and flex sensing signals. Our proposed multimodal association mechanism enhances the fusion process, establishing a robust computational methodology. We train the model using a newly collected dataset that captures the grasping dynamics of a soft gripper equipped with multimodal perception capabilities. By incorporating the current state and image differences, the forward model can predict the soft gripper’s physical interaction and movement in the image flow, which amounts to imagining future motion events. Moreover, we showcase effective predictions across modalities as well as for grasping outcomes. Notably, our enhanced variational autoencoder approach can pave the way for unprecedented possibilities of bodily awareness in soft robotics.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5663-5678"},"PeriodicalIF":10.5,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Plan Optimal Collision-Free Trajectories With Nonconvex Cost Functions Using Graphs of Convex Sets","authors":"Charles L. Clark;Biyun Xie","doi":"10.1109/TRO.2025.3610175","DOIUrl":"10.1109/TRO.2025.3610175","url":null,"abstract":"The recently developed approach to motion planning in graphs of convex sets (GCS) provides an efficient framework for computing shortest-distance collision-free paths using convex optimization. This new motion planner is notably more computationally efficient than popular sampling-based motion planners, but it does not support nonconvex cost functions. This article develops a novel motion planning algorithm, graph of convex sets with general costs (GCSGC), to solve this problem. A given nonconvex cost function is accurately approximated by a multiple-layer ReLU neural network and the configuration space is decomposed into a set of linear-cost regions using the hidden layers of the neural network. These linear-cost regions are intersected with a set of collision-free regions, and the resulting collision-free linear-cost regions are intersected to form the vertices and edges of the motion planner’s underlying graph structure. The edge costs have a closed-form solution within each collision-free linear-cost region, but it is nonconvex, so the McCormick relaxation is applied to convexify the edge costs. Finally, a graph preprocessing technique is developed to compute a representative graph structure that acts as a heuristic for the edge costs of the underlying GCS and then simplify the underlying graph structure by removing cycles and high-cost paths, which can significantly improve the efficiency of the planner and quality of the produced trajectories. The proposed motion planner is first validated in a 2-D configuration space with comparisons between different sized neural networks with and without preprocessing, comparisons between optimal trajectories from GCSGC with shortest-distance trajectories, and comparisons between GCSGC and GCS-Sequential linear programming (SLP). The GCSGC planner is further validated in a complex 7-D configuration space by comparing to state-of-the-art multiquery (PRM*, GCS-SLP) and single-query (TrajOpt, BIT*, AIT*, RRT*) planners. The results show that the proposed motion planner is very competitive in terms of computational efficiency, trajectory cost, and memory footprint. Two physical experiments further validate the effectiveness of the proposed motion planner in real-world motion planning applications.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5604-5623"},"PeriodicalIF":10.5,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tactile Robotics: An Outlook","authors":"Shan Luo;Nathan F. Lepora;Wenzhen Yuan;Kaspar Althoefer;Gordon Cheng;Ravinder Dahiya","doi":"10.1109/TRO.2025.3608686","DOIUrl":"10.1109/TRO.2025.3608686","url":null,"abstract":"Robotics research has long sought to give robots the ability to perceive the physical world through touch in an analogous manner to many biological systems. Developing such tactile capabilities is important for numerous emerging applications that require robots to co-exist and interact closely with humans. Consequently, there has been growing interest in tactile sensing, leading to the development of various technologies, including piezoresistive and piezoelectric sensors, capacitive sensors, magnetic sensors, and optical tactile sensors. These diverse approaches utilize different transduction methods and materials to equip robots with distributed sensing capabilities, enabling more effective physical interactions. These advances have been supported in recent years by simulation tools that generate large-scale tactile datasets to support sensor designs and algorithms to interpret and improve the utility of tactile data. The integration of tactile sensing with other modalities, such as vision, as well as with action strategies for active tactile perception highlights the growing scope of this field. To further the transformative progress in tactile robotics, a holistic approach is essential. In this outlook article, we examine several challenges associated with the current state of the art in tactile robotics and explore potential solutions to inspire innovations across multiple domains, including manufacturing, healthcare, recycling, and agriculture.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5564-5583"},"PeriodicalIF":10.5,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tip-Growing Robots: Design, Theory, Application","authors":"Shamsa Al Harthy;S.M. Hadi Sadati;Cédric Girerd;Sukjun Kim;Alessio Mondini;Zicong Wu;Brandon Saldarriaga;Carlo A. Seneci;Barbara Mazzolai;Tania K. Morimoto;Christos Bergeles","doi":"10.1109/TRO.2025.3608701","DOIUrl":"10.1109/TRO.2025.3608701","url":null,"abstract":"Growing robots apically extend through material eversion or deposition at their tip. This endows them with unique capabilities, such as follow the leader navigation, long-reach, inherent compliance, and large force delivery bandwidth. Tip-growing robots can therefore conform to sensitive, intricate, and difficult-to-access environments. This review article categorizes, compares, and critically evaluates state-of-the-art growing robots with emphasis on their designs, fabrication processes, actuation and steering mechanisms, mechanics models, controllers, and applications. Finally, this article discusses the main challenges that the research area still faces and proposes future directions.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5511-5532"},"PeriodicalIF":10.5,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Curb-Tracker: An Integrated Curb Following System for Autonomous Vehicles","authors":"Jiahao Liang;Yuanzhe Wang;Guohao Peng;Zhenyu Wu;Danwei Wang","doi":"10.1109/TRO.2025.3608695","DOIUrl":"10.1109/TRO.2025.3608695","url":null,"abstract":"Curb following is a critical technology for autonomous road sweeping vehicles. However, existing solutions face two primary challenges: 1) <italic>unreliable curb detection</i>; and 2) <italic>inefficient motion generation</i>. <italic>Unreliable curb detection</i> stems from the wide variability in curb dimensions and types, as well as interference from roadside features, such as vegetation and infrastructure. <italic>Inefficient motion generation</i> occurs when existing methods prioritize tracking accuracy while neglecting task completion efficiency, leading to prolonged operation times. To address these challenges, we propose Curb-Tracker, an integrated curb-following system designed for autonomous vehicles operating in diverse road environments. First, we develop a robust and adaptive curb detection algorithm that leverages a 2.5-D elevation map of the local environment and dynamically adjusts key parameters online to ensure reliable detection across varying scenarios. Second, to achieve accurate and efficient curb-aligned motion generation, we leverage model predictive contouring control as a tailored framework specifically designed for the curb-following task to generate an optimal control sequence for the vehicle to maintain a specified lateral offset from the curb while maximizing travel progress along it. The proposed system has been implemented on a Hunter 2.0, a front-wheel Ackerman-steering mobile robot, and has been validated through extensive experiments in both Gazebo simulation and real-world environments. Experimental results demonstrate the effectiveness, adaptability, and robustness of the proposed system across a wide range of road scenarios.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5491-5510"},"PeriodicalIF":10.5,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"quasi-Dynamic Crowd Vetting: Collaborative Detection of Malicious Robots in Dynamic Communication Networks","authors":"Matthew Cavorsi;Frederik Mallmann-Trenn;David Saldaña;Stephanie Gil","doi":"10.1109/TRO.2025.3608702","DOIUrl":"10.1109/TRO.2025.3608702","url":null,"abstract":"We are interested in the problem where robots traverse through an environment modeled by a graph of discrete sites, and an unknown subset of the multirobot team is malicious. Previous works require that each robot gathers information about the trustworthiness of all other robots, called <italic>trust observations</i>, which can be time consuming in large networks. This article decreases the time required to estimate trustworthiness by building upon an algorithm that leverages the concept of “crowd vetting” and the opinion of trusted neighbors. This allows each robot to estimate trust in dynamic scenarios, where the team size, robot neighborhoods, and robot legitimacy can change. In particular, we employ an assumption that there exists <italic>quasi-dynamic</i> time periods, where if a robot’s legitimacy remains fixed for a sufficient length of time, its trustworthiness can be characterized. In this setting, we develop a closed-form expression for the critical number of time-steps required for our algorithm to successfully identify the true legitimacy of each robot within a specified failure probability. We show that the number of time-steps required for robots to correctly estimate the trust of all other robots increases logarithmically with the number of robots when robots do not leverage neighboring opinions, called the <italic>direct protocol</i>. Conversely, for most general graph topologies, the number of time-steps required remains constant as the number of robots increases when our proposed algorithm, called <italic>quasi-dynamic crowd vetting</i> (DCV), is used, for a fixed ratio of legitimate to malicious robots. Finally, our theoretical results are successfully validated through simulated persistent surveillance tasks where robots maintain a desired distribution of robots over sites in the environment.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5533-5549"},"PeriodicalIF":10.5,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integration of Robot and Scene Kinematics for Sequential Mobile Manipulation Planning","authors":"Ziyuan Jiao;Yida Niu;Zeyu Zhang;YangYang Wu;Yao Su;Yixin Zhu;Hangxin Liu;Song-Chun Zhu","doi":"10.1109/TRO.2025.3605261","DOIUrl":"10.1109/TRO.2025.3605261","url":null,"abstract":"We present a sequential mobile manipulation planning framework that can solve long-horizon multistep mobile manipulation tasks with coordinated whole-body motion, even when interacting with articulated objects. By abstracting environmental structures as kinematic models and integrating them with the robot’s kinematics, we construct an augmented configuration space (A-Space) that unifies the previously separate task constraints for navigation and manipulation, while accounting for the joint reachability of the robot base, arm, and manipulated objects. This integration facilitates efficient planning within a tri-level framework: a task planner generates symbolic action sequences to model the evolution of A-Space, an optimization-based motion planner computes continuous trajectories within A-Space to achieve desired configurations for both the robot and scene elements, and an intermediate plan refinement stage selects action goals that ensure long-horizon feasibility. Our simulation studies first confirm that planning in A-Space achieves an 84.6% higher task success rate compared to baseline methods. Validation on real robotic systems demonstrates fluid mobile manipulation involving first, seven types of rigid and articulated objects across 17 distinct contexts, and second, long-horizon tasks of up to 14 sequential steps. Our results highlight the significance of modeling scene kinematics into planning entities, rather than encoding task-specific constraints, offering a scalable and generalizable approach to complex robotic manipulation.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5679-5699"},"PeriodicalIF":10.5,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144930758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}