{"title":"Dynamic Control of Multimodal Motion for Bistable Soft Millirobots in Complex Environments","authors":"Zhengyuan Xin;Shihao Zhong;Anping Wu;Zhiqiang Zheng;Qing Shi;Qiang Huang;Toshio Fukuda;Huaping Wang","doi":"10.1109/TRO.2025.3551541","DOIUrl":"10.1109/TRO.2025.3551541","url":null,"abstract":"Soft millirobots are highly promising for biomedical applications due to their reconfigurability and multifunctionality within physiological environments. However, the diverse and narrow biological cavity environments pose significant adaptability challenges for these millirobots. Here, we present a dual-morphology, thin-film millirobot equipped with a magnetic drive head and a functional tail to facilitate multimodal motion and targeted cell delivery. The millirobot can reversibly switch between two distinct morphologies in response to environmental stimuli through the deformation of its hydrogel body. Utilizing these dual morphologies, the millirobot can perform robust multimodal fundamental motions controlled by magnetic fields. We encapsulate fundamental motions with specific programmable magnetic field parameters into motion primitives, allowing easy invocation and adjustment of motion modes on demand. A knowledge graph is established to map terrain features to motion units, enabling the identification of optimal motion modes based on typical terrain characteristics. Experimental results indicate that the millirobot can effectively switch its morphology and movement modes to navigate various terrains, including narrow and curved channels as small as 1 mm, 0.8-mm-high stairs with a 15° incline, and even the complex environment of a swine intestinal lumen. Its functional tail can carry immune cells to target and kill cancer cells. This robot can transport drugs and cells while navigating complex terrains through multimodal motion, paving the way for targeted medical tasks in intricate human environments.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2662-2676"},"PeriodicalIF":9.4,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143631490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Informative Path Planning for Active Regression With Gaussian Processes via Sparse Optimization","authors":"Shamak Dutta;Nils Wilde;Stephen L. Smith","doi":"10.1109/TRO.2025.3548865","DOIUrl":"10.1109/TRO.2025.3548865","url":null,"abstract":"We study informative path planning for active regression in Gaussian Processes (GP). Here, a resource-constrained robot team collects measurements of an unknown function, assumed to be a sample from a GP, with the goal of minimizing the trace of the <inline-formula><tex-math>$M$</tex-math></inline-formula>-weighted expected squared estimation error covariance (where <inline-formula><tex-math>$M$</tex-math></inline-formula> is a positive semidefinite matrix) resulting from the GP posterior mean. While greedy heuristics are a popular solution in the case of length-constrained paths, it remains a challenge to compute <italic>optimal</italic> solutions in the discrete setting subject to routing constraints. We show that this challenge is surprisingly easy to circumvent. Using the optimality of the posterior mean for a class of functions of the squared loss yields an exact formulation as a mixed integer program. We demonstrate that this approach finds optimal solutions in a variety of settings in seconds and, when terminated early, finds sub-optimal solutions of higher quality than existing heuristics.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2184-2199"},"PeriodicalIF":9.4,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143575367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
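The objective described in the record above, the trace of the GP posterior covariance induced by a chosen set of measurement sites, can be illustrated with a small self-contained sketch. The kernel, candidate sites, and brute-force subset search below are illustrative assumptions, not the paper's mixed-integer formulation with routing constraints:

```python
import numpy as np
from itertools import combinations

def rbf_kernel(A, B, ell=1.0, sf=1.0):
    # Squared-exponential covariance between point sets A (n,d) and B (m,d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def posterior_trace(X_pred, X_meas, noise=1e-2):
    # Trace of the GP posterior covariance at X_pred after observing
    # noisy measurements at X_meas (unweighted case, M = identity).
    Kpp = rbf_kernel(X_pred, X_pred)
    Kpm = rbf_kernel(X_pred, X_meas)
    Kmm = rbf_kernel(X_meas, X_meas) + noise * np.eye(len(X_meas))
    post = Kpp - Kpm @ np.linalg.solve(Kmm, Kpm.T)
    return np.trace(post)

# Brute force: pick the 2 of 5 candidate sites minimizing the trace.
grid = np.linspace(0, 1, 20)[:, None]          # prediction locations
candidates = np.linspace(0, 1, 5)[:, None]     # possible measurement sites
best = min(combinations(range(5), 2),
           key=lambda S: posterior_trace(grid, candidates[list(S)]))
```

The paper replaces this exponential subset search with an exact mixed-integer program; the sketch only shows the quantity being minimized.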
{"title":"Multiscale and Uncertainty-Aware Targetless Hand-Eye Calibration via the Gauss–Helmert Model","authors":"Marta Čolaković-Bencerić;Juraj Peršić;Ivan Marković;Ivan Petrović","doi":"10.1109/TRO.2025.3548538","DOIUrl":"10.1109/TRO.2025.3548538","url":null,"abstract":"The operational reliability of an autonomous robot depends crucially on extrinsic sensor calibration as a prerequisite for precise and accurate data fusion. The calibration of unscaled sensors (e.g., monocular cameras) and the effective utilization of uncertainties are difficult and often overlooked problems. We develop a solution for simultaneous hand-eye calibration and scale estimation based on the Gauss–Helmert model, which exploits the valuable information contained in the uncertainty of odometry. In this work, we propose a versatile and robust solution for batch calibration based on the analytical on-manifold approach for estimation. The versatility of our method is demonstrated by its ability to calibrate multiple unscaled and metric-scaled sensors while dealing with odometry failures and reinitializations. Importantly, all estimated parameters are provided with their corresponding uncertainties. The validation of our method and its comparison with five competing state-of-the-art calibration methods in both simulations and real-world experiments show its superior accuracy, with particularly promising results observed in high-noise scenarios.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2340-2357"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
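The record above concerns the uncertainty-aware Gauss–Helmert formulation of hand-eye calibration; the underlying rotational constraint in any hand-eye problem is the classic AX = XB relation between sensor motions. The sketch below is not the paper's method: it is a textbook axis-alignment solve on synthetic noiseless data, with all names and values illustrative:

```python
import numpy as np

def rot_log(R):
    # Axis-angle vector of a rotation matrix (assumes angle in (0, pi)).
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / np.linalg.norm(w)

def rodrigues(v):
    # Rotation matrix from an axis-angle vector (nonzero angle assumed).
    theta = np.linalg.norm(v)
    k = v / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def hand_eye_rotation(As, Bs):
    # AX = XB implies the motion axes satisfy a_i = R_X b_i; recover R_X
    # with an SVD (Kabsch) fit over all motion pairs.
    M = sum(np.outer(rot_log(A), rot_log(B)) for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(M)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

rng = np.random.default_rng(0)
R_X = rodrigues(np.array([0.3, -0.5, 0.8]))        # ground-truth extrinsic
Bs = [rodrigues(rng.normal(size=3)) for _ in range(5)]
As = [R_X @ B @ R_X.T for B in Bs]                 # exact AX = XB motions
R_est = hand_eye_rotation(As, Bs)
```

The paper goes well beyond this: it estimates scale, handles odometry reinitializations, and propagates full uncertainties through the Gauss–Helmert model, none of which this toy solve attempts.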
{"title":"Physics-Informed Neural Mapping and Motion Planning in Unknown Environments","authors":"Yuchen Liu;Ruiqi Ni;Ahmed H. Qureshi","doi":"10.1109/TRO.2025.3548495","DOIUrl":"10.1109/TRO.2025.3548495","url":null,"abstract":"Mapping and motion planning are two essential elements of robot intelligence that are interdependent in generating environment maps and navigating around obstacles. Existing mapping methods create maps that require computationally expensive motion planning tools to find a path solution. In this article, we propose a new mapping feature called arrival time fields, which are solutions of the Eikonal equation. The arrival time fields can directly guide the robot in navigating the given environments. Therefore, this article introduces a new approach called active neural time fields, which is a physics-informed neural framework that actively explores the unknown environment and maps its arrival time field on the fly for robot motion planning. Our method does not require any expert data for learning and uses neural networks to directly solve the Eikonal equation for arrival time field mapping and motion planning. We benchmark our approach against state-of-the-art mapping and motion planning methods and demonstrate its superior performance in both simulated and real-world environments with a differential drive robot and a six-degree-of-freedom robot manipulator.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2200-2212"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
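The arrival time field named in the record above is the viscosity solution T of the Eikonal equation |∇T| = 1/v with T = 0 at the source; following -∇T from any cell traces a path back to the source. The paper solves this with a physics-informed neural network; the sketch below instead uses a classical grid-based fast-sweeping scheme, purely to illustrate what the field is:

```python
import numpy as np

def eikonal_2d(speed, src, h=1.0, n_sweeps=8):
    # Arrival time T solving |grad T| = 1/speed on a 2-D grid via
    # Gauss-Seidel fast sweeping with the Godunov upwind update.
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    for _ in range(n_sweeps):
        for di in (1, -1):                      # alternate sweep directions
            for dj in (1, -1):
                for i in range(ny)[::di]:
                    for j in range(nx)[::dj]:
                        a = min(T[max(i - 1, 0), j], T[min(i + 1, ny - 1), j])
                        b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, nx - 1)])
                        if np.isinf(a) and np.isinf(b):
                            continue            # no upwind information yet
                        f = h / speed[i, j]
                        if abs(a - b) >= f:     # one-sided update
                            t = min(a, b) + f
                        else:                   # two-sided (quadratic) update
                            t = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                        T[i, j] = min(T[i, j], t)
    return T

speed = np.ones((32, 32))                       # uniform unit speed
T = eikonal_2d(speed, src=(0, 0))
```

For uniform speed the field approximates Euclidean distance from the source (exact along grid axes, first-order accurate on diagonals); the paper's contribution is learning such fields online in unknown environments rather than on a known grid.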
{"title":"Night-Voyager: Consistent and Efficient Nocturnal Vision-Aided State Estimation in Object Maps","authors":"Tianxiao Gao;Mingle Zhao;Chengzhong Xu;Hui Kong","doi":"10.1109/TRO.2025.3548540","DOIUrl":"10.1109/TRO.2025.3548540","url":null,"abstract":"Accurate and robust state estimation at nighttime is essential for autonomous robotic navigation to achieve nocturnal or round-the-clock tasks. An intuitive question arises: can low-cost standard cameras be exploited for nocturnal state estimation? Regrettably, most existing visual methods may fail under adverse illumination conditions, even with active lighting or image enhancement. A pivotal insight, however, is that streetlights in most urban scenarios act as stable and salient prior visual cues at night, reminiscent of stars aiding spacecraft in interstellar navigation. Inspired by this, we propose Night-Voyager, an object-level nocturnal vision-aided state estimation framework that leverages prior object maps and keypoints for versatile localization. We also find that the primary limitation of conventional visual methods under poor lighting conditions stems from the reliance on pixel-level metrics. In contrast, metric-agnostic, nonpixel-level object detection serves as a bridge between pixel-level and object-level spaces, enabling effective propagation and utilization of object map information within the system. Night-Voyager begins with a fast initialization to solve the global localization problem. By employing an effective two-stage cross-modal data association, the system delivers globally consistent state updates using map-based observations. To address the challenge of significant uncertainties in visual observations at night, a novel matrix Lie group formulation and a feature-decoupled multistate invariant filter are introduced, ensuring consistent and efficient estimation. Through comprehensive experiments in both simulation and diverse real-world scenarios (spanning approximately 12.3 km), Night-Voyager showcases its efficacy, robustness, and efficiency, filling a critical gap in nocturnal vision-aided state estimation.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2105-2126"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous Flights Inside Narrow Tunnels","authors":"Luqi Wang;Yan Ning;Hongming Chen;Peize Liu;Yang Xu;Hao Xu;Ximin Lyu;Shaojie Shen","doi":"10.1109/TRO.2025.3548525","DOIUrl":"10.1109/TRO.2025.3548525","url":null,"abstract":"In applications such as inspection and search and rescue, multirotors are often required to enter confined, narrow tunnels that are barely accessible to humans. This task is extremely challenging: the lack of geometric features and illumination, together with the limited field of view, causes problems in perception, while the restricted space and significant ego airflow disturbances induce control issues. This article introduces an autonomous aerial system designed for navigation through tunnels as narrow as 0.5 m in diameter. The real-time and online system includes a virtual omni-directional perception module tailored for the mission and a novel motion planner that incorporates perception and ego airflow disturbance factors modeled using camera projections and computational fluid dynamics analyses, respectively. Extensive flight experiments on a custom-designed quadrotor are conducted in multiple realistic narrow tunnels to validate the superior performance of the system, even over human pilots, proving its potential for real-world applications. In addition, a deployment pipeline for other multirotor platforms is outlined and open-source packages are provided for future developments.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2230-2250"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CURE: Simulation-Augmented Autotuning in Robotics","authors":"Md Abir Hossen;Sonam Kharade;Jason M. O'Kane;Bradley Schmerl;David Garlan;Pooyan Jamshidi","doi":"10.1109/TRO.2025.3548546","DOIUrl":"10.1109/TRO.2025.3548546","url":null,"abstract":"Robotic systems are typically composed of various subsystems, such as localization and navigation, each encompassing numerous configurable components (e.g., selecting different planning algorithms). Once an algorithm has been selected for a component, its associated configuration options must be set to the appropriate values. Configuration options across the system stack interact nontrivially. Finding optimal configurations for highly configurable robots to achieve desired performance poses a significant challenge due to the interactions between configuration options across software and hardware, which result in an exponentially large and complex configuration space. These challenges are further compounded by the need for transferability between different environments and robotic platforms. Data-efficient optimization algorithms (e.g., Bayesian optimization) have been increasingly employed to automate the tuning of configurable parameters in cyber-physical systems. However, such optimization algorithms often converge only late, after exhausting the allocated budget (e.g., optimization steps or allotted time), and lack transferability. This article proposes causal understanding and remediation for enhancing robot performance (<monospace>CURE</monospace>), a method that identifies causally relevant configuration options, enabling the optimization process to operate in a reduced search space and thereby enabling faster optimization of robot performance. <monospace>CURE</monospace> abstracts the causal relationships between various configuration options and the robot performance objectives by learning a causal model in the source (a low-cost environment such as the Gazebo simulator) and applying the learned knowledge to perform optimization in the target (e.g., <italic>Turtlebot 3</italic> physical robot). We demonstrate the effectiveness and transferability of <monospace>CURE</monospace> by conducting experiments that involve varying degrees of deployment changes in both physical robots and simulation.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2825-2842"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Environment-Centric Learning Approach for Gait Synthesis in Terrestrial Soft Robots","authors":"Caitlin Freeman;Arun Niddish Mahendran;Vishesh Vikas","doi":"10.1109/TRO.2025.3548543","DOIUrl":"10.1109/TRO.2025.3548543","url":null,"abstract":"Locomotion gaits are fundamental for control of soft terrestrial robots. However, synthesis of these gaits is challenging due to the difficulty of modeling robot-environment interaction and the lack of a mathematical framework. This work presents an environment-centric, data-driven, and fault-tolerant probabilistic model-free control framework that allows soft multilimb robots to learn from their environment and synthesize diverse sets of locomotion gaits for realizing open-loop control. Here, discretization of factors dominating robot-environment interactions enables an environment-specific graphical representation where the edges encode experimental locomotion data corresponding to the robot motion primitives. In this graph, locomotion gaits are defined as simple cycles that are transformation invariant, i.e., the locomotion is independent of the starting vertex of these periodic cycles. Gait synthesis, the problem of finding optimal locomotion gaits for a given substrate, is formulated as binary integer linear programming problems with a linearized cost function, linear constraints, and iterative simple cycle detection. Experimentally, gaits are synthesized for varying robot-environment interactions. Variables include robot morphology (three-limb and four-limb robots, TerreSoRo-III and TerreSoRo-IV); substrate (rubber mat, whiteboard, and carpet); and actuator functionality (simulated loss of robot limb actuation). On average, gait synthesis improves the translation and rotation speeds by 82% and 97%, respectively. The results highlight that data-driven methods are vital to soft robot locomotion control due to complex robot-environment interactions and simulation-to-reality gaps, particularly when biological analogues are unavailable.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2144-2163"},"PeriodicalIF":9.4,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
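The record above casts gaits as transformation-invariant simple cycles in a motion-primitive graph. The toy sketch below conveys that idea with a hypothetical three-vertex graph and made-up edge displacements, using brute-force cycle enumeration where the paper uses binary integer linear programming:

```python
import itertools

# Hypothetical motion-primitive graph: vertices are robot configurations,
# directed edges carry a measured displacement (cm) for that primitive.
edges = {
    ("A", "B"): 1.2, ("B", "C"): 0.8, ("C", "A"): 1.5,
    ("B", "A"): -0.2, ("C", "B"): 0.3, ("A", "C"): 0.4,
}

def best_gait(edges):
    # Enumerate simple cycles by brute force and return the one with the
    # highest mean displacement per primitive executed.
    verts = sorted({v for e in edges for v in e})
    best, best_rate = None, float("-inf")
    for k in range(2, len(verts) + 1):
        for cyc in itertools.permutations(verts, k):
            legs = list(zip(cyc, cyc[1:] + cyc[:1]))   # close the cycle
            if all(leg in edges for leg in legs):
                rate = sum(edges[leg] for leg in legs) / k
                if rate > best_rate:
                    best, best_rate = cyc, rate
    return best, best_rate

gait, rate = best_gait(edges)
```

Transformation invariance shows up naturally here: every rotation of a cycle traverses the same legs and scores the same rate, so which vertex the gait "starts" from is irrelevant. Brute force is exponential in graph size, which is why the paper resorts to an integer-programming formulation.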
{"title":"Composite Whole-Body Control of Two-Wheeled Robots","authors":"Grazia Zambella;Danilo Caporale;Giorgio Grioli;Lucia Pallottino;Antonio Bicchi","doi":"10.1109/TRO.2025.3548494","DOIUrl":"10.1109/TRO.2025.3548494","url":null,"abstract":"Due to their fast and efficient locomotion, two-wheeled humanoids are fascinating systems with the potential to serve many application domains, including healthcare and manufacturing. However, these robots constitute a challenging case study for control because the two-wheeled inverted-pendulum dynamics that characterizes their mobility and support is underactuated and unstable. In this article, we propose a novel whole-body control approach to stabilize two-wheeled humanoids. To tackle the control of forward motion and pitch equilibrium, we leverage the observation that such systems typically exhibit a faster and a slower dynamics (the pitch angle being the faster and the forward displacement the slower) and design a composite whole-body controller that combines two computed-torque control loops to stabilize both dynamics along desired trajectories. The control approach is first derived for the simpler case of a two-wheeled inverted pendulum and then extended to a full two-wheeled humanoid. To prove its validity, the control approach is tested experimentally on the two-wheeled humanoid robot Alter-Ego. The robot proves able to perform complicated interaction tasks, including opening a door, grasping a heavy object, and resisting external dynamic disturbances.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2301-2321"},"PeriodicalIF":9.4,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10914559","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TacSL: A Library for Visuotactile Sensor Simulation and Learning","authors":"Iretiayo Akinola;Jie Xu;Jan Carius;Dieter Fox;Yashraj Narang","doi":"10.1109/TRO.2025.3547267","DOIUrl":"10.1109/TRO.2025.3547267","url":null,"abstract":"For both humans and robots, the sense of touch, known as tactile sensing, is critical for performing contact-rich manipulation tasks. Three key challenges in robotic tactile sensing are interpreting sensor signals, generating sensor signals in novel scenarios, and learning sensor-based policies. For visuotactile sensors, interpretation has been facilitated by their close relationship with vision sensors (e.g., RGB cameras). However, generation is still difficult, as visuotactile sensors typically involve contact, deformation, illumination, and imaging, all of which are expensive to simulate; in turn, policy learning has been challenging, as simulation cannot be leveraged for large-scale data collection. We present <italic>TacSL</italic> (<italic>taxel</italic>), a library for GPU-based visuotactile sensor simulation and learning. <italic>TacSL</italic> can be used to simulate visuotactile images and extract contact-force distributions over <inline-formula><tex-math>$200\\times$</tex-math></inline-formula> faster than the prior state-of-the-art, all within the widely used Isaac simulator. Furthermore, <italic>TacSL</italic> provides a learning toolkit containing multiple sensor models, contact-intensive training environments, and online/offline algorithms that can facilitate policy learning for sim-to-real applications. On the algorithmic side, we introduce a novel online reinforcement-learning algorithm called asymmetric actor-critic distillation, designed to effectively and efficiently learn tactile-based policies in simulation that can transfer to the real world. Finally, we demonstrate the utility of our library and algorithms by evaluating the benefits of distillation and multimodal sensing for contact-rich manipulation tasks and, most critically, performing sim-to-real transfer.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2645-2661"},"PeriodicalIF":9.4,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}