IEEE Transactions on Robotics: Latest Articles

Development of an Electromagnetic Coil Array System for Large-Scale Ferrofluid Droplet Robots Programmable Control
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-30 DOI: 10.1109/TRO.2025.3584430
Guangming Cui;Haozhi Huang;Xianrui Zhang;Yueyue Liu;Qigao Fan;Yining Xu;Ang Liu;Baijin Mao;Tian Qiu;Juntian Qu
Abstract: Programmable manipulation of fluid-based soft robots has recently attracted considerable attention, yet parallel control of large-scale ferrofluid droplet robots (FDRs) remains a major unsolved challenge. In this article, we develop a distributed magnetic-field control platform that generates a series of localized magnetic fields, enabling simultaneous control of many FDRs so that teams of FDRs can collaborate in parallel on multifunctional manipulation tasks. Based on a finite-element mathematical model, we first evaluate the distribution properties of the local magnetic fields and the gradients generated by individual electromagnets. We also characterize the locomotion and deformation behavior of the FDR to verify the actuation performance of the developed system. We then present a vision-based closed-loop feedback control strategy for path tracking of multiple robot formations. Thermal analysis shows that the system's low output power enables reliable, sustained long-term operation. Finally, the developed system is tested through extensive physical experiments with different numbers of FDRs. The results demonstrate the potential of the designed setup to manipulate dozens of FDRs for digital display, message encoding, and microfluidic logistics. To the best of the authors' knowledge, this is the first attempt at independent control of droplet robots at this scale (up to 72) for cooperative applications.

Vol. 41, pp. 4342–4359 · Citations: 0
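As generic background on gradient-based magnetic actuation (not taken from the paper; all values below are invented for illustration): a magnetized droplet with moment m in a nonuniform field experiences the gradient force F = ∇(m·B), which in one dimension along a coil axis reduces to F = m·dB/dz. A minimal numerical sketch with an assumed exponential field profile:

```python
import numpy as np

# Assumed 1-D field profile B(z) along a coil axis (illustrative values, tesla).
z = np.linspace(0.0, 0.05, 501)      # 0-5 cm above the coil, metres
B = 0.02 * np.exp(-z / 0.01)         # field decaying away from the coil

m = 1e-4                             # droplet magnetic moment, A*m^2 (assumed)

# Gradient force on a dipole aligned with the field: F = m * dB/dz
F = m * np.gradient(B, z)

print(F[0])   # strongest (negative, i.e., attractive) pull at the coil surface
```

The force decays with distance from the coil, which is why an array of many coils, each producing a localized field, can address individual droplets.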
DexSim2Real$^{\mathbf{2}}$: Building Explicit World Model for Precise Articulated Object Dexterous Manipulation
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-30 DOI: 10.1109/TRO.2025.3584504
Taoran Jiang;Yixuan Guan;Liqian Ma;Jing Xu;Jiaojiao Meng;Weihang Chen;Zecui Zeng;Lusong Li;Dan Wu;Rui Chen
Abstract: Articulated objects are ubiquitous in daily life. In this article, we present DexSim2Real$^{\mathbf{2}}$, a novel framework for goal-conditioned articulated-object manipulation. The core of the framework is constructing an explicit world model of unseen articulated objects through active interactions, which enables sampling-based model predictive control to plan trajectories toward different goals without requiring demonstrations or reinforcement learning. The framework first predicts an interaction using an affordance network trained on self-supervised interaction data or videos of human manipulation. After the interactions are executed on the real robot to move the object parts, a novel modeling pipeline based on 3-D artificial-intelligence-generated content builds a digital twin of the object in simulation from multiple frames of observations. For dexterous hands, we utilize eigengrasps to reduce the action dimension, enabling more efficient trajectory search. Experiments validate the framework's effectiveness for precise manipulation using a suction gripper, a two-finger gripper, and two dexterous hands. The generalizability of the explicit world model also enables advanced manipulation strategies, such as manipulating with tools.

Vol. 41, pp. 4360–4379 · Citations: 0
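The eigengrasp idea mentioned in the abstract (low-dimensional grasp synergies obtained by PCA over hand postures) can be illustrated generically; the dataset and dimensions below are invented for illustration and are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented dataset: 200 postures of a 20-DoF hand that actually vary
# along only 3 latent directions (plus noise), mimicking grasp synergies.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 20))
postures = latent @ mixing + 0.01 * rng.normal(size=(200, 20))

# PCA via SVD of the centered data: principal axes = "eigengrasps".
mean = postures.mean(axis=0)
_, s, vt = np.linalg.svd(postures - mean, full_matrices=False)
eigengrasps = vt[:3]                       # top-3 synergy directions

# A planner now searches over 3 amplitudes instead of 20 joint angles.
amplitudes = np.array([0.5, -1.0, 0.2])
posture = mean + amplitudes @ eigengrasps  # full 20-DoF posture from 3 numbers

print(posture.shape)                       # (20,)
```

Reducing the search space from 20 joint angles to a few synergy amplitudes is what makes sampling-based trajectory search tractable for dexterous hands.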
LUDO: Low-Latency Understanding of Deformable Objects Using Point Cloud Occupancy Functions
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-24 DOI: 10.1109/TRO.2025.3582837
Pit Henrich;Franziska Mathis-Ullrich;Paul Maria Scheikl
Abstract: Accurately determining the shape of deformable objects and the location of their internal structures is crucial for medical tasks that require precise targeting, such as robotic biopsies. We introduce a method for accurate low-latency understanding of deformable objects (LUDO). LUDO reconstructs objects in their deformed state, including their internal structures, from a single-view point cloud in under 30 ms using occupancy networks, and it provides uncertainty estimates for its predictions. In addition, it offers explainability by highlighting key features in its input observations. Both uncertainty and explainability are important for safety-critical applications, such as surgery. We evaluate LUDO in real-world robotic experiments, achieving a 98.9% success rate in puncturing various regions of interest (ROIs) inside deformable objects. We compare LUDO to a popular baseline and show its superior ROI localization accuracy, training time, and memory requirements. LUDO demonstrates the potential to interact with deformable objects without the need for deformable registration methods.

Vol. 41, pp. 4283–4299 · Citations: 0
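An occupancy function, the representation behind occupancy networks, maps a 3-D query point to the probability of lying inside the object; in LUDO a network predicts this conditioned on a point-cloud observation. A toy analytic stand-in (a sphere, not the paper's learned model):

```python
import numpy as np

def occupancy(points, center=np.zeros(3), radius=1.0):
    """Toy occupancy function: 1.0 inside a sphere, 0.0 outside.

    A learned occupancy network replaces this closed form with a neural
    net conditioned on a single-view point cloud of the deformed object.
    """
    points = np.atleast_2d(points)
    return (np.linalg.norm(points - center, axis=-1) <= radius).astype(float)

# Query a few points; inside -> 1.0, outside -> 0.0.
queries = np.array([[0.0, 0.0, 0.0],
                    [0.5, 0.5, 0.5],
                    [2.0, 0.0, 0.0]])
print(occupancy(queries))   # [1. 1. 0.]
```

Because the function can be queried at arbitrary points, internal ROIs are located by evaluating it densely only where needed, rather than by registering a full deformable template.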
To Lead or to Follow? Adaptive Robot Task Planning in Human–Robot Collaboration
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-24 DOI: 10.1109/TRO.2025.3582816
Ali Noormohammadi-Asl;Stephen L. Smith;Kerstin Dautenhahn
Abstract: Adaptive task planning is fundamental to effective and seamless human–robot collaboration. This article introduces a robot task planning framework that takes into account both human leading/following preferences and human performance, focusing on task allocation and scheduling in collaborative settings. We present a proactive task allocation approach with three primary objectives: 1) enhancing team performance; 2) incorporating human preferences; and 3) upholding a positive human perception of the robot and of the collaborative experience. Through a user study involving an autonomous mobile manipulator working alongside participants in a collaborative scenario, we confirm that the task planning framework attains all three intended goals, thereby contributing to the advancement of adaptive task planning in human–robot collaboration. This article focuses mainly on the first two objectives; the third, participants' perception of the robot, tasks, and collaboration, is discussed in a companion article.

Vol. 41, pp. 4215–4235 · Citations: 0
Physics-Informed Multiagent Reinforcement Learning for Distributed Multirobot Problems
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-24 DOI: 10.1109/TRO.2025.3582836
Eduardo Sebastián;Thai Duong;Nikolay Atanasov;Eduardo Montijano;Carlos Sagüés
Abstract: The networked nature of multirobot systems presents challenges for multiagent reinforcement learning. Centralized control policies do not scale with the number of robots, whereas independent control policies do not exploit the information provided by other robots, performing poorly in cooperative-competitive tasks. In this work, we propose a physics-informed reinforcement learning approach that learns distributed multirobot control policies that are both scalable and make use of all the information available to each robot. Our approach has three key characteristics. First, it imposes a port-Hamiltonian structure on the policy representation, respecting the energy-conservation properties of physical robot systems and the networked nature of robot-team interactions. Second, it uses self-attention to obtain a sparse policy representation able to handle time-varying information at each robot from the interaction graph. Third, we present a soft actor-critic reinforcement learning algorithm parameterized by our self-attention port-Hamiltonian control policy, which accounts for correlations among robots during training while avoiding the need for value-function factorization. Extensive simulations in different multirobot scenarios demonstrate the success of the proposed approach, surpassing previous multirobot reinforcement learning solutions in scalability while achieving similar or superior performance (average cumulative reward up to $2\times$ that of the state of the art, with robot teams $6\times$ larger than at training time). We also validate our approach on multiple real robots in the Georgia Tech Robotarium under imperfect communication, demonstrating zero-shot sim-to-real transfer and scalability across numbers of robots.

Vol. 41, pp. 4499–4517 · Citations: 0
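Self-attention gives the policy scalability because one fixed set of weights handles any number of neighbors. A generic single-head scaled dot-product sketch (not the paper's architecture; weights and dimensions are invented):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention (generic sketch).

    x: (n, d) features, one row per robot in the local interaction graph.
    Works for any n, so the same weights serve teams of any size.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ v

rng = np.random.default_rng(1)
d = 4
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out3 = self_attention(rng.normal(size=(3, d)), wq, wk, wv)  # 3 neighbors
out5 = self_attention(rng.normal(size=(5, d)), wq, wk, wv)  # 5 neighbors, same weights
print(out3.shape, out5.shape)
```

This size independence is what allows a policy trained with small teams to be deployed on teams several times larger, as reported in the abstract.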
GS-LIVO: Real-Time LiDAR, Inertial, and Visual Multisensor Fused Odometry With Gaussian Mapping
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-24 DOI: 10.1109/TRO.2025.3582809
Sheng Hong;Chunran Zheng;Yishu Shen;Changze Li;Fu Zhang;Tong Qin;Shaojie Shen
Abstract: In recent years, 3-D Gaussian splatting (3D-GS) has emerged as a novel scene-representation approach. However, existing vision-only 3D-GS methods often rely on hand-crafted heuristics for point-cloud densification and face challenges with occlusions and with high graphics processing unit (GPU) memory and computation consumption. A light detection and ranging (LiDAR)-inertial-visual sensor configuration has demonstrated superior performance in precise localization and dense mapping by leveraging complementary sensing characteristics: rich texture information from cameras, precise geometric measurements from LiDAR, and high-frequency motion data from the inertial measurement unit. Inspired by this, we propose a novel real-time Gaussian-based simultaneous localization and mapping (SLAM) system. Our map comprises a global Gaussian map and a sliding window of Gaussians, along with real-time odometry based on an iterative error-state Kalman filter (IESKF) that utilizes the Gaussian maps. The global Gaussian map consists of hash-indexed voxels organized in a recursive octree; this hierarchical structure effectively covers sparse spatial volumes while adapting to different levels of detail and scale in the environment. The Gaussian map is efficiently initialized through multisensor fusion and optimized with photometric gradients. The system incrementally maintains a sliding window of Gaussians with minimal graphics-memory usage, significantly reducing GPU computation and memory consumption by optimizing only the map within the sliding window, which enables real-time optimization. Moreover, we implement tightly coupled multisensor-fusion odometry with an IESKF, which leverages real-time updating and rendering of the Gaussian map to achieve competitive localization accuracy. Our system is the first real-time Gaussian-based SLAM framework deployable on resource-constrained embedded systems (implemented in C++/CUDA for efficiency), demonstrated on the NVIDIA Jetson Orin NX platform. The framework achieves real-time performance while maintaining robust multisensor-fusion capabilities. All implementation algorithms, hardware designs, CAD models, and a demo video of our GPU-accelerated system will be publicly available.

Vol. 41, pp. 4253–4268 · Citations: 0
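The hash-indexed voxel idea underlying the global Gaussian map can be sketched minimally; this is a Python stand-in for the paper's C++/CUDA structure, with an invented voxel size and a plain point list in place of Gaussians or octree nodes:

```python
from collections import defaultdict

VOXEL_SIZE = 0.4  # metres per voxel at the coarsest level (assumed value)

def voxel_key(point, size=VOXEL_SIZE):
    """Integer hash key of the voxel containing a 3-D point."""
    return tuple(int(c // size) for c in point)

# Hash-indexed voxel map: key -> payload (here a list of points; in the
# paper, a recursive-octree node holding Gaussians).
voxel_map = defaultdict(list)
for p in [(0.1, 0.1, 0.1), (0.2, 0.3, 0.1), (1.5, 0.0, 0.0)]:
    voxel_map[voxel_key(p)].append(p)

print(len(voxel_map))            # 2 occupied voxels
print(voxel_map[(0, 0, 0)])      # the two nearby points share a voxel
```

Only occupied voxels consume memory, which is what lets the map cover sparse spatial volumes cheaply; the octree then refines each occupied voxel to the needed level of detail.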
Generalizable Motion Policies Through Keypoint Parameterization and Transportation Maps
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-24 DOI: 10.1109/TRO.2025.3582821
Giovanni Franzese;Ravi Prakash;Cosimo Della Santina;Jens Kober
Abstract: Learning from interactive demonstrations has revolutionized the way nonexpert humans teach robots: it is enough to kinesthetically move the robot around to teach pick-and-place, dressing, or cleaning policies. The main challenge, however, is generalizing correctly to novel situations, e.g., different surfaces to clean or different arm postures to dress. This article proposes a novel task parameterization and generalization that transports the original robot policy, i.e., position, velocity, orientation, and stiffness. Unlike the state of the art, only a set of keypoints is tracked during the demonstration and the execution, e.g., a point cloud of the surface to clean. We then fit a nonlinear transformation that deforms the space, and with it the original policy, using the paired source and target point sets. The use of function approximators such as Gaussian processes allows us to generalize, or transport, the policy from every location in space while estimating the uncertainty of the resulting policy due to the limited task keypoints and the reduced number of demonstrations. We compare the algorithm's performance with state-of-the-art task-parameterization alternatives and analyze the effect of different function approximators. We also validate the algorithm on robot manipulation tasks, i.e., dressing arms in different postures, reshelving products at different locations, and cleaning surfaces of different shapes.

Vol. 41, pp. 4557–4573 · Citations: 0
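The core transportation step, fitting a nonlinear deformation from paired source/target keypoints and applying it to demonstrated waypoints, can be sketched with a Gaussian-RBF interpolant. This is a generic stand-in, not the paper's method: the paper uses Gaussian processes, which additionally provide the uncertainty estimates, and all keypoints below are invented:

```python
import numpy as np

def fit_transport(src, dst, sigma=1.0, ridge=1e-8):
    """Fit a Gaussian-RBF deformation mapping paired keypoints src -> dst."""
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # Weights so that src points land exactly on dst (up to the ridge term).
    W = np.linalg.solve(K + ridge * np.eye(len(src)), dst - src)

    def transport(x):
        d2x = ((x[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return x + np.exp(-d2x / (2 * sigma ** 2)) @ W

    return transport

# Invented 2-D keypoints: a unit square stretched along x.
src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
dst = np.array([[0., 0.], [2., 0.], [2., 1.], [0., 1.]])
transport = fit_transport(src, dst)

waypoint = np.array([[0.5, 0.5]])   # a demonstrated policy waypoint
print(transport(waypoint))          # dragged along x with the deformation
```

Every demonstrated pose (and, with the appropriate Jacobian, velocity and stiffness) is pushed through the same map, so one demonstration generalizes to any new keypoint configuration.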
Behavior Cloning-Based Active Scene Recognition via Generated Expert Data With Revision and Prediction for Domestic Robots
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-24 DOI: 10.1109/TRO.2025.3582814
Shaopeng Liu;Chao Huang;Hailong Huang
Abstract: Given the limitations of current methods in accuracy and efficiency for robot scene recognition (SR) in domestic environments, this article proposes an active scene recognition (ASR) approach that allows a robot to recognize scenes correctly from fewer images, even when the robot's position and observation direction are uncertain. ASR includes a behavior-cloning-based action classification model that actively adjusts the robot's view to capture images beneficial for SR. To address the lack of expert data for training the action model, we introduce an expert-data generation method that avoids time-consuming and inefficient manual data collection. In addition, we present a multiview SR method to handle the multiple images resulting from view changes. This method includes an SR model that scores each image, together with a revision-and-prediction method that mitigates the compounding error introduced by behavior cloning and outputs the final recognition result. We conducted numerous comparative experiments and an ablation study in various domestic environments on a public simulation platform to validate our ASR method. The experimental results demonstrate that our approach outperforms state-of-the-art methods in both accuracy and efficiency. Furthermore, our method, trained in simulated environments, generalizes well enough to be transferred directly to the real world without fine-tuning. When deployed on a TurtleBot 4 robot, it achieves precise and efficient SR in diverse real-world environments.

Vol. 41, pp. 4180–4194 · Citations: 0
Help Me Through: Imitation Learning Based Active View Planning to Avoid SLAM Tracking Failures
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-24 DOI: 10.1109/TRO.2025.3582817
Kanwal Naveed;Wajahat Hussain;Irfan Hussain;Donghwan Lee;Muhammad Latif Anjum
Abstract: Large-scale evaluation of state-of-the-art visual simultaneous localization and mapping (SLAM) has shown that tracking performance degrades considerably if the camera view is not adjusted to avoid low-texture areas. Deep reinforcement learning (RL)-based approaches have been proposed to improve the robustness of visual tracking in such unsupervised settings. Our extensive analysis reveals fundamental limitations of RL-based active view planning, especially in transition scenarios (entering or exiting a room, texture-less walls, and lobbies). In challenging transition scenarios, the agent generally remains unable to cross the transition during training, limiting its ability to learn the maneuver. We propose human-supervised RL training (imitation learning) and achieve significantly improved performance after ~50 h of supervised training. To reduce the amount of human supervision required, we also explore fine-tuning our network with an online learning policy: using limited human-supervised training (~20 h) followed by unsupervised fine-tuning (~45 h), we obtain encouraging results. We also release our multimodal, human-supervised training dataset, which contains challenging and diverse transition scenarios and can aid the development of imitation-learning policies for consistent visual tracking. We also release our implementation.

Vol. 41, pp. 4236–4252 · Citations: 0
Robust-Locomotion-By-Logic: Perturbation-Resilient Bipedal Locomotion via Signal Temporal Logic Guided Model Predictive Control
IF 9.4 · Tier 1 · Computer Science
IEEE Transactions on Robotics Pub Date : 2025-06-24 DOI: 10.1109/TRO.2025.3582820
Zhaoyuan Gu;Yuntian Zhao;Yipu Chen;Rongming Guo;Jennifer K. Leestma;Gregory S. Sawicki;Ye Zhao
Abstract: This study introduces a robust planning framework that utilizes a model predictive control (MPC) approach enhanced with signal temporal logic (STL) specifications. This is the first study to apply STL-guided trajectory optimization to bipedal locomotion, specifically designed to handle both translational and orientational perturbations. Existing recovery strategies often struggle to reason about complex task logic and to evaluate locomotion robustness systematically, making them susceptible to failures caused by inappropriate recovery strategies or lack of robustness. To address these issues, we design an analytical stability metric for bipedal locomotion and quantify it with STL specifications, which guide the generation of recovery trajectories of maximum robustness degree. To enable safe and computationally efficient crossed-leg maneuvers, we design data-driven self-leg-collision constraints that are 1000 times faster to evaluate than the traditional inverse-kinematics-based approach. Our framework outperforms a state-of-the-art locomotion controller, a standard MPC without STL, and a linear-temporal-logic-based planner in high-fidelity dynamic simulation, especially in scenarios involving crossed-leg maneuvers. In addition, the Cassie bipedal robot achieves robust performance under horizontal and orientational perturbations, such as those observed in ship motions. These environments are validated in simulation and deployed on hardware. Furthermore, our proposed method demonstrates versatility on stepping stones and terrain-agnostic features on inclined terrains.

Vol. 41, pp. 4300–4321 · Citations: 0
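The robustness degree that STL-guided planners maximize has a standard quantitative semantics: for a predicate written as h(x) ≥ 0, the "always" operator takes the worst-case margin of h over the horizon and "eventually" takes the best case. A generic sketch (the predicate and signal below are invented, not the paper's stability metric):

```python
import numpy as np

def always(h_values):
    """Robustness of G(h >= 0): worst-case margin over the horizon."""
    return np.min(h_values)

def eventually(h_values):
    """Robustness of F(h >= 0): best-case margin over the horizon."""
    return np.max(h_values)

# Invented 1-D example: predicate h(x) = 0.5 - |x| ("stay within 0.5").
x = np.array([0.0, 0.25, -0.25, 0.125, 0.0])
h = 0.5 - np.abs(x)

print(always(h))       # 0.25 -> positive: the spec holds with 0.25 margin
print(eventually(h))   # 0.5
```

Because the robustness degree is a single scalar that is positive exactly when the specification holds, an MPC can maximize it directly to pick the recovery trajectory with the largest safety margin.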