Latest Publications in IEEE Transactions on Robotics

Robust and Scalable Multi-Robot Localization Using Stereo UWB Arrays
IF 10.5, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-07-10 DOI: 10.1109/TRO.2025.3587854
Hanying Zhao;Lingwei Xu;Yi Li;Feiyang Wen;Haoran Gao;Changwu Liu;Jincheng Yu;Yu Wang;Yuan Shen
{"title":"Robust and Scalable Multi-Robot Localization Using Stereo UWB Arrays","authors":"Hanying Zhao;Lingwei Xu;Yi Li;Feiyang Wen;Haoran Gao;Changwu Liu;Jincheng Yu;Yu Wang;Yuan Shen","doi":"10.1109/TRO.2025.3587854","DOIUrl":"10.1109/TRO.2025.3587854","url":null,"abstract":"In environments where robots operate with limited global navigation satellite system accessibility, ultra-wideband (UWB) localization technology is a popular auxiliary solution to assist visual–inertial odometry systems. However, current UWB approaches lack 3-D pairwise localization capability and suffer from rapidly declining localization update rates as the network scales, limiting their effectiveness for swarm robotic applications. This article presents a novel UWB sensor that enables 3-D pairwise localization and a localization scheme that can deliver robust, scalable, and accurate position awareness for multi-robot systems. Our approach begins with calibrating intrinsic UWB errors from hardware deviations and propagation effects, yielding high-accuracy distance and direction measurements. Using these measurements, we perform distributed relative localization through inter- and intra-node cooperation by integrating UWB and inertial measurement unit data. To enable swarm-scale operation, our platform implements the signal-multiplexing network ranging protocol to maximize update rates and network capacity. Experimental results show that our approach achieves centimeter-level localization accuracy at high update rates (100 Hz with UWB only), validating its robustness, scalability, and accuracy for robotic applications.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5645-5662"},"PeriodicalIF":10.5,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144603263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
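The pairwise fusion idea in the abstract above is easy to illustrate in miniature. The following is a hypothetical sketch, not the authors' implementation: a UWB distance-plus-direction measurement is converted into a 3-D relative-position fix and fused with IMU-propagated relative motion in a simple Kalman-style filter. All names and noise values are illustrative assumptions.

```python
# Hypothetical sketch: fusing pairwise UWB distance + direction fixes with
# IMU-propagated relative motion in a 3-D filter. Noise values are made up.
import numpy as np

def uwb_fix(distance, direction):
    """Convert a UWB distance + unit-direction measurement into a 3-D
    relative-position fix p = d * u (expressed in the robot body frame)."""
    u = np.asarray(direction, dtype=float)
    return distance * (u / np.linalg.norm(u))

class RelativePositionFilter:
    """Minimal predict/update loop: the IMU supplies relative displacement
    between UWB fixes; UWB supplies absolute relative-position fixes."""
    def __init__(self, p0, sigma0=1.0):
        self.p = np.asarray(p0, dtype=float)   # relative position estimate
        self.P = np.eye(3) * sigma0**2         # estimate covariance

    def predict(self, delta_p, q=0.05):
        # Propagate with the IMU-integrated relative displacement.
        self.p = self.p + delta_p
        self.P = self.P + np.eye(3) * q**2

    def update(self, distance, direction, r=0.03):
        # Centimeter-level UWB fix; identity measurement model.
        z = uwb_fix(distance, direction)
        R = np.eye(3) * r**2
        K = self.P @ np.linalg.inv(self.P + R)  # Kalman gain
        self.p = self.p + K @ (z - self.p)
        self.P = (np.eye(3) - K) @ self.P

f = RelativePositionFilter(p0=[1.0, 0.0, 0.0])
f.predict(delta_p=np.array([0.01, 0.0, 0.0]))
f.update(distance=1.02, direction=[0.99, 0.05, 0.1])
print(f.p)
```

In the paper this cooperation runs distributedly across many nodes at up to 100 Hz; the sketch only shows the single-pair predict/update structure.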
Sensor Model Identification via Simultaneous Model Selection and State Variable Determination
IF 10.5, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-07-10 DOI: 10.1109/TRO.2025.3588445
Christian Brommer;Alessandro Fornasier;Jan Steinbrener;Stephan Weiss
{"title":"Sensor Model Identification via Simultaneous Model Selection and State Variable Determination","authors":"Christian Brommer;Alessandro Fornasier;Jan Steinbrener;Stephan Weiss","doi":"10.1109/TRO.2025.3588445","DOIUrl":"10.1109/TRO.2025.3588445","url":null,"abstract":"We present a method for the unattended gray-box identification of sensor models commonly used by localization algorithms in the field of robotics. The objective is to determine the most likely sensor model for a time series of unknown measurement data, given an extendable catalog of predefined sensor models. Sensor model definitions may require states for rigid-body calibrations and dedicated reference frames to replicate a measurement based on the robot’s localization state. A health metric is introduced, which verifies the outcome of the selection process in order to detect false positives and facilitate reliable decision-making. In the second stage, an initial guess for identified calibration states is generated, and the necessity of sensor world reference frames is evaluated. The identified sensor model with its parameter information is then used to parameterize and initialize a state estimation application, thus ensuring a more accurate and robust integration of new sensor elements. This method is helpful for inexperienced users who want to identify the source and type of a measurement, sensor calibrations, or sensor reference frames. It will also be important in the field of modular multiagent scenarios and modularized robotic platforms that are augmented by sensor modalities during runtime. Overall, this work aims to provide a simplified integration of sensor modalities to downstream applications and circumvent common pitfalls in the usage and development of localization approaches.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"4902-4921"},"PeriodicalIF":10.5,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11078000","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144603261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
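The catalog-based selection loop described above can be approximated with a standard information criterion. The sketch below is an assumption-laden stand-in for the paper's method: each catalog entry is fit to the measurement series and scored by BIC, and the winner is accepted only if it beats the runner-up by a margin, a crude analogue of the paper's health metric. The catalog entries and the margin value are illustrative.

```python
# Hypothetical sketch of sensor-model selection over a catalog: fit each
# candidate model, score by BIC, and require a decision margin before
# accepting the winner (a toy "health metric").
import numpy as np

def bic(residuals, n_params):
    n = len(residuals)
    rss = float(np.sum(residuals**2))
    return n * np.log(rss / n + 1e-12) + n_params * np.log(n)

def select_model(t, z, catalog, min_margin=10.0):
    """catalog: dict name -> (fit, n_params); fit(t, z) returns residuals."""
    scores = {name: bic(fit(t, z), k) for name, (fit, k) in catalog.items()}
    ranked = sorted(scores.items(), key=lambda kv: kv[1])   # lower BIC wins
    best, runner_up = ranked[0], ranked[1]
    healthy = (runner_up[1] - best[1]) >= min_margin        # margin check
    return best[0], healthy

# Toy catalog: a bias-only model vs. a linear (scale + bias) model.
fit_bias = lambda t, z: z - np.mean(z)
fit_linear = lambda t, z: z - np.polyval(np.polyfit(t, z, 1), t)
catalog = {"bias_only": (fit_bias, 1), "scale_and_bias": (fit_linear, 2)}

t = np.linspace(0, 10, 200)
z = 0.5 * t + 0.2 + np.random.default_rng(0).normal(0, 0.01, t.size)
print(select_model(t, z, catalog))  # expect ("scale_and_bias", True)
```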
Robust Bipedal Walking with Closed-Loop MPC–Adios Stabilizers
IF 7.8, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-07-10 DOI: 10.1109/tro.2025.3588452
Antonin Dallard, Mehdi Benallegue, Nicola Scianca, Fumio Kanehiro, Abderrahmane Kheddar
{"title":"Robust Bipedal Walking with Closed-Loop MPC–Adios Stabilizers","authors":"Antonin Dallard, Mehdi Benallegue, Nicola Scianca, Fumio Kanehiro, Abderrahmane Kheddar","doi":"10.1109/tro.2025.3588452","DOIUrl":"https://doi.org/10.1109/tro.2025.3588452","url":null,"abstract":"","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"149 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144603264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Thin Deformable Object Manipulation With a Multisensory Integrated Soft Hand
IF 10.5, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-07-10 DOI: 10.1109/TRO.2025.3588448
Chao Zhao;Chunli Jiang;Lifan Luo;Shuai Yuan;Qifeng Chen;Hongyu Yu
{"title":"Learning Thin Deformable Object Manipulation With a Multisensory Integrated Soft Hand","authors":"Chao Zhao;Chunli Jiang;Lifan Luo;Shuai Yuan;Qifeng Chen;Hongyu Yu","doi":"10.1109/TRO.2025.3588448","DOIUrl":"10.1109/TRO.2025.3588448","url":null,"abstract":"Robotic manipulation has made significant advancements, with systems demonstrating high precision and repeatability. However, this remarkable precision often fails to translate into efficient manipulation of thin deformable objects. Current robotic systems lack imprecise dexterity, the ability to perform dexterous manipulation through robust and adaptive behaviors that do not rely on precise control. This article explores the singulation and grasping of thin, deformable objects. Here, we propose a novel solution that incorporates passive compliance, touch, and proprioception into thin, deformable object manipulation. Our system employs a soft, underactuated hand that provides passive compliance, facilitating adaptive and gentle interactions to dexterously manipulate deformable objects without requiring precise control. The tactile and force/torque sensors equipped on the hand, along with a depth camera, gather sensory data required for manipulation via the proposed slip module. The manipulation policies are learned directly from raw sensory data via model-free reinforcement learning, bypassing explicit environmental and object modeling. We implement a hierarchical double-loop learning process to enhance learning efficiency by decoupling the action space. Our method was deployed on real-world robots and trained in a self-supervised manner. The resulting policy was tested on a variety of challenging tasks that were beyond the capabilities of prior studies, ranging from displaying suit fabric like a salesperson to turning pages of sheet music for violinists.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"4606-4621"},"PeriodicalIF":10.5,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144603262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
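The abstract's slip module gathers tactile data to drive the learned policy. As a rough illustration of what slip detection from a tactile stream can look like, here is a hypothetical sketch (not the paper's module): flag slip when the high-frequency energy of a tactile shear signal exceeds a threshold. The window size and threshold are invented for the example.

```python
# Hypothetical tactile slip detector: high-pass the shear signal with a first
# difference and flag samples whose local energy exceeds a threshold.
import numpy as np

def detect_slip(shear, window=8, threshold=0.01):
    """shear: 1-D array of uniformly sampled tactile shear readings.
    Returns a boolean array marking samples inside a suspected slip."""
    diffs = np.diff(shear, prepend=shear[0])      # crude high-pass filter
    energy = np.convolve(diffs**2, np.ones(window) / window, mode="same")
    return energy > threshold**2

rng = np.random.default_rng(1)
steady = 0.3 + rng.normal(0, 0.002, 100)            # stable grasp
slipping = 0.3 + np.cumsum(rng.normal(0, 0.02, 30))  # drifting contact
signal = np.concatenate([steady, slipping])
print(np.where(detect_slip(signal))[0][:5])         # first flagged samples
```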
Co-Optimizing Reconfigurable Environments and Policies for Decentralized Multi-Agent Navigation
IF 7.8, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-07-10 DOI: 10.1109/tro.2025.3588449
Zhan Gao, Guang Yang, Amanda Prorok
{"title":"Co-Optimizing Reconfigurable Environments and Policies for Decentralized Multi-Agent Navigation","authors":"Zhan Gao, Guang Yang, Amanda Prorok","doi":"10.1109/tro.2025.3588449","DOIUrl":"https://doi.org/10.1109/tro.2025.3588449","url":null,"abstract":"","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"28 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144603266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SCALER: Versatile Multilimbed Robot for Free-Climbing in Extreme Terrains
IF 10.5, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-07-10 DOI: 10.1109/TRO.2025.3588446
Yusuke Tanaka;Yuki Shirai;Alexander Schperberg;Xuan Lin;Dennis Hong
{"title":"SCALER: Versatile Multilimbed Robot for Free-Climbing in Extreme Terrains","authors":"Yusuke Tanaka;Yuki Shirai;Alexander Schperberg;Xuan Lin;Dennis Hong","doi":"10.1109/TRO.2025.3588446","DOIUrl":"10.1109/TRO.2025.3588446","url":null,"abstract":"This article presents Spine-enhanced Climbing Autonomous Limbed Exploration Robot (SCALER), a versatile free-climbing multilimbed robot that is designed to achieve tightly coupled simultaneous locomotion and dexterous grasping. While existing quadrupedal-limbed robots have demonstrated impressive dexterous capabilities, achieving a balance between power-demanding locomotion and precise grasping remains a critical challenge. We design a torso mechanism and a parallel–serial limb to meet the conflicting requirements that pose unique challenges in hardware design. SCALER employs underactuated two-fingered GOAT grippers that can mechanically adapt and offer seven modes of grasping, enabling SCALER to traverse extreme terrains with multimodal grasping strategies. We study the whole-body approach, where SCALER utilizes its body and limbs to generate additional forces for stable grasping in various environments, thereby further enhancing its versatility. Furthermore, we improve the GOAT gripper actuation speed to realize more dynamic climbing in a closed-loop control fashion. With these proposed technologies, SCALER can traverse vertical, overhanging, upside-down, slippery terrains and bouldering walls with nonconvex-shaped climbing holds under the Earth’s gravity.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5189-5208"},"PeriodicalIF":10.5,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11078008","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144603265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
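The whole-body idea above, using the body and limbs to generate extra contact forces for stable grasping, boils down to a force-distribution problem. Below is a hypothetical toy version, not the SCALER implementation: find the minimum-norm limb contact forces that balance gravity, then verify each force lies inside its friction cone. Torques are ignored and the contact geometry is invented for brevity.

```python
# Hypothetical whole-body force distribution: minimum-norm contact forces
# balancing gravity, followed by a friction-cone feasibility check.
import numpy as np

def distribute_forces(n_contacts, mass, g=9.81):
    # Force equilibrium only: sum_i f_i = [0, 0, m*g].
    A = np.hstack([np.eye(3)] * n_contacts)   # 3 x 3n equilibrium map
    b = np.array([0.0, 0.0, mass * g])
    f = np.linalg.pinv(A) @ b                 # minimum-norm solution
    return f.reshape(n_contacts, 3)

def in_friction_cone(f, normal, mu):
    fn = float(f @ normal)                    # normal component
    ft = np.linalg.norm(f - fn * normal)      # tangential component
    return fn > 0 and ft <= mu * fn

forces = distribute_forces(n_contacts=4, mass=10.0)
normal = np.array([0.0, 0.0, 1.0])            # flat-ground contact normals
print(forces)
print(all(in_friction_cone(f, normal, mu=0.8) for f in forces))
```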
Perceptive Mixed-Integer Footstep Control for Underactuated Bipedal Walking on Rough Terrain
IF 9.4, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-07-10 DOI: 10.1109/TRO.2025.3587998
Brian Acosta;Michael Posa
{"title":"Perceptive Mixed-Integer Footstep Control for Underactuated Bipedal Walking on Rough Terrain","authors":"Brian Acosta;Michael Posa","doi":"10.1109/TRO.2025.3587998","DOIUrl":"10.1109/TRO.2025.3587998","url":null,"abstract":"Traversing rough terrain requires dynamic bipeds to stabilize themselves through foot placement without stepping into unsafe areas. Planning these footsteps online is challenging given the nonconvexity of the safe terrain and imperfect perception and state estimation. This article addresses these challenges with a full-stack perception and control system for achieving underactuated walking on discontinuous terrain. First, we develop model-predictive footstep control, a single mixed-integer quadratic program, which assumes a convex polygon terrain decomposition to optimize over discrete foothold choice, footstep position, ankle torque, template dynamics, and footstep timing at over 100 Hz. We then propose a novel approach for generating convex polygon terrain decompositions online. Our perception stack decouples safe-terrain classification from fitting planar polygons, generating a temporally consistent terrain segmentation in real time using a single CPU thread. We demonstrate the performance of our perception and control stack through outdoor experiments with the underactuated biped Cassie, achieving state of the art perceptive bipedal walking on discontinuous terrain.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"4518-4537"},"PeriodicalIF":9.4,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144603351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
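The discrete-continuous structure of the footstep problem above is worth making concrete. The paper solves a single mixed-integer QP over foothold choice, footstep position, ankle torque, template dynamics, and timing; the hypothetical sketch below replaces the integer part by enumerating convex safe regions (axis-aligned boxes stand in for the polygon decomposition, so each continuous subproblem has a closed-form projection). It illustrates only the foothold-choice-plus-position layer.

```python
# Hypothetical discrete-continuous footstep choice: for each convex safe
# region, project the desired footstep onto the region, then keep the best.
import numpy as np

def project_to_box(p, lo, hi):
    """Closed-form QP: Euclidean projection onto an axis-aligned box."""
    return np.clip(p, lo, hi)

def choose_footstep(target, boxes):
    """boxes: list of (lo, hi) corner pairs defining safe regions."""
    best = None
    for lo, hi in boxes:
        candidate = project_to_box(target, np.array(lo), np.array(hi))
        cost = float(np.sum((candidate - target) ** 2))
        if best is None or cost < best[0]:
            best = (cost, candidate)
    return best[1]

safe_regions = [((0.0, 0.0), (0.3, 0.2)),   # stepping stone 1
                ((0.5, 0.0), (0.8, 0.2))]   # stepping stone 2
print(choose_footstep(np.array([0.45, 0.1]), safe_regions))  # -> [0.5, 0.1]
```

A real MIQP encodes the region choice with binary variables and big-M constraints so it can be optimized jointly with dynamics and timing rather than enumerated.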
BEVPlace++: Fast, Robust, and Lightweight LiDAR Global Localization for Autonomous Ground Vehicles
IF 9.4, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-07-02 DOI: 10.1109/TRO.2025.3585385
Lun Luo;Si-Yuan Cao;Xiaorui Li;Jintao Xu;Rui Ai;Zhu Yu;Xieyuanli Chen
{"title":"BEVPlace++: Fast, Robust, and Lightweight LiDAR Global Localization for Autonomous Ground Vehicles","authors":"Lun Luo;Si-Yuan Cao;Xiaorui Li;Jintao Xu;Rui Ai;Zhu Yu;Xieyuanli Chen","doi":"10.1109/TRO.2025.3585385","DOIUrl":"10.1109/TRO.2025.3585385","url":null,"abstract":"This article introduces BEVPlace++, a novel, fast, and robust light detection and ranging (LiDAR) global localization method for autonomous ground vehicles (AGV). It uses lightweight convolutional neural networks (CNNs) on bird’s eye view (BEV) image-like representations of LiDAR data to achieve accurate global localization through place recognition, followed by 3-degrees of freedom (DoF) pose estimation. Our detailed analyses reveal an interesting fact that CNNs are inherently effective at extracting distinctive features from LiDAR BEV images. Remarkably, keypoints of two BEV images with large translations can be effectively matched using CNN-extracted features. Building on this insight, we design a rotation equivariant module (REM) to obtain distinctive features while enhancing robustness to rotational changes. A rotation equivariant and invariant network (REIN) is then developed by cascading REM and a descriptor generator, NetVLAD, to sequentially generate rotation equivariant local features and rotation invariant global descriptors. The global descriptors are used first to achieve robust place recognition, and then local features are used for accurate pose estimation. Experimental results on seven public datasets and our AGV platform demonstrate that BEVPlace++, even when trained on a small dataset (3000 frames of KITTI) only with place labels, generalizes well to unseen environments, performs consistently across different days and years, and adapts to various types of LiDAR scanners. BEVPlace++ achieves state-of-the-art performance in multiple tasks, including place recognition, loop closure detection, and global localization. In addition, BEVPlace++ is lightweight, runs in real-time, and does not require accurate pose supervision, making it highly convenient for deployment.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"4479-4498"},"PeriodicalIF":9.4,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144547048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
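The BEV image-like representation the method builds on is straightforward to construct. Here is a hypothetical sketch of one common variant, discretizing LiDAR points into a top-down grid with normalized point density as pixel intensity; the grid extent and resolution are illustrative assumptions, not the paper's settings.

```python
# Hypothetical BEV density image from a raw LiDAR scan.
import numpy as np

def lidar_to_bev(points, extent=40.0, resolution=0.4):
    """points: (N, 3) array in the sensor frame. Returns a square BEV
    density image covering [-extent, extent] meters in x and y."""
    size = int(2 * extent / resolution)
    keep = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent)
    xy = points[keep, :2]
    ij = ((xy + extent) / resolution).astype(int)   # meters -> pixel indices
    img = np.zeros((size, size), dtype=np.float32)
    np.add.at(img, (ij[:, 1], ij[:, 0]), 1.0)       # accumulate point counts
    return img / max(img.max(), 1.0)                 # normalize to [0, 1]

scan = np.random.default_rng(2).uniform(-35, 35, size=(10000, 3))
bev = lidar_to_bev(scan)
print(bev.shape, bev.max())                          # (200, 200) 1.0
```

In BEVPlace++ such images are fed to the rotation equivariant module and NetVLAD to produce local features and global descriptors.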
Event-Based Visual-Inertial State Estimation for High-Speed Maneuvers
IF 9.4, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-06-30 DOI: 10.1109/TRO.2025.3584544
Xiuyuan Lu;Yi Zhou;Jiayao Mai;Kuan Dai;Yang Xu;Shaojie Shen
{"title":"Event-Based Visual-Inertial State Estimation for High-Speed Maneuvers","authors":"Xiuyuan Lu;Yi Zhou;Jiayao Mai;Kuan Dai;Yang Xu;Shaojie Shen","doi":"10.1109/TRO.2025.3584544","DOIUrl":"10.1109/TRO.2025.3584544","url":null,"abstract":"Neuromorphic event-based cameras are bioinspired visual sensors with asynchronous pixels and extremely high temporal resolution. Such favorable properties make them an excellent choice for solving state estimation tasks under high-speed maneuvers. However, failures of camera pose tracking are frequently witnessed in state-of-the-art event-based visual odometry systems when the local map cannot be updated timely or feature matching is unreliable. One of the biggest roadblocks in this field is the absence of efficient and robust methods for data association without imposing any assumptions on the environment. This problem seems, however, unlikely to be addressed as in standard vision because of the motion-dependent nature of event data. To address this, we propose a map-free design for event-based visual-inertial state estimation in this article. Instead of estimating camera position, we find that recovering the instantaneous linear velocity aligns better with event cameras’ differential working principle. The proposed system uses raw data from a stereo event camera and an inertial measurement unit (IMU) as input, and adopts a dual-end architecture. The front-end preprocesses raw events and executes the computation of normal flow and depth information. To handle the temporally nonequispaced event data and establish association with temporally nonaligned IMU’s measurements, the back-end employs a continuous-time formulation and a sliding-window scheme that can progressively estimate the linear velocity and IMU’s bias. Experiments on synthetic and real data show our method achieves low-latency, metric-scale velocity estimation. To the best of the authors’ knowledge, this is the first real-time, purely event-based visual-inertial state estimator for high-speed maneuvers, requiring only sufficient textures and imposing no additional constraints on either the environment or motion pattern.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"4439-4458"},"PeriodicalIF":9.4,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144520667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
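The core geometric step, recovering linear velocity from normal flow and depth, admits a compact batch version. Assuming rotation has been compensated by the IMU, the translational motion field at a normalized pixel x = (x, y) with depth Z is u = A(x) v / Z with A(x) = [[-1, 0, x], [0, -1, y]], so each normal-flow measurement n·u = m gives one linear constraint on v. The sketch below is a hypothetical batch least-squares version of that idea, not the paper's continuous-time sliding-window estimator.

```python
# Hypothetical recovery of metric linear velocity v from normal-flow
# constraints: (n^T A(x) / Z) v = m, stacked over measurements.
import numpy as np

def velocity_from_normal_flow(pixels, depths, normals, magnitudes):
    rows, rhs = [], []
    for (x, y), Z, n, m in zip(pixels, depths, normals, magnitudes):
        A = np.array([[-1.0, 0.0, x], [0.0, -1.0, y]])
        rows.append(n @ A / Z)        # one 1x3 constraint on v
        rhs.append(m)
    v, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return v

# Synthetic check: generate normal flow from a known velocity, recover it.
rng = np.random.default_rng(3)
v_true = np.array([0.5, -0.2, 1.0])
pixels = rng.uniform(-0.5, 0.5, size=(50, 2))
depths = rng.uniform(2.0, 5.0, size=50)
normals = rng.normal(size=(50, 2))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
flows = [np.array([[-1, 0, x], [0, -1, y]]) @ v_true / Z
         for (x, y), Z in zip(pixels, depths)]
magnitudes = [n @ u for n, u in zip(normals, flows)]
print(velocity_from_normal_flow(pixels, depths, normals, magnitudes))  # ~v_true
```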
Can't Touch This: Real-Time, Safe Motion Planning and Control for Manipulators Under Uncertainty
IF 7.8, CAS Tier 1 (Computer Science)
IEEE Transactions on Robotics Pub Date: 2025-06-30 DOI: 10.1109/tro.2025.3584557
Jonathan Michaux, Patrick Holmes, Bohao Zhang, Che Chen, Baiyue Wang, Shrey Sahgal, Tiancheng Zhang, Sidhartha Dey, Shreyas Kousik, Ram Vasudevan
{"title":"Can't Touch This: Real-Time, Safe Motion Planning and Control for Manipulators Under Uncertainty","authors":"Jonathan Michaux, Patrick Holmes, Bohao Zhang, Che Chen, Baiyue Wang, Shrey Sahgal, Tiancheng Zhang, Sidhartha Dey, Shreyas Kousik, Ram Vasudevan","doi":"10.1109/tro.2025.3584557","DOIUrl":"https://doi.org/10.1109/tro.2025.3584557","url":null,"abstract":"","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"633 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144520461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0