IEEE Transactions on Robotics: Latest Publications

SandWorm: Event-Based Visuotactile Perception With Active Vibration for Screw-Actuated Robot in Granular Media
IF 10.5 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-28 DOI: 10.1109/TRO.2026.3658294
Shoujie Li;Changqing Guo;Junhao Gong;Chenxin Liang;Wenhua Ding;Wenbo Ding
{"title":"SandWorm: Event-Based Visuotactile Perception With Active Vibration for Screw-Actuated Robot in Granular Media","authors":"Shoujie Li;Changqing Guo;Junhao Gong;Chenxin Liang;Wenhua Ding;Wenbo Ding","doi":"10.1109/TRO.2026.3658294","DOIUrl":"10.1109/TRO.2026.3658294","url":null,"abstract":"Perception in granular media remains challenging due to unpredictable particle dynamics. To address this challenge, we present SandWorm, a biomimetic screwactuated robot augmented by peristaltic motion to enhance locomotion, and sandworm tactile sensor (SWTac), a novel event-based visuotactile sensor with an actively vibrated elastomer. The event camera is mechanically decoupled from vibrations by a spring isolation mechanism, enabling high-quality tactile imaging of both dynamic and stationary objects. For algorithm design, we propose an IMU-guided temporal filter to enhance imaging consistency, improving masked signal-to-noise ratio (MSNR) by 24%. Moreover, we systematically optimize SWTac with vibration parameters, event camera settings, and elastomer properties. Motivated by asymmetric edge features, we also implement contact surface estimation by U-Net. Experimental validation demonstrates SWTac’s 0.2 mm texture resolution, 98% stone classification accuracy, and 0.15 N force estimation error, while SandWorm demonstrates versatile locomotion (up to 12.5 mm/s) in challenging terrains, successfully executes pipeline dredging and subsurface exploration in complex granular media (observed 90% success rate). Field experiments further confirm the system’s practical performance.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"42 ","pages":"1008-1027"},"PeriodicalIF":10.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146070593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
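The IMU-guided temporal filter is described only at a high level in the abstract. The following minimal Python sketch shows one plausible reading of the idea: align each event frame with an IMU-derived pixel shift, then average with exponential weights. All names, the pure-translation shift model, and the weighting scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def imu_guided_temporal_filter(event_frames, imu_shifts, decay=0.6):
    """Fuse successive event frames into one denoised tactile image.

    event_frames: list of HxW arrays of accumulated event counts.
    imu_shifts:   list of (dy, dx) integer pixel offsets estimated from
                  the IMU-measured vibration phase (assumption: a pure
                  translational shift model).
    decay:        exponential weight applied to older frames.
    """
    h, w = event_frames[0].shape
    acc = np.zeros((h, w), dtype=np.float64)
    weight = 0.0
    for k, (frame, (dy, dx)) in enumerate(zip(reversed(event_frames),
                                              reversed(imu_shifts))):
        # Undo the vibration-induced shift before accumulating.
        aligned = np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
        wk = decay ** k          # newest frame gets the largest weight
        acc += wk * aligned
        weight += wk
    return acc / weight

# Toy usage: three noisy frames of a static contact, shifted by vibration.
rng = np.random.default_rng(0)
base = np.zeros((64, 64))
base[30:34, 20:44] = 5.0                               # an edge-like contact
frames, shifts = [], []
for dy in (-2, 0, 2):                                  # vibration sweep
    noisy = np.roll(base, dy, axis=0) + rng.poisson(0.2, base.shape)
    frames.append(noisy)
    shifts.append((dy, 0))
filtered = imu_guided_temporal_filter(frames, shifts)
```

Aligning frames before averaging is what lets a temporal filter raise the signal-to-noise ratio without blurring edges that the vibration sweeps across the sensor.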
FilMBot: A High-Speed Soft Parallel Robotic Micromanipulator
IF 10.5 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-28 DOI: 10.1109/TRO.2026.3658292
Jiangkun Yu;Houari Bettahar;Hakan Kandemir;Quan Zhou
{"title":"FilMBot: A High-Speed Soft Parallel Robotic Micromanipulator","authors":"Jiangkun Yu;Houari Bettahar;Hakan Kandemir;Quan Zhou","doi":"10.1109/TRO.2026.3658292","DOIUrl":"10.1109/TRO.2026.3658292","url":null,"abstract":"Soft robotic manipulators are generally slow despite their great adaptability, resilience, and compliance. This limitation also extends to current soft robotic micromanipulators. Here, we introduce FilMBot, a 3-DOF film-based, electromagnetically actuated, soft kinematic robotic micromanipulator achieving speeds up to 2117°/s and 2456°/s in <italic>α</i> and <italic>β</i> angular motions, with corresponding linear velocities of 1.61 m/s and 1.92 m/s using a 4-cm needle end-effector, 0.54 m/s along the <italic>Z</i>-axis, and 1.57 m/s during <italic>Z</i>-axis morph switching. The robot can reach ∼1.50 m/s in path-following tasks, with an operational bandwidth below ∼30 Hz, and remains responsive at 50 Hz. It demonstrates high precision (∼6.3 μm, or ∼0.05% of its workspace) in path-following tasks, with precision remaining largely stable across frequencies. The novel combination of the low-stiffness soft kinematic film structure and strong electromagnetic actuation in FilMBot opens new avenues for soft robotics. Furthermore, its simple construction and inexpensive, readily accessible components could broaden the application of micromanipulators beyond current academic and professional users.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"42 ","pages":"1145-1157"},"PeriodicalIF":10.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11364173","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146070594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Transactions on Robotics Publication Information
IF 7.8 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-27 DOI: 10.1109/tro.2024.3515233
{"title":"IEEE Transactions on Robotics Publication Information","authors":"","doi":"10.1109/tro.2024.3515233","DOIUrl":"https://doi.org/10.1109/tro.2024.3515233","url":null,"abstract":"","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"293 1","pages":"C2-C2"},"PeriodicalIF":7.8,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146056282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning From Videos Through Graph-to-Graphs Generative Modeling for Robotic Manipulation
IF 10.5 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-27 DOI: 10.1109/TRO.2026.3658211
Guangyan Chen;Meiling Wang;Te Cui;Chengcai Yang;Mengxiao Hu;Haoyang Lu;Zicai Peng;Tianxing Zhou;Xinran Jiang;Yi Yang;Yufeng Yue
{"title":"Learning From Videos Through Graph-to-Graphs Generative Modeling for Robotic Manipulation","authors":"Guangyan Chen;Meiling Wang;Te Cui;Chengcai Yang;Mengxiao Hu;Haoyang Lu;Zicai Peng;Tianxing Zhou;Xinran Jiang;Yi Yang;Yufeng Yue","doi":"10.1109/TRO.2026.3658211","DOIUrl":"10.1109/TRO.2026.3658211","url":null,"abstract":"Learning from demonstration is a powerful method for robotic skill acquisition. Nevertheless, a critical limitation lies in the substantial costs associated with gathering demonstration datasets, typically action-labeled robot data, which create a fundamental constraint in the field. Video data offer a compelling solution as an alternative rich data source, containing diverse behavioral and physical knowledge. This study introduces G3M, an innovative framework that exploits video data via <underline>G</u>raph-to-<underline>G</u>raphs <underline>G</u>enerative <underline>M</u>odeling, which pretrains models to generate future graphs conditioned on the graph within a video frame. The proposed G3M abstracts video frame into graph representations by identifying object and visual action vertices for capturing state information. It then effectively models internal structures and spatial relationships present in these graph constructions, with the objective of predicting forthcoming graphs. The generated graphs function as conditional inputs that guide the control policy in determining robotic behaviors. This concise method effectively encodes critical spatial relationships while facilitating accurate prediction of subsequent graph sequences, thus allowing the development of resilient control policy despite constraints in action-annotated training samples. Furthermore, these transferable graph representations enable the effective extraction of manipulation knowledge through human videos as well as recordings from robots with different embodiments. The experimental results demonstrate that G3M attains superior performance using merely 20% action-labeled data relative to comparable approaches. Moreover, our method outperforms the state-of-the-art method, showing performance gains exceeding 19% in simulated environments and 23% in real-world experiments, while delivering improvements of over 35% in cross-embodiment transfer experiments and exhibiting strong performance on long-horizon tasks.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"42 ","pages":"1158-1177"},"PeriodicalIF":10.5,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146056286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
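The graph-to-graphs idea can be pictured with a toy data structure: each frame becomes a graph of object and visual-action vertices, and a model maps the current graph to a future one. The sketch below uses an untrained linear map as a stand-in for G3M's learned generative model; every name, feature shape, and the linear predictor itself are assumptions for illustration only.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """Minimal stand-in for a per-frame graph (assumption: the paper's
    actual vertex features and edge semantics differ)."""
    object_feats: np.ndarray        # (num_objects, d) object vertices
    action_feats: np.ndarray        # (num_actions, d) visual-action vertices
    edges: list = field(default_factory=list)  # (i, j) spatial relations

def linear_next_graph(g: SceneGraph, W: np.ndarray) -> SceneGraph:
    """Toy 'graph-to-graph' step: one shared linear map applied to every
    vertex feature. G3M instead uses a learned generative model."""
    return SceneGraph(g.object_feats @ W, g.action_feats @ W, list(g.edges))

# Usage: roll a graph forward two frames with a random (untrained) map.
rng = np.random.default_rng(1)
g0 = SceneGraph(rng.normal(size=(3, 8)), rng.normal(size=(1, 8)), [(0, 1)])
W = rng.normal(scale=0.1, size=(8, 8)) + np.eye(8)
g1 = linear_next_graph(g0, W)
g2 = linear_next_graph(g1, W)
```

In the paper's pipeline, the predicted graphs then condition a control policy, so the policy itself never needs dense action labels at pretraining time.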
IEEE Transactions on Robotics Information for Authors
IF 10.5 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-26 DOI: 10.1109/TRO.2025.3640478
{"title":"IEEE Transactions on Robotics Information for Authors","authors":"","doi":"10.1109/TRO.2025.3640478","DOIUrl":"10.1109/TRO.2025.3640478","url":null,"abstract":"","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"C3-C3"},"PeriodicalIF":10.5,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11364048","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146056284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Risk-Aware Routing for a Robot in a Shared Dynamic Environment
IF 10.5 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-26 DOI: 10.1109/TRO.2026.3658295
Elena Stracca;Giorgio Grioli;Lucia Pallottino;Paolo Salaris
{"title":"Risk-Aware Routing for a Robot in a Shared Dynamic Environment","authors":"Elena Stracca;Giorgio Grioli;Lucia Pallottino;Paolo Salaris","doi":"10.1109/TRO.2026.3658295","DOIUrl":"10.1109/TRO.2026.3658295","url":null,"abstract":"This article explores the challenge of optimal routing for a mobile robot navigating a dynamic and shared human environment. The primary goal is to minimize the risk of performance degradation during motion, such as delays in completing tasks due to the need for safe or acceptable human– robot encounters. The problem is formulated as a graph whose edge costs become progressively known only as the robot moves through the environment. We model this problem as a Markov decision process (MDP), enabling an offline evaluation of the expected cost of alternative routes based on statistical information about human spatial distributions and possible observations at each intersection. This compact state representation scales linearly with the number of intersections in the map. Since the memoryless property of the MDP may induce loops during online execution, we compute an offline policy and introduce an online policy adaptation mechanism to prevent cyclic behaviors. Extensive simulations across environments of different complexity, and using data collected from real-world experiments, demonstrate that our approach outperforms reactive and advanced state-of-the-art planners in terms of either performance or scalability.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"42 ","pages":"1048-1067"},"PeriodicalIF":10.5,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11364161","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146056285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
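The abstract casts routing as an MDP evaluated offline. As a generic textbook illustration (not the authors' formulation, which also models observations at intersections and adapts the policy online), the following sketch runs value iteration on a small finite MDP; treating states as intersections and actions as outgoing corridors is an assumption.

```python
import numpy as np

def value_iteration(P, cost, gamma=0.99, tol=1e-8):
    """Generic finite-state value iteration.

    P:    (A, S, S) transition probabilities P[a, s, s'].
    cost: (A, S) expected immediate cost of action a in state s.
    Returns the optimal value function and the greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        Q = cost + gamma * (P @ V)     # (A, S): Bellman backup per action
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=0)
        V = V_new

# Toy 3-state map: from state 0, action 0 is a short corridor that may
# detour through state 1; action 1 is a longer but direct corridor.
P = np.array([[[0.0, 0.8, 0.2], [0, 0, 1], [0, 0, 1]],
              [[0.0, 0.0, 1.0], [0, 0, 1], [0, 0, 1]]])
cost = np.array([[1.0, 1.0, 0.0],     # action 0
                 [2.5, 1.0, 0.0]])    # action 1
V, policy = value_iteration(P, cost)
```

Because the expected costs are evaluated offline over all routes, the online phase reduces to following (and, in the paper, adapting) a precomputed policy.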
Flying Co-Stereo: Enabling Long-Range Aerial Dense Mapping via Collaborative Stereo Vision of Dynamic-Baseline
IF 10.5 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-26 DOI: 10.1109/TRO.2026.3658293
Zhaoying Wang;Xingxing Zuo;Wei Dong
{"title":"Flying Co-Stereo: Enabling Long-Range Aerial Dense Mapping via Collaborative Stereo Vision of Dynamic-Baseline","authors":"Zhaoying Wang;Xingxing Zuo;Wei Dong","doi":"10.1109/TRO.2026.3658293","DOIUrl":"10.1109/TRO.2026.3658293","url":null,"abstract":"For unmanned aerial vehicle (UAV) swarms operating in large-scale unknown environments, lightweight long-range mapping is crucial for enhancing safe navigation. Traditional stereo cameras constrained by a short fixed baseline suffer from limited perception ranges. To overcome this limitation, we present <italic>flying collaborative stereo (flying co-stereo)</i>, a cross-agent collaborative stereo vision system that leverages the wide-baseline spatial configuration of two UAVs for long-range dense mapping. However, realizing this capability presents several challenges. First, the independent motion of each UAV leads to a dynamic and continuously changing stereo baseline, making accurate and robust estimation difficult. Second, efficiently establishing feature correspondences across independently moving viewpoints is constrained by the limited computational capacity of onboard edge devices. To tackle these challenges, we introduce the <italic>flying co-stereo</i> system within a novel <italic>collaborative dynamic-baseline stereo mapping (CDBSM)</i> framework. We first develop a dual-spectrum visual-inertial-ranging estimator to achieve robust and precise online estimation of the baseline between the two UAVs. In addition, we propose a hybrid feature association strategy that integrates cross-agent feature matching—based on a computationally intensive yet accurate deep neural network—with intra-agent, optical-flow-based lightweight feature tracking. Furthermore, benefiting from the wide baselines between the two UAVs, our system accurately recovers long-range covisible 3-D sparse points. We then employ a monocular depth network to predict up-to-scale dense depth maps, which are refined using accurate metric scales derived from the triangulated sparse points via exponential fitting. Extensive real-world experiments demonstrate that the proposed <italic>flying co-stereo</i> system achieves robust and accurate dynamic baseline estimation in complex environments while maintaining efficient feature matching with resource-constrained computers under varying viewpoints. Ultimately, our system achieves dense 3-D mapping at distances of up to 70 m with a relative error between 2.3% and 9.7%. This corresponds to up to a 350% improvement in maximum perception range and up to a 450% increase in coverage area compared to conventional stereo vision systems with fixed compact baselines.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"42 ","pages":"951-970"},"PeriodicalIF":10.5,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146056283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
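The scale-refinement step above (recovering metric scale for an up-to-scale monocular depth map from triangulated sparse points via exponential fitting) can be illustrated with a simple log-space least-squares fit. This is a hedged stand-in: the paper's exact fitting model, sampling strategy, and variable names are not reproduced here.

```python
import numpy as np

def fit_depth_scale(pred_sparse, metric_sparse):
    """Fit metric = a * pred**b by least squares in log-space.

    pred_sparse:   up-to-scale depths from the monocular network, sampled
                   at pixels where triangulated sparse points exist.
    metric_sparse: metric depths of those triangulated points.
    (Assumption: a log-linear power fit as a stand-in for the paper's
    exponential fitting.)
    """
    x, y = np.log(pred_sparse), np.log(metric_sparse)
    b, log_a = np.polyfit(x, y, 1)       # slope first, then intercept
    return np.exp(log_a), b

def rescale_depth(pred_dense, a, b):
    """Apply the fitted mapping to the full dense depth map."""
    return a * pred_dense ** b

# Usage with synthetic data: true mapping metric = 3.0 * pred, plus noise.
rng = np.random.default_rng(2)
pred = rng.uniform(0.5, 2.0, size=50)
metric = 3.0 * pred * np.exp(rng.normal(0, 0.02, 50))
a, b = fit_depth_scale(pred, metric)
dense_metric = rescale_depth(rng.uniform(0.5, 2.0, (4, 4)), a, b)
```

Fitting in log-space keeps the residuals multiplicative, which matches how monocular depth error typically grows with distance.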
EROAM: Event-Based Camera Rotational Odometry and Mapping in Real Time
IF 10.5 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-16 DOI: 10.1109/TRO.2026.3654619
Wanli Xing;Shijie Lin;Linhan Yang;Zeqing Zhang;Yanjun Du;Maolin Lei;Yipeng Pan;Chen Wang;Jia Pan
{"title":"EROAM: Event-Based Camera Rotational Odometry and Mapping in Real Time","authors":"Wanli Xing;Shijie Lin;Linhan Yang;Zeqing Zhang;Yanjun Du;Maolin Lei;Yipeng Pan;Chen Wang;Jia Pan","doi":"10.1109/TRO.2026.3654619","DOIUrl":"10.1109/TRO.2026.3654619","url":null,"abstract":"This article presents EROAM, a novel event-based rotational odometry and mapping system that achieves real time, accurate camera rotation estimation. Unlike existing approaches that rely on event generation models or contrast maximization, EROAM employs a spherical event representation by projecting events onto a unit sphere and introduces event spherical iterative closest point, a novel geometric optimization framework designed specifically for event camera data. The spherical representation simplifies rotational motion formulation while operating in a continuous spherical domain, enabling enhanced spatial resolution. Our system features an efficient map management approach using incremental k-d tree structures and intelligent regional density control, ensuring optimal computational performance during long-term operation. Combined with parallel point-to-line optimization, EROAM achieves efficient computation without compromising accuracy. Extensive experiments on both synthetic and real-world datasets show that EROAM significantly outperforms state-of-the-art methods in terms of accuracy, robustness, and computational efficiency. Our method maintains consistent performance under challenging conditions, including high angular velocities and extended sequences, where other methods often fail or show significant drift. In addition, EROAM produces high-quality panoramic reconstructions with preserved fine structural details.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"42 ","pages":"931-950"},"PeriodicalIF":10.5,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145993154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
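The spherical event representation named in the abstract amounts to back-projecting each event's pixel onto the unit sphere. A minimal sketch, assuming an undistorted pinhole camera with known intrinsics:

```python
import numpy as np

def events_to_sphere(xs, ys, K):
    """Back-project pixel events onto the unit sphere.

    xs, ys: 1-D arrays of event pixel coordinates.
    K:      3x3 pinhole intrinsic matrix (assumption: no lens distortion).
    Returns an (N, 3) array of unit bearing vectors.
    """
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).astype(float)
    rays = np.linalg.inv(K) @ pix              # normalized image coordinates
    return (rays / np.linalg.norm(rays, axis=0)).T

# Usage: a handful of events from a toy 640x480 camera.
K = np.array([[320.0, 0.0, 320.0],
              [0.0, 320.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = events_to_sphere(np.array([10, 320, 630]), np.array([5, 240, 475]), K)
```

Under rotation-only motion, two such bearing-vector sets differ by a pure rotation, which is what makes an ICP-style alignment on the sphere well-posed.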
Low-Latency Event-Based Velocimetry for Quadrotor Control in a Narrow Pipe
IF 7.8 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-16 DOI: 10.1109/tro.2026.3654764
Leonard Bauersfeld;Davide Scaramuzza
{"title":"Low-Latency Event-Based Velocimetry for Quadrotor Control in a Narrow Pipe","authors":"Leonard Bauersfeld, Davide Scaramuzza","doi":"10.1109/tro.2026.3654764","DOIUrl":"https://doi.org/10.1109/tro.2026.3654764","url":null,"abstract":"","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"269 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145993415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Koopman Operators in Robot Learning
IF 10.5 · CAS Q1 · Computer Science
IEEE Transactions on Robotics Pub Date: 2026-01-15 DOI: 10.1109/TRO.2026.3654384
Lu Shi;Masih Haseli;Giorgos Mamakoukas;Daniel Bruder;Ian Abraham;Todd Murphey;Jorge Cortés;Konstantinos Karydis
{"title":"Koopman Operators in Robot Learning","authors":"Lu Shi;Masih Haseli;Giorgos Mamakoukas;Daniel Bruder;Ian Abraham;Todd Murphey;Jorge Cortés;Konstantinos Karydis","doi":"10.1109/TRO.2026.3654384","DOIUrl":"10.1109/TRO.2026.3654384","url":null,"abstract":"Koopman operator theory offers a rigorous treatment of dynamics, emerging as a robust alternative for learning-based control in robotics. By representing nonlinear dynamics as a linear, higher dimensional operator, it provides a fresh lens for modeling complex systems. Its ability to support incremental updates and low computational cost makes it particularly appealing for real-time applications and online learning. This review delves deeply into the foundations, systematically bridging theoretical principles to practical robotic applications. In this article, we explain mathematical underpinnings, approximation approaches for inputs, data collection strategies, and lifting function design. We explore how Koopman models unify tasks, such as model-based control, state estimation, and motion planning. The review surveys cutting-edge research across domains ranging from aerial and legged platforms to manipulators, soft robots, and multiagent networks. We also present advanced theoretical topics and reflect on open challenges and future research directions. To support adoption, we provide a hands-on tutorial with code at <uri>https://github.com/sunnyshi0310/KoopmanRobo/tree/main</uri>.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"42 ","pages":"1088-1107"},"PeriodicalIF":10.5,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145972390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
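As a concrete entry point to the survey's topic, the sketch below implements extended dynamic mode decomposition (EDMD), the standard data-driven approximation of a Koopman operator: lift state snapshots through a dictionary of observables, then solve a least-squares problem for a matrix K with psi(x_{k+1}) ≈ K psi(x_k). The tiny monomial dictionary and toy dynamics are assumptions for illustration; practical systems need richer dictionaries.

```python
import numpy as np

def edmd(X, Y, lift):
    """Extended dynamic mode decomposition.

    X, Y: (n, N) snapshot matrices, Y holding one-step successors of X.
    lift: function mapping an (n, N) batch of states to (m, N) features.
    Returns the (m, m) least-squares Koopman matrix K with
    lift(Y) ≈ K @ lift(X).
    """
    PsiX, PsiY = lift(X), lift(Y)
    return PsiY @ np.linalg.pinv(PsiX)   # K = Psi_Y Psi_X^+

# Monomial dictionary up to degree 2 for a scalar system.
lift = lambda X: np.vstack([np.ones_like(X), X, X**2])

# Data from the nonlinear map x' = 0.9*x - 0.1*x**2.
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=(1, 200))
y = 0.9 * x - 0.1 * x**2
K = edmd(x, y, lift)                 # 3x3 approximate Koopman matrix
x_next_pred = (K @ lift(x))[1]       # read back the state coordinate
```

The payoff is that prediction and control now operate on the linear system K in lifted space, so linear tools (LQR, Kalman filtering) apply to a nonlinear robot model, and K can be updated incrementally as new data arrive.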