{"title":"Impedance Learning-based Adaptive Force Tracking for Robot on Unknown Terrains","authors":"Yanghong Li, Li Zheng, Yahao Wang, Erbao Dong, Shiwu Zhang","doi":"10.1109/tro.2025.3530345","DOIUrl":"https://doi.org/10.1109/tro.2025.3530345","url":null,"abstract":"","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"24 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142986813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GCBF+: A Neural Graph Control Barrier Function Framework for Distributed Safe Multi-Agent Control","authors":"Songyuan Zhang, Oswin So, Kunal Garg, Chuchu Fan","doi":"10.1109/tro.2025.3530348","DOIUrl":"https://doi.org/10.1109/tro.2025.3530348","url":null,"abstract":"","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"3 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142986814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ThinTact: Thin Vision-Based Tactile Sensor by Lensless Imaging","authors":"Jing Xu;Weihang Chen;Hongyu Qian;Dan Wu;Rui Chen","doi":"10.1109/TRO.2025.3530319","DOIUrl":"10.1109/TRO.2025.3530319","url":null,"abstract":"Vision-based tactile sensors have drawn increasing interest in the robotics community. However, traditional lens-based designs impose minimum thickness constraints on these sensors, limiting their applicability in space-restricted settings. In this article, we propose ThinTact, a novel lensless vision-based tactile sensor with a sensing field of over 200 mm<inline-formula><tex-math>${}^{2}$</tex-math></inline-formula> and a thickness of less than 10 mm. ThinTact utilizes a mask-based lensless imaging technique to map contact information to CMOS signals. To ensure real-time tactile sensing, we propose a real-time lensless reconstruction algorithm that leverages a frequency-spatial-domain joint filter based on the discrete cosine transform. This algorithm is significantly faster than existing optimization-based methods. In addition, to improve the sensing quality, we develop a mask optimization method based on the genetic algorithm, along with a corresponding system-matrix calibration algorithm. We evaluate the performance of our proposed lensless reconstruction and tactile sensing through qualitative and quantitative experiments. 
Furthermore, we demonstrate ThinTact's practical applicability in diverse applications, including texture recognition and contact-rich object manipulation.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"1139-1154"},"PeriodicalIF":9.4,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142986815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LIGO: A Tightly Coupled LiDAR-Inertial-GNSS Odometry Based on a Hierarchy Fusion Framework for Global Localization With Real-Time Mapping","authors":"Dongjiao He;Haotian Li;Jie Yin","doi":"10.1109/TRO.2025.3530298","DOIUrl":"10.1109/TRO.2025.3530298","url":null,"abstract":"This article introduces a method for tightly fusing sensors with diverse characteristics to maximize their complementary properties, thereby surpassing the performance of the individual components. Specifically, we propose a tightly coupled light detection and ranging (LiDAR)-inertial-global navigation satellite system (GNSS) odometry (LIGO) system, which synthesizes the advantages of LiDAR, an inertial measurement unit (IMU), and GNSS. Integrating LiDAR with an IMU delivers remarkable precision and robustness in highly dynamic, high-speed motions. However, LiDAR-Inertial systems encounter limitations in feature-scarce environments or during large-scale movements. GNSS integration overcomes these challenges by providing global and absolute measurements. LIGO employs an innovative hierarchical fusion approach with both front-end and back-end components to achieve synergistic performance. The front-end of LIGO utilizes a tightly coupled, extended Kalman filter (EKF)-based LiDAR-Inertial system for high-bandwidth localization and real-time mapping within a local-world frame. The back-end tightly integrates the filtered LiDAR-Inertial factors from the front-end with GNSS observations in an extensive factor graph, making it more robust to outliers and noise in GNSS observations and producing optimized, globally referenced state estimates. These optimized back-end results are then fed back to the front-end through the EKF to ensure a drift-free trajectory, particularly in degenerate and large-scale scenarios. 
Real-world experiments validate the effectiveness of LIGO, especially when applied to aerial vehicles with outlier-prone GNSS data, demonstrating its resilience to signal losses and data quality fluctuations. LIGO outperforms comparable systems, offering enhanced accuracy and reliability across varying conditions.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"1224-1244"},"PeriodicalIF":9.4,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142986816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biomimetic Underwater Soft Snake Robot: Self-Motion Sensing and Online Gait Control","authors":"Hang Shi;Yali Meng;Wenlong Cui;Meng Rao;Shuting Wang;Yangmin Xie","doi":"10.1109/TRO.2025.3530349","DOIUrl":"10.1109/TRO.2025.3530349","url":null,"abstract":"This study draws inspiration from the locomotion and adaptability of aquatic snakes to develop “BaiLong,” an innovative soft-bodied, hydraulically driven, untethered underwater snake robot. The robot consists of a segmented soft structure and embeds the actuation, control, and power modules in the head. Featuring self-shape perception, it leverages an online iterative learning control method to effectively mitigate body-shape deformation errors and attain precise gait movements. As a result, the soft robot achieves movements emulating the serpentine motion of real snakes, with locomotion consistency equivalent to that of rigid robots. Extensive experiments in both artificial and natural aquatic environments demonstrated improved swimming speed among soft snake robots and promising turning agility, and revealed the influence of gait parameters on linear velocity, characterized by a near-constant Strouhal number. This investigation demonstrates the swimming feasibility and performance of underwater soft snake robots and significantly advances their capabilities for long-range applications.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"1193-1210"},"PeriodicalIF":9.4,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142986391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting Information Theory for Intuitive Robot Programming of Manual Activities","authors":"Elena Merlo;Marta Lagomarsino;Edoardo Lamon;Arash Ajoudani","doi":"10.1109/TRO.2025.3530267","DOIUrl":"10.1109/TRO.2025.3530267","url":null,"abstract":"Observational learning is a promising approach to enable people without programming expertise to transfer skills to robots in a user-friendly manner, since it mirrors how humans learn new behaviors by observing others. Many existing methods focus on instructing robots to mimic human trajectories, but such motion-level strategies often struggle to generalize skills across diverse environments. This article proposes a novel framework that allows robots to achieve a <italic>higher-level</italic> understanding of human-demonstrated manual tasks recorded in RGB videos. By recognizing the task structure and goals, robots generalize what they observe to unseen scenarios. We ground our task representation in Shannon's Information Theory (IT), which is applied for the first time to manual tasks. IT helps extract the active scene elements and quantify the information shared between hands and objects. We exploit scene graph properties to encode the extracted interaction features in a compact structure and to segment the demonstration into blocks, streamlining the generation of behavior trees for the robot to replicate the task. Experiments validated the effectiveness of IT in automatically generating robot execution plans from a single human demonstration. 
In addition, we provide HANDSOME, an open-source dataset of HAND Skills demOnstrated by Multi-subjEcts, to promote further research and evaluation in this field.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"1245-1262"},"PeriodicalIF":9.4,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142986389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Benchmarking of a Multi-Modality Sensor for Robotic Manipulation With GAN-Based Cross-Modality Interpretation","authors":"Dandan Zhang, Wen Fan, Jialin Lin, Haoran Li, Qingzheng Cong, Weiru Liu, Nathan F. Lepora, Shan Luo","doi":"10.1109/tro.2025.3526296","DOIUrl":"https://doi.org/10.1109/tro.2025.3526296","url":null,"abstract":"","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"50 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142936530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Riemannian Optimization for Active Mapping With Robot Teams","authors":"Arash Asgharivaskasi;Fritz Girke;Nikolay Atanasov","doi":"10.1109/TRO.2025.3526295","DOIUrl":"10.1109/TRO.2025.3526295","url":null,"abstract":"Autonomous exploration of unknown environments using a team of mobile robots demands distributed perception and planning strategies to enable efficient and scalable performance. Ideally, each robot should update its map and plan its motion not only relying on its own observations, but also considering the observations of its peers. Centralized solutions to multirobot coordination are susceptible to central node failure and require a sophisticated communication infrastructure for reliable operation. Current decentralized active mapping methods consider simplistic robot models with linear-Gaussian observations and Euclidean robot states. In this work, we present a distributed multirobot mapping and planning method, called Riemannian optimization for active mapping (ROAM). We formulate an optimization problem over a graph with node variables belonging to a Riemannian manifold and a consensus constraint requiring feasible solutions to agree on the node variables. We develop a distributed Riemannian optimization algorithm that relies only on one-hop communication to solve the problem with consensus and optimality guarantees. We show that multirobot active mapping can be achieved via two applications of our distributed Riemannian optimization over different manifolds: distributed estimation of a 3-D semantic map and distributed planning of <inline-formula><tex-math>$\text{SE}(3)$</tex-math></inline-formula> trajectories that minimize map uncertainty. 
We demonstrate the performance of ROAM in simulation and real-world experiments using a team of robots with RGB-D cameras.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"1077-1097"},"PeriodicalIF":9.4,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142934938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}