Authors: Wenhui Wei; Yangfan Zhou; Yimin Hu; Zhi Li; Sen Wang; Xin Liu; Jiadong Li
IEEE Transactions on Robotics, vol. 41, pp. 3760–3778. DOI: 10.1109/TRO.2025.3577054. Published 2025-06-05.
https://ieeexplore.ieee.org/document/11024235/
BotVIO: A Lightweight Transformer-Based Visual–Inertial Odometry for Robotics
Visual–inertial odometry (VIO) provides a robust localization solution for simultaneous localization and mapping (SLAM) systems. Self-supervised VIO, a leading approach, has the advantage of not requiring extensive ground-truth labels. However, its computational complexity, which stems from inadequate model designs, still limits robotic applications, particularly on uncrewed aerial vehicles. To address this bottleneck, we introduce BotVIO (where "Bot" refers to "robotics"), a transformer-based self-supervised VIO model that alleviates the computational burden for robotics. Its lightweight backbone combines shallow CNNs with spatial–temporal-enhanced transformers in place of conventional architectures, while a minimalist cross-fusion module uses single-layer cross-attention to enhance multimodal interaction. Extensive experiments show that, for pose estimation, BotVIO reduces trainable parameters by a remarkable 70.37% and inference time by 74.85%, reaching up to 57.80 fps on an NVIDIA Jetson NX (10 W, 2 cores), while improving pose accuracy and robustness. For the benefit of the community, we make the source code publicly available.
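The abstract's "minimalist cross-fusion module" applies a single layer of cross-attention between visual and inertial features. The paper's actual implementation is not reproduced here; the following is a rough NumPy sketch of generic single-layer cross-attention (the function name `cross_attention` and the random projection weights are purely illustrative, standing in for learned parameters), in which visual tokens query inertial tokens to produce fused features:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(visual, inertial, seed=0):
    """Single-layer cross-attention: visual tokens (T_v, D) attend to
    inertial tokens (T_i, D). Returns fused features of shape (T_v, D).
    Projection weights are random stand-ins for learned parameters."""
    T_v, D = visual.shape
    rng = np.random.default_rng(seed)
    W_q = rng.standard_normal((D, D)) / np.sqrt(D)  # query projection
    W_k = rng.standard_normal((D, D)) / np.sqrt(D)  # key projection
    W_v = rng.standard_normal((D, D)) / np.sqrt(D)  # value projection
    Q = visual @ W_q
    K = inertial @ W_k
    V = inertial @ W_v
    attn = softmax(Q @ K.T / np.sqrt(D))   # (T_v, T_i) attention weights
    return visual + attn @ V               # residual connection

# Example: fuse 4 visual tokens with 10 IMU tokens, feature dim 32.
vis = np.random.default_rng(1).standard_normal((4, 32))
imu = np.random.default_rng(2).standard_normal((10, 32))
fused = cross_attention(vis, imu)
print(fused.shape)  # (4, 32)
```

The fused output keeps the visual sequence length, so a single such layer can sit between the two modality encoders and the pose head without adding significant parameter count, which is consistent with the lightweight design the abstract describes.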
About the Journal:
The IEEE Transactions on Robotics (T-RO) is dedicated to publishing fundamental papers covering all facets of robotics, drawing on interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, and beyond. From industrial applications to service and personal assistants, surgical operations to space, underwater, and remote exploration, robots and intelligent machines play pivotal roles across various domains, including entertainment, safety, search and rescue, military applications, agriculture, and intelligent vehicles.
Special emphasis is placed on intelligent machines and systems designed for unstructured environments, where a significant portion of the environment remains unknown and beyond direct sensing or control.