EVLINS: Strong Robust Navigation System Based on Event Camera

IF 8.9 · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS · CAS Tier 1 (Computer Science)
Xueli Guo;Zhichao Wen;Xuanxuan Zhang;Yizhou Xue;Sikang Liu;Tianxiang Zhang;Xin Xia;You Li
DOI: 10.1109/JIOT.2025.3552778 · IEEE Internet of Things Journal, vol. 12, no. 13, pp. 23636-23650 · Published 2025-03-19

Accurate positioning and navigation capabilities are essential for Internet of Things (IoT) devices. Event cameras, inspired by biological vision sensors, exhibit robust performance in high-dynamic and low-texture environments and are particularly suitable for IoT applications. However, event cameras face challenges with accuracy and scale in conventional slow-motion scenarios. Conversely, light detection and ranging (LiDAR) offers high precision in normal motion conditions but degrades significantly under high-dynamic motion. To integrate the advantages of both sensors, this article introduces the EVLINS algorithm, a multisource elastic fusion method based on an extended Kalman filter (EKF). This algorithm combines event-visual-inertial odometry (EVIO), LiDAR-inertial odometry (LIO), and an inertial measurement unit (IMU), utilizing a loosely coupled trajectory-layer post-processing technique. The algorithm leverages the robustness of event cameras in highly dynamic environments and the precision of LiDAR in conventional settings, utilizing normalized uncertainty and nonholonomic constraint (NHC) strategies to address LIO's degradation and EVIO's accuracy issues. Thorough testing in various indoor and outdoor scenarios with real-world data demonstrates that EVLINS exhibits significantly improved accuracy and robustness compared to both the LIO and EVIO algorithms. In large-scale, high-dynamic outdoor environments, EVLINS achieves a 3-D position accuracy of 0.68% over 1333.58 m, improving by 33.21% over LIO and by 96.10% over EVIO, which diverged mid-way. In extreme indoor dynamic scenarios, EVLINS reduces maximum position error by 41.55% and improves overall position accuracy by 43.48% compared to LIO, and improves it by 22.96% compared to EVIO.
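The abstract describes a loosely coupled, trajectory-layer fusion in which each odometry source is weighted by its normalized uncertainty. A minimal sketch of that weighting idea follows; the function name and the use of a single scalar uncertainty per source (e.g., the trace of its position covariance) are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch of trajectory-layer fusion with normalized-uncertainty
# weighting, in the spirit of the loosely coupled scheme described above.
# Assumption (not from the paper): each odometry source reports a 3-D
# position plus one scalar uncertainty (e.g., trace of its covariance).

def fuse_position(p_evio, u_evio, p_lio, u_lio):
    """Blend two position estimates; the more certain source gets more weight."""
    w_evio = (1.0 / u_evio) / (1.0 / u_evio + 1.0 / u_lio)
    w_lio = 1.0 - w_evio
    return [w_evio * a + w_lio * b for a, b in zip(p_evio, p_lio)]

# Example: LIO is confident (u = 1.0) while EVIO is noisy (u = 9.0),
# so the fused estimate stays close to the LIO position.
fused = fuse_position([1.0, 0.0, 0.0], 9.0, [0.0, 0.0, 0.0], 1.0)
print(fused)  # x component is ~0.1, i.e., 90% of the weight on LIO
```

In a full EKF pipeline this weighting would enter through the measurement covariances rather than an explicit blend, but the behavior is the same: when LIO degrades under high-dynamic motion its uncertainty grows, and the fused trajectory elastically shifts toward EVIO.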
Citations: 0

Abstract

Source journal
IEEE Internet of Things Journal (Computer Science, Information Systems)
CiteScore: 17.60
Self-citation rate: 13.20%
Annual articles: 1982
Journal description: The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impacts on sensor technologies, big data management, and future internet design for applications like smart cities and smart homes. Fields of interest include IoT architecture such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standards development organizations (SDOs) such as IEEE, IETF, ITU, 3GPP, ETSI, etc.