Evaluating SLAM Performance with Synthesized Datasets from Unreal-Based Emulators

Muchang Bahng
{"title":"Evaluating SLAM Performance with Synthesized Datasets from Unreal-Based Emulators","authors":"Muchang Bahng","doi":"10.55894/dv2.22","DOIUrl":null,"url":null,"abstract":"The rapid advancement in visual-inertial simultaneous localization and mapping (SLAM) has opened numerous applications in computer vision. However, the scarcity of high quality, publicly accessible datasets hampers the evaluation of SLAM performance in varied and tailored environments. In this study, I employed the AirSim simulator and the Unreal Engine 4 to generate a trajectory resembling that of the TUM VI Room 1 ground truth dataset within the ArchViz indoor environment representing a well-lit, furnished room. I further modified the environment and trajectory through various expansions, addition of features, and data smoothing to ensure a more stable sequence of input frames into the SLAM architecture. I then examined the efficiency of visual ORB-SLAM3 by inputting images of resolution 256×144 and 512×288 at 30 frames per second (FPS), while also adjusting the feature threshold - the maximum number of feature points that ORB-SLAM3 tracks per frame. This investigation of the camera parameters within AirSim and ORB-SLAM3 has led to the essential finding that the resolution of the input images must coincide with the dimensions of the film. The subsequent runs under these variables reveal that higher resolution images lead to considerably better tracking, with an optimal feature threshold ranging between 3000~12000 feature points per frame. Moreover, ORB- SLAM3 demonstrated significantly enhanced robustness within dynamic environments containing moving objects when using higher resolution inputs, with a decreased error of close to 0cm compared to 23.19cm for lower resolutions (averaged over three runs). Finally, I conducted qualitative testing using real-life indoor environments recorded with an iPhone Xr camera, which produces results that highlight the challenges faced by ORB-SLAM3 due to factors such as glare and motion blur.","PeriodicalId":299908,"journal":{"name":"Vertices: Duke's Undergraduate Research Journal","volume":"264 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Vertices: Duke's Undergraduate Research Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.55894/dv2.22","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The rapid advancement of visual-inertial simultaneous localization and mapping (SLAM) has opened numerous applications in computer vision. However, the scarcity of high-quality, publicly accessible datasets hampers the evaluation of SLAM performance in varied and tailored environments. In this study, I employed the AirSim simulator and Unreal Engine 4 to generate a trajectory resembling that of the TUM VI Room 1 ground-truth dataset within the ArchViz indoor environment, which represents a well-lit, furnished room. I further modified the environment and trajectory through various expansions, the addition of features, and data smoothing to ensure a more stable sequence of input frames for the SLAM architecture. I then examined the performance of visual ORB-SLAM3 by inputting images of resolution 256×144 and 512×288 at 30 frames per second (FPS), while also adjusting the feature threshold, the maximum number of feature points that ORB-SLAM3 tracks per frame. This investigation of the camera parameters within AirSim and ORB-SLAM3 led to the essential finding that the resolution of the input images must coincide with the dimensions of the film. Subsequent runs under these variables reveal that higher-resolution images lead to considerably better tracking, with an optimal feature threshold between 3,000 and 12,000 feature points per frame. Moreover, ORB-SLAM3 demonstrated significantly enhanced robustness in dynamic environments containing moving objects when using higher-resolution inputs, with an error of close to 0 cm compared to 23.19 cm for the lower resolution (averaged over three runs). Finally, I conducted qualitative testing on real-life indoor environments recorded with an iPhone XR camera, which produced results that highlight the challenges ORB-SLAM3 faces due to factors such as glare and motion blur.
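
The abstract summarizes the data-collection pipeline without showing it; the sketch below illustrates how such a capture loop typically looks with the standard AirSim Python client. The output folder, the placeholder trajectory, and the use of camera "0" in ComputerVision mode are illustrative assumptions rather than details taken from the paper. The capture resolution itself is configured in AirSim's settings.json (CaptureSettings) and, per the finding above, must match the image dimensions declared in the ORB-SLAM3 camera calibration file.

```python
# Minimal sketch (not the paper's actual script): step a simulated camera
# through a precomputed pose list in AirSim and dump PNG frames for ORB-SLAM3.
# Assumes AirSim runs in ComputerVision mode and that settings.json sets the
# scene capture to 512x288 so the frames match the resolution declared in the
# ORB-SLAM3 camera calibration file.
import os
import airsim

OUT_DIR = "airsim_frames"   # hypothetical output folder
FPS = 30                    # frame rate used in the study

# Placeholder trajectory: (x, y, z, pitch, roll, yaw) poses in AirSim's NED
# frame, e.g. resampled and smoothed from a TUM-VI-style ground-truth file.
trajectory = [(0.0, 0.0, -1.5, 0.0, 0.0, 0.0)]

client = airsim.VehicleClient()
client.confirmConnection()
os.makedirs(OUT_DIR, exist_ok=True)

for i, (x, y, z, pitch, roll, yaw) in enumerate(trajectory):
    pose = airsim.Pose(airsim.Vector3r(x, y, z),
                       airsim.to_quaternion(pitch, roll, yaw))
    client.simSetVehiclePose(pose, ignore_collision=True)

    # PNG-compressed scene capture from the default camera "0".
    response = client.simGetImages([
        airsim.ImageRequest("0", airsim.ImageType.Scene)
    ])[0]

    # Name frames by timestamp so they can later be associated with the
    # ground-truth trajectory for evaluation.
    airsim.write_file(os.path.join(OUT_DIR, f"{i / FPS:.6f}.png"),
                      response.image_data_uint8)
```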
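
The abstract reports trajectory error in centimetres but does not name the metric; a common choice for this kind of comparison is the root-mean-square absolute trajectory error (ATE) after rigidly aligning the estimated trajectory to the ground truth. The sketch below shows one standard way to compute such a figure (Kabsch/Horn alignment without scale); treat it as an assumed evaluation procedure, not the paper's published code, and note that the function name ate_rmse_cm is my own.

```python
# Illustrative ATE-RMSE computation (assumed metric, not the paper's code):
# rigidly align the estimated positions to ground truth, then report the RMSE
# of the per-pose translation residuals in centimetres.
import numpy as np

def ate_rmse_cm(gt: np.ndarray, est: np.ndarray) -> float:
    """gt, est: (N, 3) arrays of time-associated camera positions in metres."""
    gt_c = gt - gt.mean(axis=0)
    est_c = est - est.mean(axis=0)

    # Optimal rotation (Kabsch): SVD of the 3x3 cross-covariance matrix.
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against reflections
        S[2, 2] = -1.0
    R = (U @ S @ Vt).T              # rotation taking est into the gt frame

    aligned = est_c @ R.T + gt.mean(axis=0)
    residuals = np.linalg.norm(aligned - gt, axis=1)
    return 100.0 * float(np.sqrt(np.mean(residuals ** 2)))  # metres -> cm
```

Given two time-associated N×3 position arrays, ate_rmse_cm(gt, est) returns a single scalar on the same scale as the 23.19 cm figure quoted above.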