Dense 3D point-cloud model using optical flow for a monocular reconstruction system

Yakov Diskin, V. Asari
{"title":"Dense 3D point-cloud model using optical flow for a monocular reconstruction system","authors":"Yakov Diskin, V. Asari","doi":"10.1109/AIPR.2013.6749315","DOIUrl":null,"url":null,"abstract":"In this paper, we present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned ground vehicle. An unmanned system can use the technique to construct a point cloud model of its unknown surroundings. The algorithm presented focuses on the 3D reconstruction of a scene using image sequences captured by only a single moving camera. The original reconstruction process, resulting with a point cloud, was computed utilizing extracted and matched Speeded Up Robust Feature (SURF) points from subsequent video frames. Using depth triangulation analysis, we were able to compute the depth of each feature point within the scene. We concluded that although SURF points are accurate and extremely distinctive, the number of points extracted and matched was not sufficient for our applications. A sparse point cloud model hinders the ability to do further processing for the autonomous system such as object recognition or self-positioning. We present an enhanced version of the algorithm which increases the number of points within the model while maintaining the near real-time computational speeds and accuracy of the original sparse reconstruction. We do so by generating points using both global image characteristics and local SURF feature neighborhood information. Specifically, we generate optical flow disparities using the Horn-Schunck optical flow estimation technique and evaluate the quality of these features for disparity calculations using the SURF keypoint detection method. Areas of the image that locate within SURF feature neighborhoods are tracked using optical flow and used to compute an extremely dense model. The enhanced model contains the high frequency details of the scene that allow for 3D object recognition. The main contribution of the newly added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud model in relation to real-world measurements.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"256 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2013.6749315","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

In this paper, we present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned ground vehicle. An unmanned system can use the technique to construct a point cloud model of its unknown surroundings. The algorithm presented focuses on the 3D reconstruction of a scene using image sequences captured by only a single moving camera. The original reconstruction process, resulting in a point cloud, was computed using Speeded Up Robust Features (SURF) points extracted and matched from subsequent video frames. Using depth triangulation analysis, we were able to compute the depth of each feature point within the scene. We concluded that although SURF points are accurate and extremely distinctive, the number of points extracted and matched was not sufficient for our applications. A sparse point cloud model hinders further processing by the autonomous system, such as object recognition or self-positioning. We present an enhanced version of the algorithm that increases the number of points within the model while maintaining the near real-time computational speed and accuracy of the original sparse reconstruction. We do so by generating points using both global image characteristics and local SURF feature neighborhood information. Specifically, we generate optical flow disparities using the Horn-Schunck optical flow estimation technique and evaluate the quality of these features for disparity calculations using the SURF keypoint detection method. Areas of the image that lie within SURF feature neighborhoods are tracked using optical flow and used to compute an extremely dense model. The enhanced model contains the high-frequency details of the scene that allow for 3D object recognition. The main contribution of the newly added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud model in relation to real-world measurements.
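The sparse stage described above pairs SURF keypoint matching with depth triangulation. Below is a minimal sketch of that stage using OpenCV; the projection matrices P1 and P2 are assumed to come from an external pose source (the abstract does not describe the paper's pose-recovery step), and the Hessian and ratio-test thresholds are common defaults rather than values from the paper.

```python
# Hedged sketch of the sparse stage: SURF matching between consecutive
# frames followed by linear triangulation. P1 and P2 (3x4 projection
# matrices) are assumed known here; the paper's own pose recovery is
# not described in the abstract.
import cv2
import numpy as np

def sparse_cloud(frame1, frame2, P1, P2):
    # SURF lives in the opencv-contrib xfeatures2d module and must be
    # enabled at build time (it is patented and off by default).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(frame1, None)
    kp2, des2 = surf.detectAndCompute(frame2, None)

    # Lowe-style ratio test; a common choice, not necessarily the paper's.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < 0.7 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T  # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T  # 2xN

    # Triangulate matched pairs into homogeneous 3D points (4xN),
    # then dehomogenize into an Nx3 Euclidean point cloud.
    X = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (X[:3] / X[3]).T
```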
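The densification step relies on Horn-Schunck optical flow. The following is a minimal NumPy implementation of the classic Horn-Schunck iteration for reference; the smoothness weight alpha and the iteration count are illustrative assumptions, not parameters reported by the authors.

```python
# Minimal Horn-Schunck optical flow in NumPy, shown for reference.
# alpha and num_iter are illustrative assumptions, not the paper's values.
import numpy as np
from scipy.ndimage import correlate

def horn_schunck(im1, im2, alpha=1.0, num_iter=100):
    """Estimate dense flow (u, v) from im1 to im2 (grayscale arrays)."""
    im1 = im1.astype(np.float32)
    im2 = im2.astype(np.float32)

    # Derivatives over a 2x2x2 cube, as in the original 1981 formulation.
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    kt = 0.25 * np.ones((2, 2))
    Ix = correlate(im1, kx) + correlate(im2, kx)
    Iy = correlate(im1, ky) + correlate(im2, ky)
    It = correlate(im2, kt) - correlate(im1, kt)

    # Weighted-average kernel for the neighborhood flow estimate.
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6],
                    [1/12, 1/6, 1/12]])

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    den = alpha**2 + Ix**2 + Iy**2
    for _ in range(num_iter):
        u_avg = correlate(u, avg)
        v_avg = correlate(v, avg)
        # Jacobi-style update from the Horn-Schunck Euler-Lagrange equations.
        num = Ix * u_avg + Iy * v_avg + It
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

In the paper's pipeline, the resulting flow field is then evaluated against SURF keypoint neighborhoods, and only pixels tracked within those neighborhoods contribute disparities to the dense model; that masking step is omitted from this sketch.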