Dense Voxel 3D Reconstruction Using a Monocular Event Camera

Haodong Chen, Yuk Ying Chung, Li Tan, Xiaoming Chen
{"title":"Dense Voxel 3D Reconstruction Using a Monocular Event Camera","authors":"Haodong Chen, Yuk Ying Chung, Li Tan, Xiaoming Chen","doi":"10.1109/ICVR57957.2023.10169359","DOIUrl":null,"url":null,"abstract":"Event cameras are sensors inspired by biological systems that specialize in capturing changes in brightness. These emerging cameras offer many advantages over conventional frame-based cameras, including high dynamic range, high frame rates, and extremely low power consumption. Due to these advantages, event cameras have increasingly been adapted in various fields, such as frame interpolation, semantic segmentation, odometry, and SLAM. However, their application in 3D reconstruction for VR applications is underexplored. Previous methods in this field mainly focused on 3D reconstruction through depth map estimation. Methods that produce dense 3D reconstruction generally require multiple cameras, while methods that utilize a single event camera can only produce a semi-dense result. Other single-camera methods that can produce dense 3D reconstruction rely on creating a pipeline that either incorporates the aforementioned methods or other existing Structure from Motion (SfM) or Multi-view Stereo (MVS) methods. In this paper, we propose a novel approach for solving dense 3D reconstruction using only a single event camera. To the best of our knowledge, our work is the first attempt in this regard. Our preliminary results demonstrate that the proposed method can produce visually distinguishable dense 3D reconstructions directly without requiring pipelines like those used by existing methods. Additionally, we have created a synthetic dataset with 39, 739 object scans using an event camera simulator. This dataset will help accelerate other relevant research in this field.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"41 5","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 9th International Conference on Virtual Reality (ICVR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICVR57957.2023.10169359","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Event cameras are sensors inspired by biological systems that specialize in capturing changes in brightness. These emerging cameras offer many advantages over conventional frame-based cameras, including high dynamic range, high frame rates, and extremely low power consumption. Due to these advantages, event cameras have increasingly been adopted in various fields, such as frame interpolation, semantic segmentation, odometry, and SLAM. However, their use in 3D reconstruction for VR applications remains underexplored. Previous methods in this field mainly focused on 3D reconstruction through depth map estimation. Methods that produce dense 3D reconstructions generally require multiple cameras, while methods that use a single event camera can only produce semi-dense results. Other single-camera methods that can produce dense 3D reconstructions rely on pipelines that incorporate either the aforementioned methods or existing Structure from Motion (SfM) or Multi-view Stereo (MVS) methods. In this paper, we propose a novel approach to dense 3D reconstruction using only a single event camera. To the best of our knowledge, our work is the first attempt in this regard. Our preliminary results demonstrate that the proposed method can produce visually distinguishable dense 3D reconstructions directly, without requiring pipelines like those used by existing methods. Additionally, we have created a synthetic dataset with 39,739 object scans using an event camera simulator. This dataset will help accelerate other relevant research in this field.
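For readers unfamiliar with event data: an event camera emits a tuple (x, y, t, p) whenever the log-intensity at pixel (x, y) changes by more than a contrast threshold, with polarity p indicating the sign of the change. The abstract does not specify how the raw event stream is encoded before reconstruction; a common input encoding in event-based learning pipelines is the spatiotemporal voxel grid, which bins events into a fixed number of temporal channels. The following is a minimal sketch of that standard encoding, not the authors' confirmed front end; the function name and interface are illustrative assumptions.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an event stream into a spatiotemporal voxel grid.

    events: (N, 4) array of (x, y, t, polarity), sorted by timestamp,
            with polarity in {-1, +1}.  (Hypothetical interface for
            illustration; not taken from the paper.)
    Returns a (num_bins, height, width) float array where each event's
    polarity is bilinearly distributed across the two nearest time bins.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    x = events[:, 0].astype(np.int64)
    y = events[:, 1].astype(np.int64)
    t = events[:, 2].astype(np.float64)
    p = events[:, 3].astype(np.float32)

    # Normalize timestamps to the range [0, num_bins - 1].
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    t0 = np.floor(t_norm).astype(np.int64)  # lower temporal bin index
    frac = (t_norm - t0).astype(np.float32)  # distance past the lower bin

    # Bilinear vote in time: split each event between the two adjacent bins.
    np.add.at(voxel, (t0, y, x), p * (1.0 - frac))
    valid = t0 + 1 < num_bins
    np.add.at(voxel, (t0[valid] + 1, y[valid], x[valid]), p[valid] * frac[valid])
    return voxel
```

A grid like this can serve as the dense input tensor to a network that predicts voxel occupancy; the abstract does not describe the paper's actual architecture, so this sketch only illustrates what "voxelized" event input typically looks like.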