Sparse-E2VID: A Sparse Convolutional Model for Event-Based Video Reconstruction Trained with Real Event Noise

Pablo Rodrigo Gantier Cadena, Yeqiang Qian, Chunxiang Wang, Ming Yang
Published in: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023
DOI: 10.1109/CVPRW59228.2023.00437
Citations: 0

Abstract

Event cameras are image sensors inspired by biology and offer several advantages over traditional frame-based cameras. However, most algorithms for reconstructing images from event camera data do not exploit the sparsity of events, resulting in inefficient zero-filled data. Given that event cameras typically have a sparse index of 90% or higher, this is particularly wasteful. In this work, we propose a sparse model, Sparse-E2VID, that efficiently reconstructs event-based images, reducing inference time by 30%. Our model takes advantage of the sparsity of event data, making it more computationally efficient, and scales better at higher resolutions. Additionally, by using data augmentation and real noise from an event camera, our model reconstructs nearly noise-free images. In summary, our proposed model efficiently and accurately reconstructs images from event camera data by exploiting the sparsity of events. This has the potential to greatly improve the performance of event-based applications, particularly at higher resolutions. Some results can be seen in the following video: https://youtu.be/sFH9zp6kuWE, 1.
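The abstract's core observation — that a dense tensor built from event-camera data is typically 90% or more zeros, so dense convolutions waste most of their work — can be illustrated with a minimal sketch. The representation below (a polarity-accumulating voxel grid and a COO-style list of active sites) is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

# Hypothetical illustration (not the paper's code): events are (x, y, t, p)
# tuples that touch only a few pixels, so a dense voxel grid built from a
# short time window of events is overwhelmingly zero-filled.
H, W, BINS = 180, 240, 5           # DAVIS-like resolution, 5 temporal bins
rng = np.random.default_rng(0)

n_events = 2000                    # a sparse burst of events
xs = rng.integers(0, W, n_events)
ys = rng.integers(0, H, n_events)
bs = rng.integers(0, BINS, n_events)
ps = rng.choice([-1.0, 1.0], n_events)

# Dense representation: accumulate polarities into the voxel grid.
dense = np.zeros((BINS, H, W), dtype=np.float32)
np.add.at(dense, (bs, ys, xs), ps)  # unbuffered add handles repeated indices

sparsity = 1.0 - np.count_nonzero(dense) / dense.size
print(f"sparsity: {sparsity:.1%}")  # well above 90% zeros

# Sparse (COO-style) representation stores only the active sites — the
# input format a sparse-convolution library operates on, skipping zeros.
coords = np.argwhere(dense != 0)    # (N, 3) indices of active voxels
values = dense[dense != 0]          # (N,)  accumulated polarities
print(coords.shape[0], "active voxels out of", dense.size)
```

Feeding only `coords`/`values` to a sparse convolution, rather than the full `dense` grid to a standard one, is what lets compute scale with the number of events instead of the sensor resolution — consistent with the reported gains at higher resolutions.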