Reg-PointNet++: A CNN Network Based on PointNet++ Architecture for 3D Reconstruction of 3D Objects Modeled by Supershapes

JCR: Q3 (Computer Science)
Hassnae Remmach, Raja Mouachi, M. Sadgal, Aziz El Fazziki
{"title":"Reg-PointNet++:基于 PointNet++ 架构的 CNN 网络,用于超形状建模的三维物体的三维重建","authors":"Hassnae Remmach, Raja Mouachi, M. Sadgal, Aziz El Fazziki","doi":"10.18178/joig.11.4.405-413","DOIUrl":null,"url":null,"abstract":"The use of 3D reconstruction in computer vision applications has opened up new avenues for research and development. It has a significant impact on a range of industries, from healthcare to robotics, by improving the performance and abilities of computer vision systems. In this paper we aim to improve 3D reconstruction quality and accuracy. The objective is to develop a model that can learn to extract features, estimate a Supershape parameters and reconstruct 3D directly from input points cloud. In this regard, we present a continuity of our latest works, using a CNN-based Multi-Output and Multi-Task Regressor, for 3D reconstruction from 3D point cloud. We propose another new approach in order to refine our previous methodology and expand our findings. It is about “Reg-PointNet++”, which is mainly based on a PointNet++ architecture adapted for multi-task regression, with the goal of reconstructing a 3D object modeled by Supershapes from 3D point cloud. Given the difficulties encountered in applying convolution to point clouds, our approach is based on the PointNet ++ architecture. It is used to extract features from the 3D point cloud, which are then fed into a Multi-task Regressor for predicting the Supershape parameters needed to reconstruct the shape. The approach has shown promising results in reconstructing 3D objects modeled by Supershapes, demonstrating improved accuracy and robustness to noise and outperforming existing techniques. Visually, the predicted shapes have a high likelihood with the real shapes, as well as a high accuracy rate in a very reasonable number of iterations. Overall, the approach presented in the paper has the potential to significantly improve the accuracy and efficiency of 3D reconstruction, enabling its use in a wider range of applications.","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reg-PointNet++: A CNN Network Based on PointNet++ Architecture for 3D Reconstruction of 3D Objects Modeled by Supershapes\",\"authors\":\"Hassnae Remmach, Raja Mouachi, M. Sadgal, Aziz El Fazziki\",\"doi\":\"10.18178/joig.11.4.405-413\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The use of 3D reconstruction in computer vision applications has opened up new avenues for research and development. It has a significant impact on a range of industries, from healthcare to robotics, by improving the performance and abilities of computer vision systems. In this paper we aim to improve 3D reconstruction quality and accuracy. The objective is to develop a model that can learn to extract features, estimate a Supershape parameters and reconstruct 3D directly from input points cloud. In this regard, we present a continuity of our latest works, using a CNN-based Multi-Output and Multi-Task Regressor, for 3D reconstruction from 3D point cloud. We propose another new approach in order to refine our previous methodology and expand our findings. It is about “Reg-PointNet++”, which is mainly based on a PointNet++ architecture adapted for multi-task regression, with the goal of reconstructing a 3D object modeled by Supershapes from 3D point cloud. 
Given the difficulties encountered in applying convolution to point clouds, our approach is based on the PointNet ++ architecture. It is used to extract features from the 3D point cloud, which are then fed into a Multi-task Regressor for predicting the Supershape parameters needed to reconstruct the shape. The approach has shown promising results in reconstructing 3D objects modeled by Supershapes, demonstrating improved accuracy and robustness to noise and outperforming existing techniques. Visually, the predicted shapes have a high likelihood with the real shapes, as well as a high accuracy rate in a very reasonable number of iterations. Overall, the approach presented in the paper has the potential to significantly improve the accuracy and efficiency of 3D reconstruction, enabling its use in a wider range of applications.\",\"PeriodicalId\":36336,\"journal\":{\"name\":\"中国图象图形学报\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"中国图象图形学报\",\"FirstCategoryId\":\"1093\",\"ListUrlMain\":\"https://doi.org/10.18178/joig.11.4.405-413\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"中国图象图形学报","FirstCategoryId":"1093","ListUrlMain":"https://doi.org/10.18178/joig.11.4.405-413","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 0

Abstract

The use of 3D reconstruction in computer vision applications has opened up new avenues for research and development. By improving the performance and capabilities of computer vision systems, it has a significant impact on a range of industries, from healthcare to robotics. In this paper we aim to improve 3D reconstruction quality and accuracy. The objective is to develop a model that learns to extract features, estimate Supershape parameters, and reconstruct 3D shapes directly from an input point cloud. In this regard, we present a continuation of our previous work, which used a CNN-based multi-output, multi-task regressor for 3D reconstruction from 3D point clouds. We propose a new approach, "Reg-PointNet++", to refine our earlier methodology and extend our findings. It is based on a PointNet++ architecture adapted for multi-task regression, with the goal of reconstructing a 3D object modeled by Supershapes from a 3D point cloud. Given the difficulties of applying convolution directly to point clouds, our approach relies on PointNet++ to extract features from the 3D point cloud, which are then fed into a multi-task regressor that predicts the Supershape parameters needed to reconstruct the shape. The approach has shown promising results in reconstructing 3D objects modeled by Supershapes, demonstrating improved accuracy and robustness to noise and outperforming existing techniques. Visually, the predicted shapes closely resemble the real shapes, and a high accuracy rate is reached within a reasonable number of iterations. Overall, the approach presented in this paper has the potential to significantly improve the accuracy and efficiency of 3D reconstruction, enabling its use in a wider range of applications.
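For readers unfamiliar with Supershapes: they are surfaces generated by the Gielis superformula, so an entire 3D object is described by a small parameter vector, which is exactly the kind of target a parameter regressor can predict. The sketch below is a minimal illustration of that idea, not the authors' code; the (m, n1, n2, n3, a, b) parameterization and the sampling resolution follow standard superformula conventions and are assumptions here, so they may differ from the exact parameter set regressed in the paper.

```python
import numpy as np

def superformula(angle, m, n1, n2, n3, a=1.0, b=1.0):
    """Gielis superformula:
    r(angle) = (|cos(m*angle/4)/a|**n2 + |sin(m*angle/4)/b|**n3) ** (-1/n1)
    """
    t1 = np.abs(np.cos(m * angle / 4.0) / a) ** n2
    t2 = np.abs(np.sin(m * angle / 4.0) / b) ** n3
    return (t1 + t2) ** (-1.0 / n1)

def supershape_points(params_theta, params_phi, resolution=64):
    """Sample a 3D Supershape as the spherical product of two superformula profiles.

    params_theta / params_phi are (m, n1, n2, n3) tuples for the longitude and
    latitude profiles; returns a (resolution*resolution, 3) array of surface points.
    """
    theta = np.linspace(-np.pi, np.pi, resolution)        # longitude
    phi = np.linspace(-np.pi / 2, np.pi / 2, resolution)  # latitude
    theta, phi = np.meshgrid(theta, phi)

    r1 = superformula(theta, *params_theta)
    r2 = superformula(phi, *params_phi)

    x = r1 * np.cos(theta) * r2 * np.cos(phi)
    y = r1 * np.sin(theta) * r2 * np.cos(phi)
    z = r2 * np.sin(phi)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: with n1 = n2 = n3 = 2 and a = b = 1 both profiles reduce to circles,
# so the sampled Supershape is a unit sphere.
sphere = supershape_points((4, 2, 2, 2), (4, 2, 2, 2))
print(sphere.shape)  # (4096, 3)
```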
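To make the pipeline concrete, here is a heavily simplified sketch of the overall idea: a point-cloud encoder followed by a multi-task regression head that outputs one Supershape parameter vector per profile. It is an assumption-laden illustration, not the Reg-PointNet++ implementation: the PointNet++ set-abstraction stages (farthest-point sampling and ball-query grouping) are replaced by a plain PointNet-style shared MLP, and the head sizes, the grouping of parameters into two profiles of four, and all layer widths are invented for the example.

```python
import torch
import torch.nn as nn

class RegPointNetSketch(nn.Module):
    """Sketch of the Reg-PointNet++ idea: encode a point cloud into a global
    descriptor, then regress Supershape parameters with task-specific heads."""

    def __init__(self, params_per_profile: int = 4):
        super().__init__()
        # Shared per-point MLP implemented as 1x1 convolutions over the point axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        # Two regression heads, one per superformula profile (multi-task output).
        def head():
            return nn.Sequential(
                nn.Linear(1024, 256), nn.ReLU(),
                nn.Linear(256, params_per_profile),
            )
        self.head_theta = head()
        self.head_phi = head()

    def forward(self, points: torch.Tensor):
        # points: (batch, num_points, 3) -> (batch, 3, num_points) for Conv1d.
        features = self.encoder(points.transpose(1, 2))
        # Symmetric max pooling makes the descriptor invariant to point ordering.
        global_descriptor = torch.max(features, dim=2).values
        return self.head_theta(global_descriptor), self.head_phi(global_descriptor)

# Usage: predict parameters for two clouds of 1024 points each and regress them
# against ground-truth parameters with an L2 loss.
model = RegPointNetSketch()
clouds = torch.randn(2, 1024, 3)
pred_theta, pred_phi = model(clouds)
target = torch.randn(2, 4)
loss = nn.functional.mse_loss(pred_theta, target) + nn.functional.mse_loss(pred_phi, target)
```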
Source journal: 中国图象图形学报 (Computer Science - Computer Graphics and Computer-Aided Design)
CiteScore: 1.20
Self-citation rate: 0.00%
Annual publications: 6776