Reg-PointNet++: A CNN Network Based on PointNet++ Architecture for 3D Reconstruction of 3D Objects Modeled by Supershapes

Hassnae Remmach, Raja Mouachi, M. Sadgal, Aziz El Fazziki
DOI: 10.18178/joig.11.4.405-413
Journal: 中国图象图形学报 (Journal of Image and Graphics), JCR Q3 (Computer Science)
Published: 2023-12-01 (Journal Article)
Citation count: 0

Abstract

The use of 3D reconstruction in computer vision applications has opened new avenues for research and development. It has a significant impact on a range of industries, from healthcare to robotics, by improving the performance and capabilities of computer vision systems. In this paper we aim to improve the quality and accuracy of 3D reconstruction. The objective is to develop a model that learns to extract features, estimate Supershape parameters, and reconstruct a 3D object directly from an input point cloud. We present a continuation of our earlier work on a CNN-based multi-output, multi-task regressor for 3D reconstruction from point clouds, and propose a new approach, "Reg-PointNet++", that refines our previous methodology and extends our findings. Reg-PointNet++ is based on a PointNet++ architecture adapted for multi-task regression, with the goal of reconstructing a 3D object modeled by Supershapes from a 3D point cloud. Given the difficulty of applying convolution directly to point clouds, our approach uses PointNet++ to extract features from the 3D point cloud, which are then fed into a multi-task regressor that predicts the Supershape parameters needed to reconstruct the shape. The approach has shown promising results in reconstructing 3D objects modeled by Supershapes, demonstrating improved accuracy, robustness to noise, and performance that exceeds existing techniques. Visually, the predicted shapes closely resemble the ground-truth shapes, and a high accuracy rate is reached in a reasonable number of iterations. Overall, the approach presented in the paper has the potential to significantly improve the accuracy and efficiency of 3D reconstruction, enabling its use in a wider range of applications.
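The Supershapes the abstract refers to are generated by Gielis's superformula; a 3D supershape is the spherical product of two 2D superformula curves, which is why a handful of scalar parameters suffice to describe the whole surface. A minimal NumPy sketch of that sampling step (parameter names follow standard superformula notation; the specific parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

def superformula(angle, m, n1, n2, n3, a=1.0, b=1.0):
    """Gielis superformula: radius r(angle) of one 2D cross-section curve."""
    t = m * angle / 4.0
    return (np.abs(np.cos(t) / a) ** n2 + np.abs(np.sin(t) / b) ** n3) ** (-1.0 / n1)

def supershape_points(params1, params2, n_theta=64, n_phi=32):
    """Sample a 3D supershape as the spherical product of two superformulas."""
    theta = np.linspace(-np.pi, np.pi, n_theta)       # longitude
    phi = np.linspace(-np.pi / 2, np.pi / 2, n_phi)   # latitude
    theta, phi = np.meshgrid(theta, phi)
    r1 = superformula(theta, *params1)                # radial profile along theta
    r2 = superformula(phi, *params2)                  # radial profile along phi
    x = r1 * np.cos(theta) * r2 * np.cos(phi)
    y = r1 * np.sin(theta) * r2 * np.cos(phi)
    z = r2 * np.sin(phi)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Illustrative (m, n1, n2, n3) values; the network in the paper would predict these.
pts = supershape_points((7, 0.2, 1.7, 1.7), (7, 0.2, 1.7, 1.7))
```

Reconstruction then amounts to recovering `(m, n1, n2, n3, a, b)` for each of the two curves from an observed point cloud, which is the regression target of Reg-PointNet++.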
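The pipeline the abstract describes — a shared point-cloud feature extractor feeding several regression heads — can be sketched as a forward pass. This is a hypothetical illustration only: the shared PointNet++ backbone is stubbed as a per-point MLP followed by a permutation-invariant max-pool, and the layer widths and parameter grouping are assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_layer(x, w, b):
    """One fully connected layer with ReLU activation."""
    return np.maximum(x @ w + b, 0.0)

def extract_global_feature(points, dim=128):
    """Stub for the backbone: per-point MLP, then max-pool over points.

    Max-pooling over the point axis makes the feature invariant to the
    ordering of the input points, as in PointNet/PointNet++.
    """
    w = rng.standard_normal((3, dim)) * 0.1
    b = np.zeros(dim)
    per_point = relu_layer(points, w, b)   # (N, dim) per-point features
    return per_point.max(axis=0)           # (dim,) global shape feature

def multitask_heads(feature, tasks=(("m", 1), ("n1_n2_n3", 3), ("a_b", 2))):
    """One small linear regression head per Supershape parameter group.

    The grouping of parameters into tasks here is an assumption for
    illustration; regression outputs use no final activation.
    """
    return {
        name: feature @ (rng.standard_normal((feature.shape[0], k)) * 0.1) + np.zeros(k)
        for name, k in tasks
    }

points = rng.standard_normal((1024, 3))    # a dummy input point cloud
feat = extract_global_feature(points)
preds = multitask_heads(feat)
```

In the actual method, the stubbed backbone would be replaced by PointNet++ set-abstraction layers, and the heads would be trained jointly against ground-truth Supershape parameters.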
Source journal: 中国图象图形学报 (Journal of Image and Graphics) — Computer Science: Computer Graphics and Computer-Aided Design
CiteScore: 1.20
Self-citation rate: 0.00%
Articles published: 6776
Journal introduction: Journal of Image and Graphics (ISSN 1006-8961, CN 11-3758/TB, CODEN ZTTXFZ) is an authoritative academic journal supervised by the Chinese Academy of Sciences and co-sponsored by the Institute of Space and Astronautical Information Innovation of the Chinese Academy of Sciences (ISIAS), the Chinese Society of Image and Graphics (CSIG), and the Beijing Institute of Applied Physics and Computational Mathematics (BIAPM). The journal integrates high-tech theory, technical methods, and the industrialisation of applied research results in computer image and graphics, and mainly publishes innovative, high-level scientific research papers on basic and applied research in image and graphics science and its closely related fields. Paper formats include reviews, technical reports, project progress, academic news, new technology reviews, new product introductions, and industrialisation research. The content covers a wide range of fields such as image analysis and recognition, image understanding and computer vision, computer graphics, virtual reality and augmented reality, system simulation, and animation, with special columns organised around research hotspots and cutting-edge topics. The journal reaches a wide readership, including scientific and technical personnel, enterprise supervisors, and postgraduate and undergraduate students working in national defence, military, aviation, aerospace, communications, electronics, automotive, agriculture, meteorology, environmental protection, remote sensing, mapping, oil fields, construction, transportation, finance, telecommunications, education, medical care, film and television, and art.
Journal of Image and Graphics is indexed in many important domestic and international scientific literature databases, including the EBSCO database (United States), the JST database (Japan), the Scopus database (Netherlands), China Science and Technology Thesis Statistics and Analysis (Annual Research Report), the China Science Citation Database (CSCD), the China Academic Journal Network Publishing Database (CAJD), China Academic Journal Abstracts, Chinese Science Abstracts (Series A), China Electronic Science Abstracts, Chinese Core Journals Abstracts, Chinese Academic Journals on CD-ROM, and the China Academic Journals Comprehensive Evaluation Database.