UPU-DGTNet: Dynamic Graph Transformer Network for Unsupervised Point Cloud Upsampling

Authors: Lixiang Deng, Bing Han, Shuang Ren
Published in: 2022 IEEE 8th International Conference on Computer and Communications (ICCC), 2022-12-09
DOI: 10.1109/ICCC56324.2022.10065731
Abstract
Most existing point cloud upsampling approaches rely on dense ground-truth point clouds as supervision for upsampling sparse point clouds. However, collecting such high-quality paired sparse-dense datasets for training is arduous. This paper therefore proposes a novel unsupervised point cloud upsampling network, called UPU-DGTNet, which incorporates dynamic graph convolutions into hierarchical transformers to better encode local and global point features and to generate dense, uniform point clouds without ground-truth supervision. Specifically, we first propose a dynamic graph transformer (DGT) module as a feature extractor that encodes multi-scale local and global point features. In addition, we develop a transformer shuffle (TS) module as an upsampler that leverages shifted channel cross attention (SCCA) to further aggregate and refine the multi-scale point features. Finally, we introduce the farthest point sampling (FPS) method into the reconstruction loss and combine it with a uniform loss to train the network, so that the output points preserve the original geometric structure and are distributed uniformly. Extensive experiments on synthetic and real-scanned datasets demonstrate that our method achieves impressive results, with performance competitive against some supervised methods.
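Farthest point sampling, which the loss design above builds on, is a standard greedy subsampling algorithm: starting from a seed point, it repeatedly selects the point farthest from everything already chosen, yielding a well-spread subset. A minimal NumPy sketch of generic FPS follows; it is an illustration of the classic algorithm, not the authors' implementation, and the function name and seed choice are assumptions.

```python
import numpy as np

def farthest_point_sample(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Greedy farthest point sampling (illustrative sketch).

    points: (N, 3) array of coordinates.
    Returns the indices of the n_samples selected points.
    """
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    # Distance from each point to its nearest already-selected point.
    dist = np.full(n, np.inf)
    # Seed with the first point; a random seed is also common.
    selected[0] = 0
    for i in range(1, n_samples):
        # Update nearest-selected distances using the last pick.
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        # Pick the point farthest from the current selection.
        selected[i] = int(np.argmax(dist))
    return selected
```

For example, sampling two points from a set clustered at the two ends of a segment returns the two extremes, which is why FPS-based downsampling in a reconstruction loss encourages coverage of the whole shape rather than of dense regions only.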