UPU-DGTNet: Dynamic Graph Transformer Network for Unsupervised Point Cloud Upsampling

Lixiang Deng, Bing Han, Shuang Ren
DOI: 10.1109/ICCC56324.2022.10065731
Published in: 2022 IEEE 8th International Conference on Computer and Communications (ICCC)
Publication date: 2022-12-09
Citations: 0

Abstract

Most existing point cloud upsampling approaches focus on exploiting dense ground truth point clouds as supervised information to upsample sparse point clouds. However, it is arduous to collect such high-quality paired sparse-dense datasets for training. Therefore, this paper proposes a novel unsupervised point cloud upsampling network, called UPU-DGTNet, which incorporates dynamic graph convolutions into hierarchical transformers to better encode local and global point features and generate dense, uniform point clouds without using ground truth point clouds. Specifically, we first propose a dynamic graph transformer (DGT) module as a feature extractor to encode multi-scale local and global point features. In addition, we develop a transformer shuffle (TS) module as an upsampler that leverages shifted channel cross attention (SCCA) to further aggregate and refine the multi-scale point features. Finally, we introduce the farthest point sampling (FPS) method into the reconstruction loss and combine it with a uniform loss to train the network, so that the output points preserve the original geometric structures and are distributed uniformly. Various experiments on synthetic and real-scanned datasets demonstrate that our method achieves impressive results and even competitive performance against some supervised methods.
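The FPS step mentioned in the abstract is a standard greedy procedure: it repeatedly picks the point farthest from everything selected so far, so the chosen subset covers the cloud's geometry. The sketch below is a minimal NumPy illustration of that general algorithm, not the paper's implementation; the function name and seeding choice are ours.

```python
import numpy as np

def farthest_point_sample(points, k):
    """Greedy farthest point sampling over an (n, 3) array.

    Returns k points that maximize mutual spacing, which is why FPS
    is commonly used to pick well-spread anchors for a point cloud.
    """
    n = points.shape[0]
    selected = np.zeros(k, dtype=int)
    # dist[j] tracks point j's distance to its nearest selected point.
    dist = np.full(n, np.inf)
    selected[0] = 0  # seed with the first point; any seed works
    for i in range(1, k):
        # Update nearest-selected distances with the latest pick.
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        # Next pick: the point farthest from all selected points.
        selected[i] = int(np.argmax(dist))
    return points[selected]
```

For example, on a cloud where two points nearly coincide, FPS skips the near-duplicate and keeps the spread-out points, which is the coverage property the uniform loss also encourages.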