Visual Boundary-Guided Pseudo-Labeling for Weakly Supervised 3D Point Cloud Segmentation in Indoor Environments

Zhuo Su, Lang Zhou, Yudi Tan, Boliang Guan, Fan Zhou
{"title":"Visual Boundary-Guided Pseudo-Labeling for Weakly Supervised 3D Point Cloud Segmentation in Indoor Environments.","authors":"Zhuo Su, Lang Zhou, Yudi Tan, Boliang Guan, Fan Zhou","doi":"10.1109/TVCG.2024.3484654","DOIUrl":null,"url":null,"abstract":"<p><p>Accurate segmentation of 3D point clouds in indoor scenes remains a challenging task, often hindered by the labor-intensive nature of data annotation. While weakly supervised learning approaches have shown promise in leveraging partial annotations, they frequently struggle with imbalanced performance between foreground and background elements due to the complex structures and proximity of objects in indoor environments. To address this issue, we propose a novel foreground-aware label enhancement method utilizing visual boundary priors. Our approach projects 3D point clouds onto 2D planes and applies 2D image segmentation to generate pseudo-labels for foreground objects. These labels are subsequently back-projected into 3D space and used to train an initial segmentation model. We further refine this process by incorporating prior knowledge from projected images to filter the predicted labels, followed by model retraining. We introduce this technique as the Foreground Boundary Prior (FBP), a versatile, plug-and-play module designed to enhance various weakly supervised point cloud segmentation methods. We demonstrate the efficacy of our approach on the widely-used 2D-3D-Semantic dataset, employing both random-sample and bounding-box based weak labeling strategies. Our experimental results show significant improvements in segmentation performance across different architectural backbones, highlighting the method's effectiveness and portability.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2024.3484654","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Accurate segmentation of 3D point clouds in indoor scenes remains a challenging task, often hindered by the labor-intensive nature of data annotation. While weakly supervised learning approaches have shown promise in leveraging partial annotations, they frequently struggle with imbalanced performance between foreground and background elements due to the complex structures and proximity of objects in indoor environments. To address this issue, we propose a novel foreground-aware label enhancement method utilizing visual boundary priors. Our approach projects 3D point clouds onto 2D planes and applies 2D image segmentation to generate pseudo-labels for foreground objects. These labels are subsequently back-projected into 3D space and used to train an initial segmentation model. We further refine this process by incorporating prior knowledge from projected images to filter the predicted labels, followed by model retraining. We introduce this technique as the Foreground Boundary Prior (FBP), a versatile, plug-and-play module designed to enhance various weakly supervised point cloud segmentation methods. We demonstrate the efficacy of our approach on the widely-used 2D-3D-Semantic dataset, employing both random-sample and bounding-box based weak labeling strategies. Our experimental results show significant improvements in segmentation performance across different architectural backbones, highlighting the method's effectiveness and portability.
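The abstract describes a project / segment / back-project pipeline for generating foreground pseudo-labels. The sketch below is a rough illustration of that idea only, not the authors' implementation: it maps points onto a top-down 2D grid, applies a stand-in 2D "segmentation" (a simple height threshold, where the paper uses an actual 2D image segmentation step guided by visual boundary priors), and carries the resulting foreground mask back to each point as a pseudo-label. The orthographic projection, the grid resolution, and all function names are illustrative assumptions.

# Minimal sketch of projection-based pseudo-labeling (illustrative only).
import numpy as np

def project_to_grid(points, resolution=0.05):
    """Orthographic top-down projection: map each (x, y, z) point to a 2D pixel."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    pix = np.floor((xy - origin) / resolution).astype(np.int64)  # (N, 2) pixel indices
    h, w = pix.max(axis=0) + 1
    return pix, (int(h), int(w))

def segment_2d(height_map):
    """Stand-in for a 2D segmentation model: pixels above the median height are
    treated as 'foreground'. A real pipeline would run an image segmenter here."""
    return height_map > np.median(height_map)

def pseudo_label(points, resolution=0.05):
    """Project points to a grid, segment in 2D, back-project labels to 3D points."""
    pix, (h, w) = project_to_grid(points, resolution)
    height = np.zeros((h, w))
    np.maximum.at(height, (pix[:, 0], pix[:, 1]), points[:, 2])  # max z per pixel
    mask = segment_2d(height)                                    # 2D foreground mask
    return mask[pix[:, 0], pix[:, 1]]                            # per-point pseudo-label

if __name__ == "__main__":
    pts = np.random.rand(1000, 3) * [4.0, 4.0, 2.5]   # synthetic indoor scan, metres
    labels = pseudo_label(pts)
    print("foreground points:", int(labels.sum()), "/", len(pts))

These per-point pseudo-labels would then serve as the weak supervision for training the initial 3D segmentation model, after which predictions are filtered against the projected-image priors and the model is retrained, as the abstract describes.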
