Exploring high-contrast areas context for 3D point cloud segmentation via MLP-driven Discrepancy mechanism

IF 2.5 · JCR Q2 (Computer Science, Software Engineering) · CAS Tier 4 (Computer Science)
Yuyuan Shao, Guofeng Tong, Hao Peng
DOI: 10.1016/j.cag.2025.104222
Journal: Computers & Graphics, Volume 129, Article 104222
Published: 2025-04-18
Citations: 0

Abstract

Recent advancements in 3D point cloud segmentation, such as PointNext and PointVector, revisit the concise PointNet++ architecture. However, these networks struggle to capture sufficient contextual features in significant high-contrast areas. To address this, we propose a High-contrast Global Context Reasoning (HGCR) module and a Self-discrepancy Attention Encoding (SDAE) block to explore the global and local context in high-contrast regions, respectively. Specifically, HGCR leverages an MLP-driven Discrepancy (MLPD) mechanism and a Mean-pooling function to promote long-range information interactions between high-contrast areas and the 3D scene. SDAE expands the degrees of freedom of the attention weights using an MLP-driven Self-discrepancy (MLPSD) strategy, enabling the extraction of discriminative local context in adjacent high-contrast areas. Finally, we propose a deep network called PointHC, which follows the architecture of PointNext and PointVector. Our PointHC achieves a mIoU of 74.3% on S3DIS (Area 5), delivering superior performance compared to recent methods, surpassing PointNext by 3.5% and PointVector by 2.0%, while using fewer parameters (22.4M). Moreover, we demonstrate competitive performance with a mIoU of 79.8% on S3DIS (6-fold cross-validation), improving upon PointNext by 4.9% and PointVector by 1.4%. Code is available at https://github.com/ShaoyuyuanNEU/PointHC.
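The abstract describes the MLPSD strategy as mapping the discrepancy between a point and its neighbors through an MLP to obtain per-channel attention weights, rather than a single scalar score per neighbor. The authors' actual implementation is in the linked repository; the following is only a minimal numpy sketch of that general idea, with all layer sizes and parameter names being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with ReLU, applied along the last (channel) axis.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def self_discrepancy_attention(center, neighbors, params):
    """Hedged sketch of an MLP-driven self-discrepancy attention.

    center:    (N, C)    per-point features
    neighbors: (N, K, C) features of the K neighbors of each point
    Returns aggregated features of shape (N, C).
    """
    w1, b1, w2, b2 = params
    # Self-discrepancy: difference between each neighbor and its center point.
    disc = neighbors - center[:, None, :]           # (N, K, C)
    # An MLP maps the discrepancy to per-channel attention logits, giving the
    # weights more degrees of freedom than a single scalar per neighbor.
    logits = mlp(disc, w1, b1, w2, b2)              # (N, K, C)
    # Numerically stable softmax over the neighborhood axis.
    logits -= logits.max(axis=1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)         # (N, K, C)
    # Attention-weighted aggregation of neighbor features.
    return (attn * neighbors).sum(axis=1)           # (N, C)

# Toy shapes: 4 points, 8 neighbors each, 16 channels, hidden width 32.
N, K, C, H = 4, 8, 16, 32
params = (rng.normal(size=(C, H)) * 0.1, np.zeros(H),
          rng.normal(size=(H, C)) * 0.1, np.zeros(C))
center = rng.normal(size=(N, C))
neighbors = rng.normal(size=(N, K, C))
out = self_discrepancy_attention(center, neighbors, params)
print(out.shape)  # (4, 16)
```

Because the softmax is taken independently per channel, each output channel can attend to a different subset of neighbors, which is one plausible reading of "expanding the degrees of freedom of the attention weights."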


Source journal: Computers & Graphics (Engineering & Technology — Computer Science, Software Engineering)
CiteScore: 5.30
Self-citation rate: 12.00%
Annual articles: 173
Review time: 38 days
Journal description: Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on: 1. Research and applications of interactive computer graphics, with particular interest in novel interaction techniques and applications of CG to problem domains. 2. State-of-the-art papers on late-breaking, cutting-edge research on CG. 3. Information on innovative uses of graphics principles and technologies. 4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.