Exploring high-contrast areas context for 3D point cloud segmentation via MLP-driven Discrepancy mechanism
Yuyuan Shao, Guofeng Tong, Hao Peng
Computers & Graphics, Volume 129, Article 104222 (published 2025-04-18)
DOI: 10.1016/j.cag.2025.104222
URL: https://www.sciencedirect.com/science/article/pii/S0097849325000639
Impact factor: 2.5; JCR Q2 (Computer Science, Software Engineering)
Code: https://github.com/ShaoyuyuanNEU/PointHC
Citations: 0
Abstract
Recent advancements in 3D point cloud segmentation, such as PointNext and PointVector, revisit the concise PointNet++ architecture. However, these networks struggle to capture sufficient contextual features in significant high-contrast areas. To address this, we propose a High-contrast Global Context Reasoning (HGCR) module and a Self-discrepancy Attention Encoding (SDAE) block to explore the global and local context in high-contrast regions, respectively. Specifically, HGCR leverages an MLP-driven Discrepancy (MLPD) mechanism and a Mean-pooling function to promote long-range information interactions between high-contrast areas and the 3D scene. SDAE expands the degrees of freedom of attention weights using an MLP-driven Self-discrepancy (MLPSD) strategy, enabling the extraction of discriminative local context in adjacent high-contrast areas. Finally, we propose a deep network called PointHC, which follows the architecture of PointNext and PointVector. Our PointHC achieves a mIoU of 74.3% on S3DIS (Area 5), delivering superior performance compared to recent methods, surpassing PointNext by 3.5% and PointVector by 2.0%, while using fewer parameters (22.4M). Moreover, we demonstrate competitive performance with a mIoU of 79.8% on S3DIS (6-fold cross-validation), improving upon PointNext by 4.9% and PointVector by 1.4%. Code is available at https://github.com/ShaoyuyuanNEU/PointHC.
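The abstract describes the MLPD mechanism only at a high level: per-point features are contrasted against a mean-pooled scene descriptor, and an MLP turns that discrepancy into a weighting that injects global context back into each point. The sketch below is a minimal, hypothetical NumPy illustration of that general discrepancy-plus-mean-pooling idea; the function and parameter names (`mlpd_global_context`, `w1`, `b1`, `w2`, `b2`) are our own, and the actual HGCR module in the paper's code will differ in structure and detail.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with ReLU, applied pointwise to each feature row.
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def mlpd_global_context(points, w1, b1, w2, b2):
    """Hypothetical sketch of an MLP-driven discrepancy step:
    contrast each point's feature with a mean-pooled scene feature,
    map the discrepancy through an MLP, and use the result to
    modulate how much global context each point receives."""
    g = points.mean(axis=0, keepdims=True)    # mean-pooled scene descriptor
    disc = points - g                         # per-point discrepancy from the scene
    weights = mlp(disc, w1, b1, w2, b2)       # MLP-driven weighting of the discrepancy
    return points + weights * g               # re-inject weighted global context

# Toy usage on random features (8 points, 4 channels).
rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 4))
w1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
w2 = rng.normal(size=(8, 4)); b2 = np.zeros(4)
out = mlpd_global_context(pts, w1, b1, w2, b2)
```

The output keeps the per-point feature shape, so a block like this could slot into a PointNet++-style encoder stage without changing downstream dimensions.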
About the journal:
Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.