Efficient multi-modal high-precision semantic segmentation from MLS point cloud without 3D annotation

IF 7.6 Q1 REMOTE SENSING
Yuan Wang, Pei Sun, Wenbo Chu, Yuhao Li, Yiping Chen, Hui Lin, Zhen Dong, Bisheng Yang, Chao He
{"title":"Efficient multi-modal high-precision semantic segmentation from MLS point cloud without 3D annotation","authors":"Yuan Wang ,&nbsp;Pei Sun ,&nbsp;Wenbo Chu ,&nbsp;Yuhao Li ,&nbsp;Yiping Chen ,&nbsp;Hui Lin ,&nbsp;Zhen Dong ,&nbsp;Bisheng Yang ,&nbsp;Chao He","doi":"10.1016/j.jag.2024.104243","DOIUrl":null,"url":null,"abstract":"<div><div>Quick and high-precision semantic segmentation from Mobile Laser Scanning (MLS) point clouds faces huge challenges such as large amounts of data, occlusion in complex scenes, and the high annotation cost associated with 3D point clouds. To tackle these challenges, this paper proposes a novel efficient and high-precision semantic segmentation method Mapping Considering Semantic Segmentation (MCSS) for MLS point clouds by leveraging the 2D-3D mapping relationship, which is not only without the need for labeling 3D samples but also complements missing information using multimodal data. According to the results of semantic segmentation on panoramic images by a neural network, a multi-frame mapping strategy and a local spatial similarity optimization method are proposed to project the panoramic image semantic predictions onto point clouds, thereby establishing coarse semantic information in the 3D domain. Then, a hierarchical geometric constraint model (HGCM) is designed to refine high-precision point cloud semantic segmentation. Comprehensive experimental evaluations demonstrate the effect and efficiency of our method in segmenting challenging large-scale MLS two datasets, achieving improvement by 16.8 % and 16.3 % compared with SPT. Furthermore, the proposed method takes an average of 8 s to process 1 million points and does not require annotation and training, surpassing previous methods in terms of efficiency.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"135 ","pages":"Article 104243"},"PeriodicalIF":7.6000,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of applied earth observation and geoinformation : ITC journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1569843224005995","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"REMOTE SENSING","Score":null,"Total":0}
Citations: 0

Abstract

Quick and high-precision semantic segmentation from Mobile Laser Scanning (MLS) point clouds faces major challenges, such as large data volumes, occlusion in complex scenes, and the high annotation cost of 3D point clouds. To tackle these challenges, this paper proposes a novel, efficient, and high-precision semantic segmentation method for MLS point clouds, Mapping Considering Semantic Segmentation (MCSS), which leverages the 2D-3D mapping relationship: it not only removes the need to label 3D samples but also complements missing information using multimodal data. Based on the semantic segmentation results produced by a neural network on panoramic images, a multi-frame mapping strategy and a local spatial similarity optimization method are proposed to project the panoramic-image semantic predictions onto the point clouds, thereby establishing coarse semantic information in the 3D domain. A hierarchical geometric constraint model (HGCM) is then designed to refine the result into high-precision point cloud semantic segmentation. Comprehensive experimental evaluations demonstrate the effectiveness and efficiency of our method on two challenging large-scale MLS datasets, where it improves over SPT by 16.8% and 16.3%. Furthermore, the proposed method takes an average of 8 s to process 1 million points and requires no annotation or training, surpassing previous methods in efficiency.
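The core idea of the 2D-3D mapping step is to project each MLS point into the panoramic image and read off the 2D semantic prediction at that pixel. The sketch below illustrates one common way to do this for an equirectangular panorama; it is a minimal illustration only, and the camera pose convention, image size, and function names (`project_points_to_panorama`, `transfer_labels`) are assumptions, not the paper's actual implementation (which additionally uses a multi-frame mapping strategy, local spatial similarity optimization, and HGCM refinement).

```python
# Hypothetical sketch: transferring panoramic-image semantic labels to MLS points
# via a single-frame equirectangular projection (illustrative only).
import numpy as np

def project_points_to_panorama(points_xyz, cam_pose, width, height):
    """Map 3D points (N, 3) in world coordinates to pixel coordinates of an
    equirectangular panorama. cam_pose is an assumed 4x4 world-to-camera matrix."""
    # Transform points into the camera frame (homogeneous coordinates).
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    cam = (cam_pose @ homo.T).T[:, :3]

    # Spherical angles: azimuth (theta) and elevation (phi).
    theta = np.arctan2(cam[:, 1], cam[:, 0])                              # [-pi, pi]
    phi = np.arcsin(cam[:, 2] / (np.linalg.norm(cam, axis=1) + 1e-12))    # [-pi/2, pi/2]

    # Equirectangular mapping from angles to pixel coordinates.
    u = ((theta + np.pi) / (2 * np.pi) * width).astype(int) % width
    v = ((np.pi / 2 - phi) / np.pi * height).astype(int).clip(0, height - 1)
    return u, v

def transfer_labels(points_xyz, cam_pose, semantic_image):
    """Assign each point the semantic class predicted at its projected pixel."""
    h, w = semantic_image.shape[:2]
    u, v = project_points_to_panorama(points_xyz, cam_pose, w, h)
    return semantic_image[v, u]
```

In practice a single frame leaves many points occluded or unlabeled, which is why the paper aggregates predictions over multiple frames and refines the coarse labels with geometric constraints.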
Source journal
International Journal of Applied Earth Observation and Geoinformation: ITC Journal
Subject areas: Global and Planetary Change; Management, Monitoring, Policy and Law; Earth-Surface Processes; Computers in Earth Sciences
CiteScore: 12.00
Self-citation rate: 0.00%
Number of articles: 0
Review time: 77 days
Journal description: The International Journal of Applied Earth Observation and Geoinformation publishes original papers that utilize earth observation data for natural resource and environmental inventory and management. These data primarily originate from remote sensing platforms, including satellites and aircraft, supplemented by surface and subsurface measurements. Addressing natural resources such as forests, agricultural land, soils, and water, as well as environmental concerns like biodiversity, land degradation, and hazards, the journal explores conceptual and data-driven approaches. It covers geoinformation themes like capturing, databasing, visualization, interpretation, data quality, and spatial uncertainty.