ForestAlign: Automatic forest structure-based alignment for multi-view TLS and ALS point clouds

Science of Remote Sensing, IF 5.7, Q1 (Environmental Sciences)
Juan Castorena, L. Turin Dickman, Adam J. Killebrew, James R. Gattiker, Rod Linn, E. Louise Loudermilk
DOI: 10.1016/j.srs.2024.100194
Journal: Science of Remote Sensing, Volume 11, Article 100194
Publication date: 2025-01-06

Abstract

Access to highly detailed models of heterogeneous forests, spanning from the near surface to above the tree canopy at varying scales, is increasingly in demand. This enables advanced computational tools for analysis, planning, and ecosystem management. LiDAR sensors, available through terrestrial (TLS) and aerial (ALS) scanning platforms, have become established as primary technologies for forest monitoring due to their capability to rapidly and directly collect precise 3D structural information. Selection of these platforms typically depends on the scales (tree-level, plot, regional) required for observational or intervention studies. Forestry now recognizes the benefits of a multi-scale approach, leveraging the strengths of each platform while minimizing individual source uncertainties. However, effective integration of these LiDAR sources relies heavily on efficient multi-scale, multi-view co-registration or point-cloud alignment methods. In GPS-denied areas, forestry has traditionally relied on target-based co-registration methods (e.g., reflective targets or marked trees), which are impractical at scale. Here, we propose ForestAlign: an effective, target-less, and fully automatic co-registration method for aligning forest point clouds collected from multi-view, multi-scale LiDAR sources. Our co-registration approach employs an incremental alignment strategy, grouping and aggregating 3D points based on increasing levels of structural complexity. This strategy aligns 3D points sequentially, from less complex structures (e.g., ground surface) to more complex ones (e.g., tree trunks/branches, foliage), refining the alignment iteratively.

Empirical evidence demonstrates the method's effectiveness in aligning TLS-to-TLS and TLS-to-ALS scans locally, across various ecosystem conditions, including pre/post fire-treatment effects. In TLS-to-TLS scenarios, parameter RMSE errors were less than 0.75 degrees in rotation and 5.5 cm in translation. For TLS-to-ALS, the corresponding errors were less than 0.8 degrees and 8 cm, respectively. These results show that ForestAlign is effective for co-registering both TLS-to-TLS and TLS-to-ALS scans in such forest environments, achieving high performance without relying on targets.
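The incremental, coarse-to-fine strategy described in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: it assumes known point correspondences and uses height above ground as a crude proxy for structural complexity (both simplifications), with each stratum contributing a residual rigid transform estimated in closed form (the Kabsch algorithm) and composed onto the running estimate:

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form rigid transform (R, t) minimizing ||R p + t - q|| over correspondences."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def incremental_align(src, dst, strata):
    """Align src to dst stratum-by-stratum, from simple to complex structure."""
    R_est, t_est = np.eye(3), np.zeros(3)
    for mask in strata:                      # e.g., ground -> trunks -> foliage
        P = src[mask] @ R_est.T + t_est      # apply current estimate
        dR, dt = kabsch(P, dst[mask])        # residual transform on this stratum
        R_est, t_est = dR @ R_est, dR @ t_est + dt
    return R_est, t_est

# Synthetic demo: recover a known misalignment between two "scans".
rng = np.random.default_rng(0)
src = rng.uniform([-10, -10, 0], [10, 10, 20], size=(500, 3))
theta = np.deg2rad(3.0)                      # small ground-truth rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.4, -0.2, 0.05])
dst = src @ R_true.T + t_true

z = src[:, 2]                                # height as a complexity proxy (assumption)
strata = [z < 1.0, (z >= 1.0) & (z < 10.0), z >= 10.0]
R_est, t_est = incremental_align(src, dst, strata)
```

With noise-free, fully corresponded points the first (ground) stratum already recovers the exact transform and later strata contribute identity refinements; in practice the point-grouping and correspondence steps are where a method like ForestAlign does the real work.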