Eigenfeature‐enhanced deep learning: advancing tree species classification in mixed conifer forests with lidar

Impact factor 3.9 · JCR Q1 (Ecology) · CAS Region 2 (Environmental Science & Ecology)
Ryan C. Blackburn, Robert Buscaglia, Andrew J. Sánchez Meador, Margaret M. Moore, Temuulen Sankey, Steven E. Sesnie
{"title":"Eigenfeature‐enhanced deep learning: advancing tree species classification in mixed conifer forests with lidar","authors":"Ryan C. Blackburn, Robert Buscaglia, Andrew J. Sánchez Meador, Margaret M. Moore, Temuulen Sankey, Steven E. Sesnie","doi":"10.1002/rse2.70014","DOIUrl":null,"url":null,"abstract":"Accurately classifying tree species using remotely sensed data remains a significant challenge, yet it is essential for forest monitoring and understanding ecosystem dynamics over large spatial extents. While light detection and ranging (lidar) has shown promise for species classification, its accuracy typically decreases in complex forests or with lower lidar point densities. Recent advancements in lidar processing and machine learning offer new opportunities to leverage previously unavailable structural information. In this study, we present an automated machine learning pipeline that reduces practitioner burden by utilizing canonical deep learning and improved input layers through the derivation of eigenfeatures. These eigenfeatures were used as inputs for a 2D convolutional neural network (CNN) to classify seven tree species in the Mogollon Rim Ranger District of the Coconino National Forest, AZ, US. We compared eigenfeature images derived from unoccupied aerial vehicle laser scanning (UAV‐LS) and airborne laser scanning (ALS) individual tree segmentation algorithms against raw intensity and colorless control images. Remarkably, mean overall accuracies for classifying seven species reached 94.8% for ALS and 93.4% for UAV‐LS. White image types underperformed for both ALS and UAV‐LS compared to eigenfeature images, while ALS and UAV‐LS image types showed marginal differences in model performance. These results demonstrate that lower point density ALS data can achieve high classification accuracy when paired with eigenfeatures in an automated pipeline. This study advances the field by addressing species classification at scales ranging from individual trees to landscapes, offering a scalable and efficient approach for understanding tree composition in complex forests.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"47 1","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Remote Sensing in Ecology and Conservation","FirstCategoryId":"93","ListUrlMain":"https://doi.org/10.1002/rse2.70014","RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ECOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Accurately classifying tree species using remotely sensed data remains a significant challenge, yet it is essential for forest monitoring and understanding ecosystem dynamics over large spatial extents. While light detection and ranging (lidar) has shown promise for species classification, its accuracy typically decreases in complex forests or with lower lidar point densities. Recent advancements in lidar processing and machine learning offer new opportunities to leverage previously unavailable structural information. In this study, we present an automated machine learning pipeline that reduces practitioner burden by utilizing canonical deep learning and improved input layers through the derivation of eigenfeatures. These eigenfeatures were used as inputs for a 2D convolutional neural network (CNN) to classify seven tree species in the Mogollon Rim Ranger District of the Coconino National Forest, AZ, US. We compared eigenfeature images derived from unoccupied aerial vehicle laser scanning (UAV‐LS) and airborne laser scanning (ALS) individual tree segmentation algorithms against raw intensity and colorless control images. Remarkably, mean overall accuracies for classifying seven species reached 94.8% for ALS and 93.4% for UAV‐LS. White image types underperformed for both ALS and UAV‐LS compared to eigenfeature images, while ALS and UAV‐LS image types showed marginal differences in model performance. These results demonstrate that lower point density ALS data can achieve high classification accuracy when paired with eigenfeatures in an automated pipeline. This study advances the field by addressing species classification at scales ranging from individual trees to landscapes, offering a scalable and efficient approach for understanding tree composition in complex forests.
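
The paper's pipeline is not reproduced here, but the underlying idea of lidar eigenfeatures is standard: for each point, the eigenvalues of the covariance matrix of its local neighbourhood yield geometric descriptors (e.g., linearity, planarity, sphericity) that can be rasterized into image channels and passed to a 2D CNN. The sketch below illustrates that workflow in Python; the neighbourhood size, pixel resolution, image size, and network layers are illustrative assumptions rather than the authors' settings, and the function names (`eigenfeatures`, `rasterize`, `build_cnn`) are hypothetical.

```python
# Minimal sketch (not the authors' code): eigenfeatures from a lidar point
# cloud, rasterized into a 3-channel image, plus a generic 2D CNN.
# Assumptions: `points` is an (N, 3) array of x, y, z for one segmented tree;
# k, res, size, and the network layers are illustrative choices.
import numpy as np
from scipy.spatial import cKDTree
from tensorflow import keras


def eigenfeatures(points: np.ndarray, k: int = 20) -> np.ndarray:
    """Per-point linearity, planarity, sphericity from the eigenvalues of
    the k-nearest-neighbour covariance matrix."""
    k = min(k, len(points))                      # guard against small clouds
    _, idx = cKDTree(points).query(points, k=k)  # neighbour indices per point
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        lam = np.linalg.eigvalsh(np.cov(points[nbrs].T))  # ascending eigenvalues
        l3, l2, l1 = lam                         # relabel so that l1 >= l2 >= l3
        l1 = max(l1, 1e-12)                      # avoid division by zero
        feats[i] = [(l1 - l2) / l1,              # linearity
                    (l2 - l3) / l1,              # planarity
                    l3 / l1]                     # sphericity
    return feats


def rasterize(points: np.ndarray, feats: np.ndarray,
              res: float = 0.25, size: int = 64) -> np.ndarray:
    """Average per-point features into a (size, size, 3) x-y image grid."""
    xy = points[:, :2] - points[:, :2].min(axis=0)
    cols = np.clip((xy[:, 0] / res).astype(int), 0, size - 1)
    rows = np.clip((xy[:, 1] / res).astype(int), 0, size - 1)
    img = np.zeros((size, size, 3))
    cnt = np.zeros((size, size, 1))
    np.add.at(img, (rows, cols), feats)          # sum features per pixel
    np.add.at(cnt, (rows, cols), 1.0)            # count points per pixel
    return img / np.maximum(cnt, 1.0)            # mean feature per pixel


def build_cnn(num_classes: int = 7, size: int = 64) -> keras.Model:
    """A small, generic image classifier (not the paper's architecture)."""
    return keras.Sequential([
        keras.layers.Input(shape=(size, size, 3)),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
```

In practice, the per-tree point clouds would come from the UAV-LS or ALS individual-tree segmentation step described in the abstract, and the CNN would be trained on labelled crowns for the seven target species.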
Source journal: Remote Sensing in Ecology and Conservation (Earth and Planetary Sciences-Computers in Earth Sciences)
CiteScore: 9.80
Self-citation rate: 5.50%
Annual articles: 69
Review time: 18 weeks
Journal description: Remote Sensing in Ecology and Conservation provides a forum for rapid, peer-reviewed publication of novel, multidisciplinary research at the interface between remote sensing science and ecology and conservation. The journal prioritizes findings that advance the scientific basis of ecology and conservation, promoting the development of remote-sensing-based methods relevant to the management of land use and biological systems at all levels, from populations and species to ecosystems and biomes. The journal defines remote sensing in its broadest sense, including data acquisition by hand-held and fixed ground-based sensors, such as camera traps and acoustic recorders, and sensors on airplanes and satellites. The journal's intended audience includes ecologists, conservation scientists, policy makers, managers of terrestrial and aquatic systems, remote sensing scientists, and students. Remote Sensing in Ecology and Conservation is a fully open access journal from Wiley and the Zoological Society of London. Remote sensing has enormous potential to provide information on the state of, and pressures on, biological diversity and ecosystem services at multiple spatial and temporal scales. This publication provides a forum for multidisciplinary research in remote sensing science, ecological research and conservation science.