Eigenfeature‐enhanced deep learning: advancing tree species classification in mixed conifer forests with lidar
Ryan C. Blackburn, Robert Buscaglia, Andrew J. Sánchez Meador, Margaret M. Moore, Temuulen Sankey, Steven E. Sesnie
Remote Sensing in Ecology and Conservation (JCR Q1, Ecology; IF 3.9). Published 2025-06-09. DOI: 10.1002/rse2.70014
Abstract
Accurately classifying tree species using remotely sensed data remains a significant challenge, yet it is essential for forest monitoring and understanding ecosystem dynamics over large spatial extents. While light detection and ranging (lidar) has shown promise for species classification, its accuracy typically decreases in complex forests or with lower lidar point densities. Recent advancements in lidar processing and machine learning offer new opportunities to leverage previously unavailable structural information. In this study, we present an automated machine learning pipeline that reduces practitioner burden by utilizing canonical deep learning and improved input layers through the derivation of eigenfeatures. These eigenfeatures were used as inputs for a 2D convolutional neural network (CNN) to classify seven tree species in the Mogollon Rim Ranger District of the Coconino National Forest, AZ, US. We compared eigenfeature images derived from unoccupied aerial vehicle laser scanning (UAV‐LS) and airborne laser scanning (ALS) individual tree segmentation algorithms against raw intensity and colorless control images. Remarkably, mean overall accuracies for classifying seven species reached 94.8% for ALS and 93.4% for UAV‐LS. White (colorless control) image types underperformed for both ALS and UAV‐LS compared to eigenfeature images, while ALS and UAV‐LS image types showed marginal differences in model performance. These results demonstrate that lower point density ALS data can achieve high classification accuracy when paired with eigenfeatures in an automated pipeline. This study advances the field by addressing species classification at scales ranging from individual trees to landscapes, offering a scalable and efficient approach for understanding tree composition in complex forests.
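In the lidar literature, "eigenfeatures" conventionally denote shape descriptors obtained by eigen-decomposing the covariance of each point's local neighborhood. The sketch below illustrates that standard derivation; it is not the authors' pipeline, and the neighborhood size k and the linearity/planarity/sphericity formulas are common conventions assumed here for illustration.

```python
# Minimal sketch of per-point eigenfeature derivation from a lidar cloud.
# Assumptions (not from the paper): k-nearest-neighbor neighborhoods and the
# standard linearity/planarity/sphericity definitions from the lidar literature.
import numpy as np
from scipy.spatial import cKDTree

def eigenfeatures(points: np.ndarray, k: int = 20) -> np.ndarray:
    """Return per-point (linearity, planarity, sphericity), each in [0, 1].

    points : (N, 3) array of x, y, z returns, e.g. one segmented tree.
    k      : neighborhood size; a tunable assumption, not the paper's value.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)            # k nearest neighbors per point
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)            # 3x3 covariance of neighborhood
        ev = np.linalg.eigvalsh(cov)[::-1]      # eigenvalues, descending
        ev = np.clip(ev, 1e-12, None)           # guard against degenerate sets
        l1, l2, l3 = ev
        feats[i] = [(l1 - l2) / l1,             # linearity
                    (l2 - l3) / l1,             # planarity
                    l3 / l1]                    # sphericity
    return feats

# Example with synthetic points; real use would load a segmented tree's cloud.
pts = np.random.rand(1000, 3)
rgb = eigenfeatures(pts)  # three channels, e.g. mapped to an RGB image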
Journal Introduction:
Remote Sensing in Ecology and Conservation provides a forum for rapid, peer-reviewed publication of novel, multidisciplinary research at the interface between remote sensing science and ecology and conservation. The journal prioritizes findings that advance the scientific basis of ecology and conservation, promoting the development of remote-sensing based methods relevant to the management of land use and biological systems at all levels, from populations and species to ecosystems and biomes. The journal defines remote sensing in its broadest sense, including data acquisition by hand-held and fixed ground-based sensors, such as camera traps and acoustic recorders, and sensors on airplanes and satellites. The journal's intended audience includes ecologists, conservation scientists, policy makers, managers of terrestrial and aquatic systems, remote sensing scientists, and students.
Remote Sensing in Ecology and Conservation is a fully open access journal from Wiley and the Zoological Society of London. Remote sensing has enormous potential to provide information on the state of, and pressures on, biological diversity and ecosystem services, at multiple spatial and temporal scales. This publication provides a forum for multidisciplinary research in remote sensing science, ecological research and conservation science.