Michael J. Campbell, Jessie F. Eastburn, Simon C. Brewer, Philip E. Dennison
{"title":"深度学习和机器学习可以实现大尺度林地高度、覆盖度和生物量的测绘","authors":"Michael J. Campbell , Jessie F. Eastburn , Simon C. Brewer , Philip E. Dennison","doi":"10.1016/j.isprsjprs.2025.05.016","DOIUrl":null,"url":null,"abstract":"<div><div>Accurate, spatially explicit quantification of vegetation structure in drylands can improve our understanding of the important role that these critical ecosystems play in the Earth system. In semiarid woodland settings, remote sensing of vegetation structure is challenging due to low tree height, cover, and greenness as well as limited spatial and temporal availability of airborne lidar data. These limitations have hindered the development of remote sensing applications in globally widespread and ecologically important dryland systems. In this study, we implement a U-Net convolutional neural network capable of predicting per-pixel, lidar-derived vegetation height in piñon-juniper woodlands using widely available, high-resolution aerial imagery. We used this imagery and modeled canopy height data to construct random forest models for predicting tree density, canopy cover, and live aboveground biomass. Trained and validated on a field dataset that spanned diverse portions of the vast range of piñon-juniper woodlands in the southwestern US, our models demonstrated high performance according to both variance explained (R<sup>2</sup><sub>density</sub> = 0.45; R<sup>2</sup><sub>cover</sub> = 0.80; R<sup>2</sup><sub>biomass</sub> = 0.61) and predictive error (%RMSE<sub>density</sub> = 57; %RMSE<sub>cover</sub> = 19; %RMSE<sub>biomass</sub> = 42). A comparative analysis revealed that, while performance was somewhat lower than models driven solely by airborne lidar, they vastly exceeded that of models driven by aerial imagery alone or a combination of Landsat, topography, and climate data. Although the structural predictive maps featured some artifacts from illumination and perspective differences inherent to aerial imagery, this workflow represents a viable pathway for spatially exhaustive and temporally consistent vegetation structure mapping in piñon-juniper and other dry woodland ecosystems.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"226 ","pages":"Pages 187-203"},"PeriodicalIF":10.6000,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep learning and machine learning enable broad-scale woodland height, cover, and biomass mapping\",\"authors\":\"Michael J. Campbell , Jessie F. Eastburn , Simon C. Brewer , Philip E. Dennison\",\"doi\":\"10.1016/j.isprsjprs.2025.05.016\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Accurate, spatially explicit quantification of vegetation structure in drylands can improve our understanding of the important role that these critical ecosystems play in the Earth system. In semiarid woodland settings, remote sensing of vegetation structure is challenging due to low tree height, cover, and greenness as well as limited spatial and temporal availability of airborne lidar data. These limitations have hindered the development of remote sensing applications in globally widespread and ecologically important dryland systems. In this study, we implement a U-Net convolutional neural network capable of predicting per-pixel, lidar-derived vegetation height in piñon-juniper woodlands using widely available, high-resolution aerial imagery. 
We used this imagery and modeled canopy height data to construct random forest models for predicting tree density, canopy cover, and live aboveground biomass. Trained and validated on a field dataset that spanned diverse portions of the vast range of piñon-juniper woodlands in the southwestern US, our models demonstrated high performance according to both variance explained (R<sup>2</sup><sub>density</sub> = 0.45; R<sup>2</sup><sub>cover</sub> = 0.80; R<sup>2</sup><sub>biomass</sub> = 0.61) and predictive error (%RMSE<sub>density</sub> = 57; %RMSE<sub>cover</sub> = 19; %RMSE<sub>biomass</sub> = 42). A comparative analysis revealed that, while performance was somewhat lower than models driven solely by airborne lidar, they vastly exceeded that of models driven by aerial imagery alone or a combination of Landsat, topography, and climate data. Although the structural predictive maps featured some artifacts from illumination and perspective differences inherent to aerial imagery, this workflow represents a viable pathway for spatially exhaustive and temporally consistent vegetation structure mapping in piñon-juniper and other dry woodland ecosystems.</div></div>\",\"PeriodicalId\":50269,\"journal\":{\"name\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"volume\":\"226 \",\"pages\":\"Pages 187-203\"},\"PeriodicalIF\":10.6000,\"publicationDate\":\"2025-05-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0924271625001972\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271625001972","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Deep learning and machine learning enable broad-scale woodland height, cover, and biomass mapping
Accurate, spatially explicit quantification of vegetation structure in drylands can improve our understanding of the important role that these critical ecosystems play in the Earth system. In semiarid woodland settings, remote sensing of vegetation structure is challenging due to low tree height, cover, and greenness, as well as the limited spatial and temporal availability of airborne lidar data. These limitations have hindered the development of remote sensing applications in globally widespread and ecologically important dryland systems. In this study, we implement a U-Net convolutional neural network capable of predicting per-pixel, lidar-derived vegetation height in piñon-juniper woodlands using widely available, high-resolution aerial imagery. We used this imagery and modeled canopy height data to construct random forest models for predicting tree density, canopy cover, and live aboveground biomass. Trained and validated on a field dataset spanning diverse portions of the vast range of piñon-juniper woodlands in the southwestern US, our models demonstrated high performance in terms of both variance explained (R² = 0.45 for density, 0.80 for cover, and 0.61 for biomass) and predictive error (%RMSE = 57 for density, 19 for cover, and 42 for biomass). A comparative analysis revealed that, while their performance was somewhat lower than that of models driven solely by airborne lidar, it vastly exceeded that of models driven by aerial imagery alone or by a combination of Landsat, topography, and climate data. Although the structural predictive maps featured some artifacts from the illumination and perspective differences inherent to aerial imagery, this workflow represents a viable pathway for spatially exhaustive and temporally consistent vegetation structure mapping in piñon-juniper and other dry woodland ecosystems.
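For illustration, below is a minimal, hypothetical sketch of the two-stage workflow the abstract describes: a U-Net that regresses per-pixel canopy height from multi-band aerial imagery, followed by a random forest that predicts a plot-level structure metric (here, biomass) from height- and imagery-derived features. The band count, layer widths, chip size, feature set, and training details are illustrative assumptions, not taken from the paper; the authors' actual architecture and predictors will differ.

```python
# Hypothetical sketch of the two-stage workflow described in the abstract:
# (1) a U-Net regresses per-pixel canopy height from aerial imagery,
# (2) a random forest predicts plot-level structure (e.g., biomass) from
#     imagery- and height-derived predictors.
# Layer sizes, band counts, and feature names are assumptions for demonstration.

import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetHeight(nn.Module):
    """Minimal U-Net mapping a 4-band aerial image chip (e.g., RGB + NIR)
    to a single-channel canopy-height raster. Depth is reduced for brevity."""
    def __init__(self, in_bands=4):
        super().__init__()
        self.enc1 = conv_block(in_bands, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel height regression output

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Stage 1: train against lidar-derived canopy height with an MSE loss.
model = UNetHeight()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.rand(2, 4, 256, 256)           # stand-in aerial image chips
y = torch.rand(2, 1, 256, 256) * 10.0    # stand-in lidar canopy height (m)
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()

# Stage 2: summarize the predicted height raster and imagery into plot-level
# features (e.g., mean/max height, spectral statistics), then fit a random
# forest against field measurements; density and cover would be fit analogously.
X_plots = np.random.rand(100, 6)   # placeholder plot-level predictor matrix
y_biomass = np.random.rand(100)    # placeholder field-measured biomass
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_plots, y_biomass)
```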
Journal introduction:
The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) is the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It serves as a platform for scientists and professionals worldwide who work in the various disciplines that use photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate the communication and dissemination of advancements in these disciplines while also serving as a comprehensive reference source and archive.
P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields.
In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.