Automated tree crown labeling with 3D radiative transfer modelling achieves human comparable performances for tree segmentation in semi-arid landscapes
Decai Jin, Jianbo Qi, Nathan Borges Gonçalves, Jifan Wei, Huaguo Huang, Yaozhong Pan
{"title":"Automated tree crown labeling with 3D radiative transfer modelling achieves human comparable performances for tree segmentation in semi-arid landscapes","authors":"Decai Jin , Jianbo Qi , Nathan Borges Gonçalves , Jifan Wei , Huaguo Huang , Yaozhong Pan","doi":"10.1016/j.jag.2024.104235","DOIUrl":null,"url":null,"abstract":"<div><div>Mapping tree crowns in arid or semi-arid areas, which cover around one-third of the Earth’s land surface, is a key methodology towards sustainable management of trees. Recent advances in deep learning have shown promising results for tree crown segmentation. However, a large amount of manually labeled data is still required. We here propose a novel method to delineate tree crowns from high resolution satellite imagery using deep learning trained with automatically generated labels from 3D radiative transfer modeling, intending to reduce human annotation significantly. The methodological steps consist of 1) simulating images with a 3D radiative transfer model, 2) image style transfer learning based on generative adversarial network (GAN) and 3) tree crown segmentation using U-net segmentation model. The delineation performances of the proposed method have been evaluated on a manually annotated dataset consisting of more than 40,000 tree crowns. Our approach, which relies solely on synthetic images, demonstrates high segmentation accuracy, with an F1 score exceeding 0.77 and an Intersection over Union (IoU) above 0.64. Particularly, it achieves impressive accuracy in extracting crown areas (r<sup>2</sup> greater than 0.87) and crown densities (r<sup>2</sup> greater than 0.72), comparable to that of a trained dataset with human annotations only. In this study, we demonstrated that the integration of a 3D radiative transfer model and GANs for automatically generating training labels can achieve performances comparable to human labeling, and can significantly reduce the time needed for manual labeling in remote sensing segmentation applications.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"134 ","pages":"Article 104235"},"PeriodicalIF":7.6000,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of applied earth observation and geoinformation : ITC journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1569843224005910","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"REMOTE SENSING","Score":null,"Total":0}
Abstract
Mapping tree crowns in arid and semi-arid areas, which cover around one-third of the Earth's land surface, is key to the sustainable management of trees. Recent advances in deep learning have shown promising results for tree crown segmentation, but a large amount of manually labeled data is still required. Here we propose a novel method to delineate tree crowns from high-resolution satellite imagery using deep learning trained on labels generated automatically by 3D radiative transfer modeling, with the aim of substantially reducing manual annotation. The method consists of three steps: 1) simulating images with a 3D radiative transfer model, 2) image style transfer based on a generative adversarial network (GAN), and 3) tree crown segmentation with a U-Net model. The delineation performance of the proposed method was evaluated on a manually annotated dataset of more than 40,000 tree crowns. Our approach, which relies solely on synthetic images, achieves high segmentation accuracy, with an F1 score exceeding 0.77 and an Intersection over Union (IoU) above 0.64. In particular, it extracts crown areas (r² > 0.87) and crown densities (r² > 0.72) with accuracy comparable to that of a model trained only on human annotations. These results demonstrate that combining a 3D radiative transfer model with GANs to generate training labels automatically can match the performance of human labeling and can significantly reduce the time needed for manual annotation in remote sensing segmentation applications.
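The abstract reports segmentation quality as an F1 score and an Intersection over Union (IoU) computed against manually annotated crowns. As a point of reference only (this is not the authors' evaluation code), the short NumPy sketch below shows how these two pixel-wise metrics are obtained from a predicted and a reference binary crown mask; the function name and the toy masks are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): pixel-wise IoU and F1 between a
# predicted binary crown mask and a reference mask, the two metrics reported
# in the abstract. Assumes the masks are NumPy arrays of identical shape.
import numpy as np

def iou_and_f1(pred: np.ndarray, ref: np.ndarray) -> tuple[float, float]:
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()    # crown pixels present in both masks
    fp = np.logical_and(pred, ~ref).sum()   # predicted crown over reference background
    fn = np.logical_and(~pred, ref).sum()   # reference crown pixels that were missed
    union = tp + fp + fn
    iou = tp / union if union else 1.0      # two empty masks agree perfectly
    f1 = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    return float(iou), float(f1)

# Toy example: a 2x2 reference crown and a prediction shifted one pixel to the right.
ref = np.zeros((4, 4), dtype=np.uint8)
ref[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 2:4] = 1
print(iou_and_f1(pred, ref))  # IoU = 2/6 ≈ 0.33, F1 = 4/8 = 0.50
```

For binary masks the two scores are directly related: F1 (the Dice coefficient) equals 2·IoU / (1 + IoU), so an IoU above 0.64 implies an F1 above roughly 0.78, consistent with the values reported in the abstract.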
Journal description:
The International Journal of Applied Earth Observation and Geoinformation publishes original papers that utilize earth observation data for natural resource and environmental inventory and management. These data primarily originate from remote sensing platforms, including satellites and aircraft, supplemented by surface and subsurface measurements. Addressing natural resources such as forests, agricultural land, soils, and water, as well as environmental concerns like biodiversity, land degradation, and hazards, the journal explores conceptual and data-driven approaches. It covers geoinformation themes like capturing, databasing, visualization, interpretation, data quality, and spatial uncertainty.