AI-based estimation of forest plant community composition from UAV imagery

Impact Factor 7.3 · JCR Q1 (Ecology) · CAS Zone 2 (Environmental Science & Ecology)
Lindo Nepi, Giacomo Quattrini, Simone Pesaresi, Adriano Mancini, Roberto Pierdicca
{"title":"AI-based estimation of forest plant community composition from UAV imagery","authors":"Lindo Nepi ,&nbsp;Giacomo Quattrini ,&nbsp;Simone Pesaresi ,&nbsp;Adriano Mancini ,&nbsp;Roberto Pierdicca","doi":"10.1016/j.ecoinf.2025.103199","DOIUrl":null,"url":null,"abstract":"<div><div>The spatial distribution and abundance of plant species are of critical importance for the identification of plant communities, the assessment of biodiversity, and the fulfilment of environmental policy requirements, such as those outlined in the Habitat Directive 92/43/EEC. Recent advancement in high-resolution drone imaging provides new opportunities for the identification of plant species, offering significant advantages over traditional expert-based methods, which, while accurate, are often time-consuming. This study utilizes deep learning models, namely Vision Transformer (VIT-B16 and VIT-H14) and Convolutional Neural Networks (VGG19 and Resnet101), to quantify the abundance of tree species from RGB images captured by drones in multiple areas of central Italy. The images were segmented into 256 × 256-pixel tiles to enable efficient computational analysis. Following a rigorous training and evaluation process, the ViT-H14 model was identified as the most effective approach, demonstrating an accuracy of over 0.93. The model’s efficacy was substantiated through a comparison with manual analyses conducted by botanical experts, utilising the Mantel Test. This analysis revealed a strong correlation (r =0.87), substantiating the model’s capacity to interpret forest images with a high degree of accuracy. These findings demonstrate the potential of deep learning models, particularly ViT-B16 and VIT-H14, for efficient and scalable ecological monitoring and biodiversity assessments.</div></div>","PeriodicalId":51024,"journal":{"name":"Ecological Informatics","volume":"90 ","pages":"Article 103199"},"PeriodicalIF":7.3000,"publicationDate":"2025-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ecological Informatics","FirstCategoryId":"93","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1574954125002080","RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ECOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

The spatial distribution and abundance of plant species are of critical importance for the identification of plant communities, the assessment of biodiversity, and the fulfilment of environmental policy requirements, such as those outlined in the Habitats Directive 92/43/EEC. Recent advances in high-resolution drone imaging provide new opportunities for the identification of plant species, offering significant advantages over traditional expert-based methods, which, while accurate, are often time-consuming. This study uses deep learning models, namely Vision Transformers (ViT-B16 and ViT-H14) and Convolutional Neural Networks (VGG19 and ResNet101), to quantify the abundance of tree species from RGB images captured by drones in multiple areas of central Italy. The images were segmented into 256 × 256-pixel tiles to enable efficient computational analysis. Following a rigorous training and evaluation process, the ViT-H14 model was identified as the most effective approach, achieving an accuracy of over 0.93. The model's efficacy was substantiated through a comparison with manual analyses conducted by botanical experts, using the Mantel test. This analysis revealed a strong correlation (r = 0.87), confirming the model's capacity to interpret forest images with a high degree of accuracy. These findings demonstrate the potential of deep learning models, particularly ViT-B16 and ViT-H14, for efficient and scalable ecological monitoring and biodiversity assessments.
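
The workflow described in the abstract (tiling UAV orthomosaics, classifying each tile, aggregating tile predictions into species abundances, and comparing the result to expert surveys with a Mantel test) can be illustrated with a minimal sketch. This is not the authors' implementation: the timm model identifier, the species list, the file path, and the use of top-1 tile counts with a Pearson-based permutation Mantel statistic are all illustrative assumptions.

```python
# Minimal sketch: tile an RGB orthomosaic, classify tiles with a ViT,
# aggregate predictions into a species-composition vector, and compare
# plot-by-plot dissimilarities with a permutation Mantel test.
# All names and paths below are illustrative, not the authors' pipeline.

import numpy as np
import torch
import timm
from PIL import Image
from torchvision import transforms

TILE = 256
SPECIES = ["Fagus sylvatica", "Quercus cerris", "Ostrya carpinifolia"]  # hypothetical classes


def tile_image(path, tile=TILE):
    """Split an RGB orthomosaic into non-overlapping tile x tile crops."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield img.crop((x, y, x + tile, y + tile))


def classify_tiles(path, model, preprocess, device="cpu"):
    """Return the relative abundance of each class among the classified tiles."""
    counts = np.zeros(len(SPECIES))
    model.eval()
    with torch.no_grad():
        for crop in tile_image(path):
            x = preprocess(crop).unsqueeze(0).to(device)
            pred = model(x).argmax(dim=1).item()
            counts[pred] += 1
    return counts / counts.sum()


def mantel_test(d1, d2, n_perm=9999, seed=None):
    """Pearson correlation between two square dissimilarity matrices, with a permutation p-value."""
    rng = np.random.default_rng(seed)
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)          # upper triangle, off-diagonal
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)            # permute rows and columns of one matrix
        if np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)


if __name__ == "__main__":
    # In practice a checkpoint fine-tuned on labeled tiles would be loaded here;
    # the untrained head below only makes the sketch runnable end to end.
    model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=len(SPECIES))
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
    ])
    abundance = classify_tiles("plot_orthomosaic.tif", model, preprocess)
    print(dict(zip(SPECIES, abundance.round(3))))
```

Running this per plot yields one composition vector per plot; pairwise dissimilarities (e.g. Bray-Curtis) between those vectors, computed for both the model output and the expert surveys, would give the two matrices fed to `mantel_test`.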

Source journal: Ecological Informatics (Environmental Science & Ecology)
CiteScore: 8.30
Self-citation rate: 11.80%
Articles per year: 346
Review time: 46 days
Journal description: The journal Ecological Informatics is devoted to the publication of high-quality, peer-reviewed articles on all aspects of computational ecology, data science and biogeography. The scope of the journal takes into account the data-intensive nature of ecology, the growing capacity of information technology to access, harness and leverage complex data, as well as the critical need to inform sustainable management in view of global environmental and climate change. The journal is interdisciplinary, at the crossover between ecology and informatics. It focuses on novel concepts and techniques for image- and genome-based monitoring and interpretation, sensor- and multimedia-based data acquisition, internet-based data archiving and sharing, data assimilation, and the modelling and prediction of ecological data.