Transfer learning from citizen science photographs enables plant species identification in UAV imagery

Salim Soltani, Hannes Feilhauer, Robbert Duker, Teja Kattenborn
{"title":"Transfer learning from citizen science photographs enables plant species identification in UAV imagery","authors":"Salim Soltani ,&nbsp;Hannes Feilhauer ,&nbsp;Robbert Duker ,&nbsp;Teja Kattenborn","doi":"10.1016/j.ophoto.2022.100016","DOIUrl":null,"url":null,"abstract":"<div><p>Accurate information on the spatial distribution of plant species and communities is in high demand for various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that Convolutional Neural Networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular with data at the centimeter scale acquired with Unoccupied Aerial Vehicles (UAV). However, such tasks often require ample training data, which is commonly generated in the field via geocoded in-situ observations or labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is given by using knowledge on the appearance of plants in the form of plant photographs from citizen science projects such as the iNaturalist database. Such crowd-sourced plant photographs typically exhibit very different perspectives and great heterogeneity in various aspects, yet the sheer volume of data could reveal great potential for application to bird’s eye views from remote sensing platforms. Here, we explore the potential of transfer learning from such a crowd-sourced data treasure to the remote sensing context. Therefore, we investigate firstly, if we can use crowd-sourced plant photographs for CNN training and subsequent mapping of plant species in high-resolution remote sensing imagery. Secondly, we test if the predictive performance can be increased by a priori selecting photographs that share a more similar perspective to the remote sensing data. 
We used two case studies to test our proposed approach with multiple RGB orthoimages acquired from UAV with the target plant species <em>Fallopia japonica</em> and <em>Portulacaria afra</em> respectively. Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by acquisition properties increased the predictive performance. This study demonstrates that citizen science data can effectively anticipate a common bottleneck for vegetation assessments and provides an example on how we can effectively harness the ever-increasing availability of crowd-sourced and big data for remote sensing applications.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"5 ","pages":"Article 100016"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393222000059/pdfft?md5=75907267adbd64f9e59415290458683d&pid=1-s2.0-S2667393222000059-main.pdf","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Open Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667393222000059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 3

Abstract

Accurate information on the spatial distribution of plant species and communities is in high demand for various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that Convolutional Neural Networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular with data at the centimeter scale acquired with Unoccupied Aerial Vehicles (UAV). However, such tasks often require ample training data, which is commonly generated in the field via geocoded in-situ observations or by labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is knowledge on the appearance of plants in the form of plant photographs from citizen science projects such as the iNaturalist database. Such crowd-sourced plant photographs typically exhibit very different perspectives and great heterogeneity in various aspects, yet the sheer volume of data could reveal great potential for application to bird's-eye views from remote sensing platforms. Here, we explore the potential of transfer learning from such a crowd-sourced data treasure to the remote sensing context. We therefore investigate, first, whether crowd-sourced plant photographs can be used for CNN training and subsequent mapping of plant species in high-resolution remote sensing imagery. Second, we test whether the predictive performance can be increased by a priori selecting photographs that share a perspective more similar to the remote sensing data. We used two case studies to test our proposed approach, with multiple UAV-acquired RGB orthoimages targeting the plant species Fallopia japonica and Portulacaria afra, respectively.
Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by acquisition properties increased the predictive performance. This study demonstrates that citizen science data can effectively alleviate a common bottleneck for vegetation assessments and provides an example of how the ever-increasing availability of crowd-sourced and big data can be harnessed for remote sensing applications.
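The paper's own pipeline is not reproduced on this page, but the mapping step it describes — applying a CNN trained on ground-level photographs to a UAV orthoimage — typically works by cutting the orthoimage into fixed-size tiles, classifying each tile, and writing the scores back into a map. The following is a minimal sketch of that tiling/stitching step only; all function names, the tile size, and the stub classifier are assumptions, not the authors' code.

```python
import numpy as np

def tile_orthoimage(ortho, tile=128, stride=128):
    """Cut an RGB orthoimage (H, W, 3) into square tiles for per-tile
    CNN prediction. Tiles extending past the border are skipped,
    mirroring the common practice of predicting on full tiles only."""
    h, w, _ = ortho.shape
    tiles, coords = [], []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            tiles.append(ortho[y:y + tile, x:x + tile])
            coords.append((y, x))
    return np.stack(tiles), coords

def stitch_predictions(scores, coords, shape, tile=128):
    """Write per-tile species scores back into a map covering the
    orthoimage extent (non-overlapping tiles assumed here)."""
    out = np.zeros(shape[:2], dtype=float)
    for s, (y, x) in zip(scores, coords):
        out[y:y + tile, x:x + tile] = s
    return out

# Example: a 512 x 384 px orthoimage yields 4 x 3 = 12 tiles.
ortho = np.zeros((384, 512, 3), dtype=np.uint8)
tiles, coords = tile_orthoimage(ortho)
scores = np.ones(len(tiles))          # stand-in for CNN class scores
species_map = stitch_predictions(scores, coords, ortho.shape)
print(tiles.shape)  # (12, 128, 128, 3)
```

In the actual study the stand-in scores would come from a CNN fine-tuned on the filtered iNaturalist photographs; with overlapping strides one would average scores per pixel instead of overwriting them.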
