Automating sandhill crane counts from nocturnal thermal aerial imagery using deep learning
Emilio Luz-Ricca, K. Landolt, B. Pickens, M. Koneff
Remote Sensing in Ecology and Conservation, published 2022-10-18. DOI: 10.1002/rse2.301
Abstract
Population monitoring is essential to management and conservation efforts for migratory birds, but traditional low-altitude aerial surveys with human observers are plagued by individual observer bias and risk to flight crews. Aerial surveys that use remote sensing can reduce bias and risk, but manual counting of wildlife in imagery is laborious and may be cost-prohibitive. Therefore, automated methods for counting are critical to cost-efficient application of remote sensing for wildlife surveys covering large areas. We conducted nocturnal surveys of sandhill cranes (Antigone canadensis) during spring migration in the Central Platte River Valley of Nebraska, USA, using midwave thermal infrared sensors. We developed a framework for automated counting of sandhill cranes from thermal imagery using deep learning, assessed and compared the performance of two automated counting models, and quantified the effect of spatial resolution on counting accuracy. Aerial thermal imagery data were collected in March 2018 and 2021; 40 images were analyzed. We applied two deep learning models: an object detection approach, Faster R-CNN, and a recently developed pixel-density estimation approach, ASPDNet. Model performance was determined using data independent of the training imagery. The effect of spatial resolution was quantified with a beta regression on relative error. Our results showed model accuracy of 9% mean percent error for ASPDNet and 18% for Faster R-CNN. Most error was related to the undercounting of sandhill cranes. ASPDNet had <50% of the error of Faster R-CNN as measured by mean percent error, root-mean-squared error and mean absolute error. Spatial resolution affected accuracy of both models, with error rate increasing with coarser resolution, particularly with Faster R-CNN. Deep learning models, particularly pixel-density estimators, can accurately automate counting of migratory birds in a dense, aggregate setting such as nocturnal roosting sites.
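The abstract compares the two models using three standard count-error metrics: mean percent error, root-mean-squared error, and mean absolute error. As a minimal sketch of how those metrics relate, the following computes each from hypothetical per-image crane counts; the counts and function names are illustrative assumptions, not the study's data or code.

```python
# Sketch of the three error metrics named in the abstract (MPE, RMSE, MAE),
# applied to hypothetical per-image sandhill crane counts. The true/predicted
# values below are made up for illustration, not the study's data.
import numpy as np

def mean_percent_error(true, pred):
    """Mean per-image absolute error as a percentage of the true count."""
    true, pred = np.asarray(true, float), np.asarray(pred, float)
    return float(100.0 * np.mean(np.abs(pred - true) / true))

def rmse(true, pred):
    """Root-mean-squared error over per-image counts."""
    true, pred = np.asarray(true, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def mae(true, pred):
    """Mean absolute error over per-image counts."""
    true, pred = np.asarray(true, float), np.asarray(pred, float)
    return float(np.mean(np.abs(pred - true)))

# Hypothetical counts for four thermal images; predictions are lower than
# truth, mirroring the abstract's finding that most error was undercounting.
true_counts = [1200, 850, 430, 2100]
pred_counts = [1100, 800, 400, 1950]

print(mean_percent_error(true_counts, pred_counts))
print(rmse(true_counts, pred_counts))
print(mae(true_counts, pred_counts))
```

Because MPE normalizes by the true count per image, it weights small roosts as heavily as large ones, while RMSE and MAE are dominated by the absolute miscounts on the largest aggregations; reporting all three, as the study does, guards against either distortion.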
Journal description:
Remote Sensing in Ecology and Conservation provides a forum for rapid, peer-reviewed publication of novel, multidisciplinary research at the interface between remote sensing science and ecology and conservation. The journal prioritizes findings that advance the scientific basis of ecology and conservation, promoting the development of remote sensing-based methods relevant to the management of land use and biological systems at all levels, from populations and species to ecosystems and biomes. The journal defines remote sensing in its broadest sense, including data acquisition by hand-held and fixed ground-based sensors, such as camera traps and acoustic recorders, and sensors on airplanes and satellites. The journal's intended audience includes ecologists, conservation scientists, policy makers, managers of terrestrial and aquatic systems, remote sensing scientists, and students.
Remote Sensing in Ecology and Conservation is a fully open access journal from Wiley and the Zoological Society of London. Remote sensing has enormous potential to provide information on the state of, and pressures on, biological diversity and ecosystem services at multiple spatial and temporal scales. This publication provides a forum for multidisciplinary research in remote sensing science, ecological research, and conservation science.