Automated visitor and wildlife monitoring with camera traps and machine learning

Impact Factor 3.9 · CAS Zone 2 (Environmental Science & Ecology) · JCR Q1 (Ecology)
Veronika Mitterwallner, A. Peters, Hendrik Edelhoff, Gregor H. Mathes, Hien Nguyen, W. Peters, M. Heurich, M. Steinbauer
{"title":"Automated visitor and wildlife monitoring with camera traps and machine learning","authors":"Veronika Mitterwallner, A. Peters, Hendrik Edelhoff, Gregor H. Mathes, Hien Nguyen, W. Peters, M. Heurich, M. Steinbauer","doi":"10.1002/rse2.367","DOIUrl":null,"url":null,"abstract":"As human activities in natural areas increase, understanding human–wildlife interactions is crucial. Big data approaches, like large‐scale camera trap studies, are becoming more relevant for studying these interactions. In addition, open‐source object detection models are rapidly improving and have great potential to enhance the image processing of camera trap data from human and wildlife activities. In this study, we evaluate the performance of the open‐source object detection model MegaDetector in cross‐regional monitoring using camera traps. The performance at detecting and counting humans, animals and vehicles is evaluated by comparing the detection results with manual classifications of more than 300 000 camera trap images from three study regions. Moreover, we investigate structural patterns of misclassification and evaluate the results of the detection model for typical temporal analyses conducted in ecological research. Overall, the accuracy of the detection model was very high with 96.0% accuracy for animals, 93.8% for persons and 99.3% for vehicles. Results reveal systematic patterns in misclassifications that can be automatically identified and removed. In addition, we show that the detection model can be readily used to count people and animals on images with underestimating persons by −0.05, vehicles by −0.01 and animals by −0.01 counts per image. Most importantly, the temporal pattern in a long‐term time series of manually classified human and wildlife activities was highly correlated with classification results of the detection model (Pearson's r = 0.996, p < 0.001) and diurnal kernel densities of activities were almost equivalent for manual and automated classification. The results thus prove the overall applicability of the detection model in the image classification process of cross‐regional camera trap studies without further manual intervention. Besides the great acceleration in processing speed, the model is also suitable for long‐term monitoring and allows reproducibility in scientific studies while complying with privacy regulations.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":null,"pages":null},"PeriodicalIF":3.9000,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Remote Sensing in Ecology and Conservation","FirstCategoryId":"93","ListUrlMain":"https://doi.org/10.1002/rse2.367","RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ECOLOGY","Score":null,"Total":0}
引用次数: 0

Abstract

As human activities in natural areas increase, understanding human–wildlife interactions is crucial. Big data approaches, like large-scale camera trap studies, are becoming more relevant for studying these interactions. In addition, open-source object detection models are rapidly improving and have great potential to enhance the image processing of camera trap data on human and wildlife activities. In this study, we evaluate the performance of the open-source object detection model MegaDetector in cross-regional monitoring with camera traps. Performance in detecting and counting humans, animals and vehicles is evaluated by comparing the detection results with manual classifications of more than 300 000 camera trap images from three study regions. Moreover, we investigate structural patterns of misclassification and evaluate the results of the detection model for typical temporal analyses conducted in ecological research. Overall, the accuracy of the detection model was very high, with 96.0% accuracy for animals, 93.8% for persons and 99.3% for vehicles. Results reveal systematic patterns in misclassifications that can be automatically identified and removed. In addition, we show that the detection model can readily be used to count people and animals on images, underestimating persons by 0.05, vehicles by 0.01 and animals by 0.01 counts per image on average. Most importantly, the temporal pattern in a long-term time series of manually classified human and wildlife activities was highly correlated with the classification results of the detection model (Pearson's r = 0.996, p < 0.001), and diurnal kernel densities of activities were almost equivalent for manual and automated classification. The results thus demonstrate the overall applicability of the detection model in the image classification process of cross-regional camera trap studies without further manual intervention. Besides greatly accelerating processing, the model is also suitable for long-term monitoring and allows reproducibility in scientific studies while complying with privacy regulations.
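To make the evaluation workflow concrete, the following minimal Python sketch (not the authors' code) shows how per-image counts of animals, persons and vehicles could be derived from MegaDetector's batch-output JSON and compared against manual classifications using Pearson correlation and a mean count bias. The 0.8 confidence threshold, the file names and the layout of the manual-classification CSV are illustrative assumptions.

```python
# Minimal sketch: reduce MegaDetector batch output to per-image counts and
# compare them with manual classifications. Assumes the standard MegaDetector
# batch JSON layout ("images" list with "detections", plus a
# "detection_categories" map); paths, threshold and CSV columns are
# hypothetical placeholders.
import json
from collections import Counter

import pandas as pd
from scipy.stats import pearsonr

CONF_THRESHOLD = 0.8  # assumed cut-off; the study's threshold may differ


def count_detections(megadetector_json: str) -> pd.DataFrame:
    """Return one row per image with counts of animals, persons and vehicles."""
    with open(megadetector_json) as f:
        result = json.load(f)
    categories = result["detection_categories"]  # e.g. {"1": "animal", "2": "person", "3": "vehicle"}
    rows = []
    for image in result["images"]:
        counts = Counter()
        for det in image.get("detections") or []:
            if det["conf"] >= CONF_THRESHOLD:
                counts[categories[det["category"]]] += 1
        rows.append({"file": image["file"],
                     "animal": counts["animal"],
                     "person": counts["person"],
                     "vehicle": counts["vehicle"]})
    return pd.DataFrame(rows)


# Compare automated and manual counts, e.g. for persons (hypothetical files,
# manual CSV assumed to have columns: file, animal, person, vehicle).
auto = count_detections("megadetector_output.json")
manual = pd.read_csv("manual_classifications.csv")
merged = auto.merge(manual, on="file", suffixes=("_auto", "_manual"))
r, p = pearsonr(merged["person_auto"], merged["person_manual"])
print(f"Pearson's r = {r:.3f} (p = {p:.3g})")
print("mean count bias (auto - manual):",
      (merged["person_auto"] - merged["person_manual"]).mean())
```

Aggregating the same per-image counts by hour of day would likewise allow the diurnal activity densities of manual and automated classifications to be compared.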
Source journal
Remote Sensing in Ecology and Conservation
Subject areas: Earth and Planetary Sciences – Computers in Earth Sciences
CiteScore: 9.80
Self-citation rate: 5.50%
Articles per year: 69
Review time: 18 weeks
About the journal: Remote Sensing in Ecology and Conservation provides a forum for rapid, peer-reviewed publication of novel, multidisciplinary research at the interface between remote sensing science and ecology and conservation. The journal prioritizes findings that advance the scientific basis of ecology and conservation, promoting the development of remote-sensing-based methods relevant to the management of land use and biological systems at all levels, from populations and species to ecosystems and biomes. The journal defines remote sensing in its broadest sense, including data acquisition by hand-held and fixed ground-based sensors, such as camera traps and acoustic recorders, and sensors on airplanes and satellites. The journal's intended audience includes ecologists, conservation scientists, policy makers, managers of terrestrial and aquatic systems, remote sensing scientists, and students. Remote Sensing in Ecology and Conservation is a fully open access journal from Wiley and the Zoological Society of London. Remote sensing has enormous potential to provide information on the state of, and pressures on, biological diversity and ecosystem services at multiple spatial and temporal scales. The journal provides a forum for multidisciplinary research in remote sensing science, ecological research and conservation science.