DuckNet: an open‐source deep learning tool for waterfowl species identification in UAV imagery

Impact Factor 4.3 · JCR Q1 (Ecology) · CAS Zone 2 (Environmental Science & Ecology)
Zack Loken, Kevin M. Ringelman, Anne Mini, J. Dale James, Mike Mitchell
Remote Sensing in Ecology and Conservation · DOI: 10.1002/rse2.70028 · Published 2025-09-18
Citations: 0

Abstract

Understanding how waterfowl respond to habitat restoration and management activities is crucial for evaluating and refining conservation delivery programs. However, site‐specific waterfowl monitoring is challenging, especially in heavily forested systems such as the Mississippi Alluvial Valley (MAV)—a primary wintering region for waterfowl in North America. We hypothesized that using uncrewed aerial vehicles (UAVs) coupled with deep learning‐based methods for object detection would provide an efficient and effective means for surveying non‐breeding waterfowl on difficult‐to‐access restored wetland sites. Accordingly, during the winters of 2021 and 2022, we surveyed wetland restoration easements in the MAV using a UAV equipped with a dual thermal‐RGB high‐resolution sensor to collect 2360 digital images of non‐breeding waterfowl. We then developed, optimized, and trained a RetinaNet object detection model with a ResNet‐50 backbone to locate and identify seven species of waterfowl drakes, waterfowl hens, and one species of waterbird in the UAV imagery. The final model achieved an average precision and average recall of 88.1% (class ranges from 68.8 to 99.6%) and 89.0% (class ranges from 70.0 to 100%), respectively, at an intersection‐over‐union of 0.5. This study successfully surveys non‐breeding waterfowl in structurally complex and difficult‐to‐access habitats using UAVs and, furthermore, provides a functional, open‐source, deep learning‐based object detection framework (DuckNet) for automated detection of waterfowl in UAV imagery. DuckNet provides a user‐friendly interface for running inference on custom images using the model developed here and, additionally, allows users to fine‐tune the model on custom datasets to expand the number of species classes the model can detect.
This framework provides managers with an efficient and cost‐effective means to count waterfowl on project sites, thereby improving their capacity to evaluate waterfowl response to wetland restoration efforts.
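The reported precision and recall are computed at an intersection-over-union (IoU) threshold of 0.5. As an illustrative sketch only (not DuckNet's actual evaluation code; the box format, function names, and greedy matching scheme are assumptions), detections can be scored against ground-truth boxes at that threshold like this:

```python
# Illustrative sketch of detection scoring at IoU 0.5 -- the matching
# criterion the paper reports its average precision/recall at. Boxes are
# assumed to be (x_min, y_min, x_max, y_max) in pixels.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned bounding boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width/height of the overlap rectangle (zero if boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def match_detections(detections, ground_truth, iou_thresh=0.5):
    """Greedily match confidence-sorted detections to ground-truth boxes.

    detections: list of (confidence, box); ground_truth: list of boxes.
    Returns (true_positives, false_positives, false_negatives).
    """
    unmatched = list(range(len(ground_truth)))
    tp = fp = 0
    for _, box in sorted(detections, key=lambda d: -d[0]):
        best_iou, best_gt = 0.0, None
        for gt_idx in unmatched:
            overlap = iou(box, ground_truth[gt_idx])
            if overlap > best_iou:
                best_iou, best_gt = overlap, gt_idx
        if best_iou >= iou_thresh:
            tp += 1
            unmatched.remove(best_gt)  # each ground truth matched once
        else:
            fp += 1
    return tp, fp, len(unmatched)
```

Precision is then `tp / (tp + fp)` and recall is `tp / (tp + fn)` per class; averaging these across the eight classes gives figures analogous to those reported.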
Source journal

Remote Sensing in Ecology and Conservation (Earth and Planetary Sciences – Computers in Earth Sciences)
CiteScore: 9.80
Self-citation rate: 5.50%
Articles per year: 69
Review time: 18 weeks
Journal description: Remote Sensing in Ecology and Conservation provides a forum for rapid, peer-reviewed publication of novel, multidisciplinary research at the interface between remote sensing science and ecology and conservation. The journal prioritizes findings that advance the scientific basis of ecology and conservation, promoting the development of remote-sensing-based methods relevant to the management of land use and biological systems at all levels, from populations and species to ecosystems and biomes. The journal defines remote sensing in its broadest sense, including data acquisition by hand-held and fixed ground-based sensors, such as camera traps and acoustic recorders, and sensors on airplanes and satellites. The journal's intended audience includes ecologists, conservation scientists, policy makers, managers of terrestrial and aquatic systems, remote sensing scientists, and students. Remote Sensing in Ecology and Conservation is a fully open access journal from Wiley and the Zoological Society of London. Remote sensing has enormous potential to provide information on the state of, and pressures on, biological diversity and ecosystem services, at multiple spatial and temporal scales.