Zack Loken, Kevin M. Ringelman, Anne Mini, J. Dale James, Mike Mitchell

Remote Sensing in Ecology and Conservation | DOI: 10.1002/rse2.70028 | Published: 2025-09-18 | Journal Article | JCR: Q1 (Ecology)
DuckNet: an open‐source deep learning tool for waterfowl species identification in UAV imagery
Understanding how waterfowl respond to habitat restoration and management activities is crucial for evaluating and refining conservation delivery programs. However, site‐specific waterfowl monitoring is challenging, especially in heavily forested systems such as the Mississippi Alluvial Valley (MAV)—a primary wintering region for waterfowl in North America. We hypothesized that using uncrewed aerial vehicles (UAVs) coupled with deep learning‐based methods for object detection would provide an efficient and effective means for surveying non‐breeding waterfowl on difficult‐to‐access restored wetland sites. Accordingly, during the winters of 2021 and 2022, we surveyed wetland restoration easements in the MAV using a UAV equipped with a dual thermal‐RGB high‐resolution sensor to collect 2360 digital images of non‐breeding waterfowl. We then developed, optimized, and trained a RetinaNet object detection model with a ResNet‐50 backbone to locate and identify the drakes and hens of seven waterfowl species, plus one species of waterbird, in the UAV imagery. The final model achieved an average precision and average recall of 88.1% (per‐class range 68.8–99.6%) and 89.0% (per‐class range 70.0–100%), respectively, at an intersection‐over‐union threshold of 0.5. This study successfully surveys non‐breeding waterfowl in structurally complex and difficult‐to‐access habitats using a UAV and, furthermore, provides a functional, open‐source, deep learning‐based object detection framework (DuckNet) for automated detection of waterfowl in UAV imagery. DuckNet provides a user‐friendly interface for running inference on custom images using the model developed here and, additionally, allows users to fine‐tune the model on custom datasets to expand the number of species classes the model can detect.
This framework provides managers with an efficient and cost‐effective means to count waterfowl on project sites, thereby improving their capacity to evaluate waterfowl response to wetland restoration efforts.
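To make the reported metrics concrete: the average precision and recall figures above are computed by matching predicted bounding boxes to ground-truth boxes at an intersection-over-union (IoU) threshold of 0.5. The sketch below illustrates that matching logic in plain Python; the box format and function names are illustrative assumptions, not code from the DuckNet repository.

```python
# Hypothetical sketch of IoU-based detection scoring, the kind of matching
# behind "precision/recall at IoU 0.5". Boxes are (x1, y1, x2, y2) tuples.
# This is NOT DuckNet's implementation; names and structure are illustrative.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall(predictions, ground_truth, iou_threshold=0.5):
    """Greedily match each prediction to one unmatched ground-truth box.

    A prediction is a true positive if it overlaps an unmatched
    ground-truth box with IoU >= iou_threshold.
    """
    matched = set()
    true_positives = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(pred, gt) >= iou_threshold:
                matched.add(i)
                true_positives += 1
                break
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Toy example: two of three predictions overlap a ground-truth bird.
preds = [(0, 0, 10, 10), (20, 20, 30, 30), (50, 50, 60, 60)]
gts = [(1, 1, 11, 11), (20, 20, 30, 30)]
p, r = precision_recall(preds, gts)  # p = 2/3, r = 1.0
```

In a full COCO-style evaluation (as typically reported for RetinaNet models), this matching is additionally ordered by prediction confidence and averaged across classes, but the IoU-threshold matching shown here is the core of the computation.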
Journal Introduction:
Remote Sensing in Ecology and Conservation provides a forum for rapid, peer-reviewed publication of novel, multidisciplinary research at the interface between remote sensing science and ecology and conservation. The journal prioritizes findings that advance the scientific basis of ecology and conservation, promoting the development of remote sensing-based methods relevant to the management of land use and biological systems at all levels, from populations and species to ecosystems and biomes. The journal defines remote sensing in its broadest sense, including data acquisition by hand-held and fixed ground-based sensors, such as camera traps and acoustic recorders, and sensors on airplanes and satellites. The journal's intended audience includes ecologists, conservation scientists, policy makers, managers of terrestrial and aquatic systems, remote sensing scientists, and students.
Remote Sensing in Ecology and Conservation is a fully open access journal from Wiley and the Zoological Society of London. Remote sensing has enormous potential to provide information on the state of, and pressures on, biological diversity and ecosystem services, at multiple spatial and temporal scales. This publication provides a forum for multidisciplinary research in remote sensing science, ecological research, and conservation science.