C. R. Sharpe, R. A. Hill, H. M. Chappell, S. E. Green, K. Holden, P. Fergus, C. Chalmers, P. A. Stephens
{"title":"提高公民科学家对英国相机陷阱数据的人工智能准确性","authors":"C. R. Sharpe, R. A. Hill, H. M. Chappell, S. E. Green, K. Holden, P. Fergus, C. Chalmers, P. A. Stephens","doi":"10.1002/rse2.70012","DOIUrl":null,"url":null,"abstract":"As camera traps have become more widely used, extracting information from images at the pace they are acquired has become challenging, resulting in backlogs that delay the communication of results and the use of data for conservation and management. To ameliorate this, artificial intelligence (AI), crowdsourcing to citizen scientists and combined approaches have surfaced as solutions. Using data from the UK mammal monitoring initiative MammalWeb, we assess the accuracies of classifications from registered citizen scientists, anonymous participants and a convolutional neural network (CNN). The engagement of anonymous volunteers was facilitated by the strategic placement of MammalWeb interfaces in a natural history museum with high footfall related to the ‘Dippy on Tour’ exhibition. The accuracy of anonymous volunteer classifications gathered through public interfaces has not been reported previously, and here we consider this form of citizen science in the context of alternative forms of data acquisition. While AI models have performed well at species identification in bespoke settings, here we report model performance on a dataset for which the model in question was not explicitly trained. We also consider combining AI output with that of human volunteers to demonstrate combined workflows that produce high accuracy predictions. We find the consensus of registered users has greater overall accuracy (97%) than the consensus from anonymous contributors (71%); AI accuracy lies in between (78%). A combined approach between registered citizen scientists and AI output provides an overall accuracy of 96%. Further, when the contributions of anonymous citizen scientists are concordant with AI output, 98% accuracy can be achieved. The generality of this last finding merits further investigation, given the potential to gather classifications much more rapidly if public displays are placed in areas of high footfall. We suggest that combined approaches to image classification are optimal when the minimisation of classification errors is desired.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"18 1","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Increasing citizen scientist accuracy with artificial intelligence on UK camera‐trap data\",\"authors\":\"C. R. Sharpe, R. A. Hill, H. M. Chappell, S. E. Green, K. Holden, P. Fergus, C. Chalmers, P. A. Stephens\",\"doi\":\"10.1002/rse2.70012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As camera traps have become more widely used, extracting information from images at the pace they are acquired has become challenging, resulting in backlogs that delay the communication of results and the use of data for conservation and management. To ameliorate this, artificial intelligence (AI), crowdsourcing to citizen scientists and combined approaches have surfaced as solutions. Using data from the UK mammal monitoring initiative MammalWeb, we assess the accuracies of classifications from registered citizen scientists, anonymous participants and a convolutional neural network (CNN). 
The engagement of anonymous volunteers was facilitated by the strategic placement of MammalWeb interfaces in a natural history museum with high footfall related to the ‘Dippy on Tour’ exhibition. The accuracy of anonymous volunteer classifications gathered through public interfaces has not been reported previously, and here we consider this form of citizen science in the context of alternative forms of data acquisition. While AI models have performed well at species identification in bespoke settings, here we report model performance on a dataset for which the model in question was not explicitly trained. We also consider combining AI output with that of human volunteers to demonstrate combined workflows that produce high accuracy predictions. We find the consensus of registered users has greater overall accuracy (97%) than the consensus from anonymous contributors (71%); AI accuracy lies in between (78%). A combined approach between registered citizen scientists and AI output provides an overall accuracy of 96%. Further, when the contributions of anonymous citizen scientists are concordant with AI output, 98% accuracy can be achieved. The generality of this last finding merits further investigation, given the potential to gather classifications much more rapidly if public displays are placed in areas of high footfall. We suggest that combined approaches to image classification are optimal when the minimisation of classification errors is desired.\",\"PeriodicalId\":21132,\"journal\":{\"name\":\"Remote Sensing in Ecology and Conservation\",\"volume\":\"18 1\",\"pages\":\"\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-05-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Remote Sensing in Ecology and Conservation\",\"FirstCategoryId\":\"93\",\"ListUrlMain\":\"https://doi.org/10.1002/rse2.70012\",\"RegionNum\":2,\"RegionCategory\":\"环境科学与生态学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ECOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Remote Sensing in Ecology and Conservation","FirstCategoryId":"93","ListUrlMain":"https://doi.org/10.1002/rse2.70012","RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ECOLOGY","Score":null,"Total":0}
Citations: 0
Increasing citizen scientist accuracy with artificial intelligence on UK camera‐trap data
Abstract
As camera traps have become more widely used, extracting information from images at the pace they are acquired has become challenging, resulting in backlogs that delay the communication of results and the use of data for conservation and management. To ameliorate this, artificial intelligence (AI), crowdsourcing to citizen scientists and combined approaches have surfaced as solutions. Using data from the UK mammal monitoring initiative MammalWeb, we assess the accuracies of classifications from registered citizen scientists, anonymous participants and a convolutional neural network (CNN). The engagement of anonymous volunteers was facilitated by the strategic placement of MammalWeb interfaces in a natural history museum with high footfall related to the ‘Dippy on Tour’ exhibition. The accuracy of anonymous volunteer classifications gathered through public interfaces has not been reported previously, and here we consider this form of citizen science in the context of alternative forms of data acquisition. While AI models have performed well at species identification in bespoke settings, here we report model performance on a dataset for which the model in question was not explicitly trained. We also consider combining AI output with that of human volunteers to demonstrate combined workflows that produce high accuracy predictions. We find the consensus of registered users has greater overall accuracy (97%) than the consensus from anonymous contributors (71%); AI accuracy lies in between (78%). A combined approach between registered citizen scientists and AI output provides an overall accuracy of 96%. Further, when the contributions of anonymous citizen scientists are concordant with AI output, 98% accuracy can be achieved. The generality of this last finding merits further investigation, given the potential to gather classifications much more rapidly if public displays are placed in areas of high footfall. We suggest that combined approaches to image classification are optimal when the minimisation of classification errors is desired.
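The abstract describes a combined workflow in which a classification is accepted when the AI output and the volunteer consensus concur, reporting 98% accuracy under that concordance rule. As a minimal sketch of what such a rule might look like, the snippet below accepts a species label only when the CNN prediction matches the majority vote of volunteers, and otherwise defers the image for further review; the function names, the simple majority-vote consensus, and the deferral step are illustrative assumptions, not details taken from the MammalWeb pipeline or the paper itself.

```python
# Illustrative sketch of an AI-plus-crowd concordance rule (assumed workflow,
# not the MammalWeb implementation): accept a label when the CNN prediction
# agrees with the volunteer consensus, otherwise defer the image.

from collections import Counter
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Decision:
    label: Optional[str]  # accepted species label, or None if deferred
    accepted: bool        # True when AI output and crowd consensus concur


def crowd_consensus(votes: List[str]) -> Optional[str]:
    """Return the most common label among volunteer votes, or None if there are no votes."""
    if not votes:
        return None
    return Counter(votes).most_common(1)[0][0]


def combine(ai_label: str, volunteer_votes: List[str]) -> Decision:
    """Accept a label only when the AI prediction matches the volunteer consensus."""
    consensus = crowd_consensus(volunteer_votes)
    if consensus is not None and consensus == ai_label:
        return Decision(label=consensus, accepted=True)
    # Disagreement (or no votes): defer to registered users or expert review.
    return Decision(label=None, accepted=False)


if __name__ == "__main__":
    print(combine("roe deer", ["roe deer", "roe deer", "red fox"]))  # accepted
    print(combine("badger", ["red fox", "red fox"]))                 # deferred
```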
Journal introduction:
Remote Sensing in Ecology and Conservation provides a forum for rapid, peer-reviewed publication of novel, multidisciplinary research at the interface between remote sensing science and ecology and conservation. The journal prioritizes findings that advance the scientific basis of ecology and conservation, promoting the development of remote-sensing based methods relevant to the management of land use and biological systems at all levels, from populations and species to ecosystems and biomes. The journal defines remote sensing in its broadest sense, including data acquisition by hand-held and fixed ground-based sensors, such as camera traps and acoustic recorders, and sensors on airplanes and satellites. The journal's intended audience includes ecologists, conservation scientists, policy makers, managers of terrestrial and aquatic systems, remote sensing scientists, and students.
Remote Sensing in Ecology and Conservation is a fully open access journal from Wiley and the Zoological Society of London. Remote sensing has enormous potential to provide information on the state of, and pressures on, biological diversity and ecosystem services, at multiple spatial and temporal scales. This new publication provides a forum for multidisciplinary research in remote sensing science, ecological research and conservation science.