Title: Big Bird: A global dataset of birds in drone imagery annotated to species level
Authors: Joshua P. Wilson, Tatsuya Amano, Thomas Bregnballe, Alejandro Corregidor‐Castro, Roxane Francis, Diego Gallego‐García, Jarrod C. Hodgson, Landon R. Jones, César R. Luque‐Fernández, Dominik Marchowski, John McEvoy, Ann E. McKellar, W. Chris Oosthuizen, Christian Pfeifer, Martin Renner, José Hernán Sarasola, Mateo Sokač, Roberto Valle, Adam Zbyryt, Richard A. Fuller
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70059 · Published 2026-02-05
Abstract: Drones are a valuable tool for surveying birds. However, surveys are hampered by the cost of manually detecting birds in the resulting images. Researchers are using computer vision to automate this process, but efforts to date generally target a narrow context, such as a single habitat, and do not identify key attributes such as species. To address this, we collected a diverse dataset of drone‐based bird images from existing studies and our own fieldwork. We labelled the birds in these images, detailing their location, species, posture (resting, flying, or other), age (chick, juvenile, or adult), and sex (male, female, or monomorphic). To demonstrate the usefulness of this dataset, we trained a bird detection and identification computer vision model, compared its performance with manual methods, and identified the main predictors of performance. Thirty‐three researchers contributed 23 865 images, captured using 21 different cameras across 11 countries and all 7 continents. We labelled 4824 of these images, containing 49 990 birds from 101 species. Our model processed images 85 times faster than manual processing and achieved a mean average precision (mAP) of 0.91 ± 0.25 for detection and 0.65 ± 0.33 for classification of species, age, and sex. Performance was predicted by the similarity between test and train images (estimate = 1.3248, P = 0.00021), the number of similar classes (estimate = −0.0742, P = 0.0033), the number of train instances (estimate = 0.0034, P = 0.1019), and the number of pixels on the bird (estimate = 0.0002, P = 0.0462). Our drone‐based bird dataset is the most accurately labelled and biologically, environmentally, and digitally diverse to date, laying the foundation for future research. We provide both the dataset and the trained model open access and urge researchers to continue working together to assemble datasets that cover broad contexts and are labelled with key conservation metrics.
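The mAP figures above rest on matching predicted boxes to ground-truth boxes by intersection over union (IoU). As a minimal sketch (not the authors' code; the function names and the 0.5 IoU threshold are illustrative assumptions), detection precision and recall at a fixed threshold can be computed like this:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(preds, truths, thresh=0.5):
    """Greedy one-to-one matching of predictions to ground-truth boxes."""
    matched, tp = set(), 0
    for p in preds:
        for i, t in enumerate(truths):
            if i not in matched and iou(p, t) >= thresh:
                matched.add(i)
                tp += 1
                break
    fp = len(preds) - tp   # unmatched predictions
    fn = len(truths) - tp  # missed birds
    return tp / (tp + fp), tp / (tp + fn)
```

Mean average precision then averages precision over recall levels and IoU thresholds; the matching step above is the core of it.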
Title: An autonomous network of acoustic detectors to map tiger risk by eavesdropping on prey alarm calls
Authors: Arik Kershenbaum, Andrew Markham, Holly Root‐Gutteridge, Bethany Smith, Casey Anderson, Riley McClaughry, Ramjan Chaudhary, Amogh Vishwakarma, Stephen Cummins, Angela Dassow
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70061 · Published 2026-02-04
Abstract: Tiger (Panthera tigris) attacks are a frequent source of injuries and fatalities among villagers in Nepal, where many communities make extensive use of dense forests for foraging and grazing of livestock. As conservation efforts have boosted the tiger population in the country, a conflict exists between maintaining traditional practices, ensuring human safety, and protecting endangered predators. Hence, there is a need for cost‐effective management strategies that do not reduce habitat use by humans or wildlife. Passive acoustic monitoring (PAM) offers a promising approach to mapping tiger presence in real time and providing a warning system for villagers. Although tigers vocalize infrequently, their presence triggers alarm calls from prey species, meaning these alarm calls could potentially act as a proxy for detecting tigers. To explore the potential for tracking tigers and other dangerous predators such as leopards using these alarm calls, we designed and tested a PAM system in the Terai region of southern Nepal. We implemented a TinyML low‐memory convolutional neural network (~1000 parameters) for automatic detection of chital deer (Axis axis)—a species that reliably produces loud, predator‐specific alarm calls—and deployed a distributed network of 10 autonomous interconnected sensors for continuous operation over 3 months. The network transmits chital deer alarm call events via a cellular‐connected gateway to a remote base station to generate a heatmap of predator risk. Incidences of high predator risk can be used to alert local forest rangers, who can then inform nearby villagers of areas with a higher likelihood of predator presence. The neural net achieved an F1 score of 0.91 in training and 0.72 in the field. We suggest that this proof of concept indicates that automated PAM could be an effective tool for detecting and tracking tigers and other predators and a potentially valuable tool for facilitating human–wildlife co‐existence.
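A ~1000-parameter budget is what makes on-sensor (TinyML) inference feasible. As a rough illustration (the layer shapes below are hypothetical, not the authors' architecture), counting parameters layer by layer shows how small such a network must be:

```python
def conv2d_params(in_ch, out_ch, k):
    """Weights (in_ch * out_ch * k * k) plus one bias per output channel."""
    return in_ch * out_ch * k * k + out_ch

def dense_params(n_in, n_out):
    """Fully connected layer: weight matrix plus biases."""
    return n_in * n_out + n_out

# Hypothetical tiny classifier over a mono spectrogram input:
layers = [
    conv2d_params(1, 4, 3),   # 40 params
    conv2d_params(4, 8, 3),   # 296 params
    dense_params(8, 16),      # 144 params (after global average pooling)
    dense_params(16, 2),      # 34 params: alarm call vs. background
]
total = sum(layers)           # well under the ~1000-parameter budget
```

Budgets like this trade accuracy for memory and energy, which is why the field F1 (0.72) trails the training F1 (0.91).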
Title: Large‐scale characterization of horizontal forest structure from remote sensing optical images
Authors: Xin Xu, Martin Brandt, Xiaowei Tong, Maurice Mugabowindekwe, Yuemin Yue, Sizhuo Li, Qiue Xu, Siyu Liu, Florian Reiner, Kelin Wang, Zhengchao Chen, Yongqing Bai, Rasmus Fensholt
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70058 · Published 2026-01-16
Abstract: Forest structure is an essential variable in forest management and conservation, as it has a direct impact on ecosystem processes and functions. Previous remote sensing studies have primarily focused on the vertical structure of forests, which requires laser point data and may not always be suited to distinguishing plantations from old forests. Sub‐meter resolution remote sensing data and tree crown segmentation techniques hold promise in offering detailed information that can support the characterization of forest structure from a horizontal perspective, offering new insights into tree crown structure at scale. In this study, we generated a dataset with over 5 billion tree crowns and developed a Horizontal Structure Index (HSI) by analyzing spatial relationships among neighboring trees from remote sensing optical images. We first extracted the location and crown size of overstory trees from optical satellite and aerial imagery at sub‐meter resolution. We subsequently calculated the distance between tree crown centers, their angles, the crown size and crown spacing, and linked this information with individual trees. We then used principal component analysis (PCA) to condense the structural information into the HSI and tested it in China, Rwanda and Denmark. Our results showed that the HSI has the potential to distinguish monoculture plantations from other forest types, which provides insights that extend beyond metrics derived from vertical forest structure. The proposed HSI is derived directly from tree‐level attributes and supports a deeper understanding of forest structure from a horizontal perspective, complementing existing remote sensing‐based metrics.
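The neighbor-relationship step described above (distances, angles, crown spacing between adjacent crowns) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' pipeline; the k-nearest-neighbour choice and the metric names are assumptions:

```python
import math

def neighbor_metrics(crowns, k=3):
    """crowns: list of (x, y, crown_radius) for segmented tree crowns.
    For each tree, summarise its k nearest neighbours: mean centre-to-centre
    distance, mean crown spacing (gap between crown edges), and bearings."""
    metrics = []
    for i, (x, y, r) in enumerate(crowns):
        neighbours = sorted(
            (math.hypot(x2 - x, y2 - y), x2, y2, r2)
            for j, (x2, y2, r2) in enumerate(crowns) if j != i
        )[:k]
        dists = [d for d, _, _, _ in neighbours]
        spacings = [d - r - r2 for d, _, _, r2 in neighbours]
        bearings = [math.atan2(y2 - y, x2 - x) for _, x2, y2, _ in neighbours]
        metrics.append({
            "mean_dist": sum(dists) / len(dists),
            "mean_spacing": sum(spacings) / len(spacings),
            "bearings": bearings,
        })
    return metrics
```

A regular planting grid yields near-constant distances and evenly spaced bearings, whereas natural regeneration scatters both, which is the signal a PCA-condensed index like the HSI can pick up.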
Title: Deep learning‐based ecological analysis of camera trap images is impacted by training data quality and quantity
Authors: Peggy A. Bevan, Omiros Pantazis, Holly A.I. Pringle, Guilherme Braga Ferreira, Daniel J. Ingram, Emily K. Madsen, Liam Thomas, Dol Raj Thanet, Thakur Silwal, Santosh Rayamajhi, Gabriel J. Brostow, Oisin Mac Aodha, Kate E. Jones
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70052 · Published 2026-01-12
Abstract: Large image collections generated from camera traps offer valuable insights into species richness, occupancy, and activity patterns, significantly aiding biodiversity monitoring. However, the manual processing of these data sets is time‐consuming, hindering analytical processes. To address this, deep neural networks have been widely adopted to automate image labelling, but the impact of classification error on key ecological metrics remains unclear. Here, we analyze data from camera trap collections in an African savannah (82,300 labelled images, 47 species) and an Asian sub‐tropical dry forest (40,308 labelled images, 29 species) to compare ecological metrics derived from expert‐generated species identifications with those generated by deep‐learning classification models. We specifically assess the impact of deep‐learning model architecture, the proportion of label noise in the training data, and the size of the training data set on three key ecological metrics: species richness, occupancy, and activity patterns. We found that predictions of species richness derived from deep neural networks closely match those calculated from expert labels and remained resilient to up to 10% noise in the training data set (mis‐labelled images) and a 50% reduction in the training data set size. We found that our choice of deep‐learning model architecture (ResNet vs. ConvNext‐T) or depth (ResNet18, 50, 101) did not impact predicted ecological metrics. In contrast, species‐specific metrics were more sensitive; less common and visually similar species were disproportionately affected by a reduction in deep neural network accuracy, with consequences for occupancy and diel activity pattern estimates. To ensure the reliability of their findings, practitioners should prioritize creating large, clean training sets and accounting for class imbalance across species over exploring numerous deep‐learning model architectures.
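The label-noise experiment described above amounts to randomly re-labelling a fraction of training images and re-deriving the ecological metric. A minimal sketch of that setup (illustrative only; the authors' exact noise-injection procedure is not specified here) looks like:

```python
import random

def inject_label_noise(labels, species_pool, rate, seed=0):
    """Randomly re-label a fraction `rate` of images with a wrong species
    drawn from the pool, simulating mis-labelled training data."""
    rng = random.Random(seed)
    noisy = []
    for lab in labels:
        if rng.random() < rate:
            noisy.append(rng.choice([s for s in species_pool if s != lab]))
        else:
            noisy.append(lab)
    return noisy

def species_richness(labels):
    """Count of distinct species in a set of image labels."""
    return len(set(labels))
```

Richness is robust to this kind of noise because it only needs each species detected once, which is consistent with the paper's finding that richness tolerated 10% mis-labelled images while per-species occupancy estimates did not.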
Title: Evaluating land–sea linkages using land cover change and coral reef monitoring data: A case study from northeastern Puerto Rico
Authors: Pirta Palola, Sasha Hills, Simon J. Pittman, Edwin A. Hernández‐Delgado, Antoine Collin, Lisa M. Wedding
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70054 · Published 2026-01-06
Abstract: Land cover change that leads to increased nutrient and sediment runoff is an important driver of change in coral reef ecosystems. Linking landscape change to seascape change is necessary for integrated land–sea management of coral reefs. This study explored the use of freely available satellite products to examine long‐term patterns of change across the land–sea continuum. We focused on northeastern Puerto Rico, where a widespread decline in live coral cover has occurred despite concomitant watershed reforestation that was expected to reduce land‐based threats. The aims of this study were (1) to examine whether these land–sea trends continued in 2000–2015 and (2) to assess the opportunities and limitations associated with using satellite data to inform land–sea management. We applied a Random Forest classifier to Landsat‐7 satellite imagery to assess changes in land cover and landscape development intensity, a spatial index to estimate land‐based pressure on nearshore marine ecosystems. We used field monitoring data to quantify benthic community change. We found that reforestation continued in 2000–2015 (+11%), suggesting reduced land‐based pressure on adjacent reefs in both northern (Luquillo) and eastern (Ceiba‐Fajardo) watersheds. Concomitantly, coral cover continued to decline, and a new aggressive expansion of peyssonnelid algal crust was recorded. Clustering analysis indicated that benthic monitoring sites in the same geographic regions (nearshore/offshore, north/east) followed similar community composition trajectories over time. Our results suggest that continued reforestation and the expected reduction in land‐based pressure have not been sufficient to halt coral cover decline in northeastern Puerto Rico. To improve the characterization and monitoring of the full causal chain from changes in land cover to water quality to benthic communities, advances in satellite‐based water quality mapping in optically shallow waters are needed. A strategic combination of remote sensing and targeted field surveys is required to monitor and mitigate land‐based stressors on coral reefs.
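The headline "+11% reforestation" figure comes from comparing class fractions between two classified land-cover maps. A minimal sketch of that change calculation (illustrative; class names and the pixel-list representation are assumptions, not the authors' data model):

```python
from collections import Counter

def cover_fraction(pixels, cls):
    """Fraction of classified pixels belonging to class `cls`."""
    counts = Counter(pixels)
    return counts[cls] / sum(counts.values())

def cover_change(pixels_t0, pixels_t1, cls):
    """Change (as a fraction of the scene) in cover of `cls`
    between two classification dates."""
    return cover_fraction(pixels_t1, cls) - cover_fraction(pixels_t0, cls)
```

In practice the inputs would be the per-pixel outputs of the Random Forest classifier for the 2000 and 2015 Landsat scenes, masked to the watershed of interest.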
Title: Using time‐series remote sensing to identify and track individual bird nests at large scales
Authors: S. K. Morgan Ernest, Lindsey A. Garner, Ben G. Weinstein, Peter Frederick, Henry Senyondo, Glenda M. Yenni, Ethan P. White
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70046 · Published 2025-12-22
Abstract: The challenges of monitoring wildlife often limit the scales and intensity of the data that can be collected. New technologies—such as remote sensing using unoccupied aircraft systems (UASs)—can collect information more quickly, over larger areas, and more frequently than is feasible using ground‐based methods. While airborne imaging is increasingly used to produce data on the location and counts of individuals, its ability to produce individual‐based demographic information is less explored. Repeat airborne imagery to generate an imagery time series provides the potential to track individuals over time and collect information beyond one‐off counts, but doing so necessitates automated approaches to handle the resulting high‐frequency, large‐spatial‐scale imagery. We developed an automated time‐series remote sensing approach to identifying wading bird nests in the Everglades ecosystem of Florida, USA, to explore the feasibility and challenges of conducting time‐series‐based remote sensing on mobile animals at large spatial scales. We combine a computer vision model for detecting birds in weekly UAS imagery of colonies with biology‐informed algorithmic rules to generate an automated approach that identifies likely nests. Comparing the performance of these automated approaches to human review of the same imagery shows that our primary approach identifies nests with comparable performance to human review, and that a secondary approach designed to find quick‐fail nests resulted in high false‐positive rates. We also assessed the ability of both human review and our primary algorithm to find ground‐verified nests in UAS imagery and again found comparable performance, with the exception of nests that fail quickly. Our results showed that automating nest detection, a key first step toward estimating nest success, is possible in complex environments like the Everglades, and we discuss a number of challenges and possible uses for these types of approaches.
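A "biology-informed rule" of the kind described above can be as simple as: a likely nest is a location where a bird is detected in several consecutive weekly surveys within a small radius. The sketch below is a deliberately simplified stand-in for the authors' rules (radius, week threshold, and the run-tracking scheme are all assumptions), and it only tracks runs that extend to the latest survey:

```python
import math

def likely_nests(weekly_detections, radius=1.0, min_weeks=3):
    """weekly_detections: one list of (x, y) bird detections per weekly survey.
    Flags locations occupied in at least `min_weeks` consecutive surveys,
    where "same location" means within `radius` of last week's detection."""
    candidates = []  # (x, y, consecutive_weeks_occupied)
    for detections in weekly_detections:
        updated = []
        for x, y in detections:
            run = 1
            for cx, cy, n in candidates:
                if math.hypot(x - cx, y - cy) <= radius:
                    run = max(run, n + 1)
            updated.append((x, y, run))
        candidates = updated
    return [(x, y) for x, y, n in candidates if n >= min_weeks]
```

A production version would also retain runs that end before the final survey (candidate nests that failed), which is exactly the quick-fail case the paper reports as hardest to detect.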
Title: On the compatibility of single‐scan terrestrial LiDAR with digital photogrammetry and field inventory metrics of vegetation structure in forest and agroforestry landscapes
Authors: Magnus Onyiriagwu, Nereoh Leley, Caleb W. T. Ngaba, Anthony Macharia, Henry Muchiri, Abdalla Kisiwa, Martin Ehbrecht, Delphine Clara Zemp
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70047 · Published 2025-12-13
Abstract: In tropical ecosystems, accurately quantifying vegetation structure is crucial to determining their capacity to deliver ecosystem services. Terrestrial laser scanning (TLS) and UAV‐based digital aerial photogrammetry (DAP) are remote sensing tools used to assess vegetation structure, but can be challenging to deploy with conventional acquisition methods. Single‐scan TLS and DTM‐independent DAP are alternative scanning approaches used to describe vegetation structure; however, it remains unclear to what extent they relate to each other and how accurately they can distinguish forest structural characteristics, including vertical structure, horizontal structure, vegetation density, and structural heterogeneity. First, we quantified bivariate and multivariate correlations between equivalent/analogous structural metrics from these data sources using principal component and Procrustes analysis. We then evaluated their ability to characterize the forest and agroforestry landscapes. DAP, TLS, and field metrics were moderately aligned for vegetation density, canopy top height, and gap dynamics, but differed in height variability and surface heterogeneity, reflecting differences in data structure. DAP and TLS achieved the highest accuracy in classifying forest and agroforestry plots, with overall accuracies of 89% and 78%, respectively. Though the field metrics were unable to resolve 3D characteristics related to heterogeneity, their capacity to distinguish stand structure at 69% accuracy was driven by the relative pattern of their suite of metrics. The results indicate that single‐scan TLS and DTM‐independent DAP yield meaningful descriptors of vegetation structure, which, when combined, can provide a comprehensive representation of the structure in these tropical landscapes.
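The Procrustes analysis mentioned above measures how well one metric space can be rotated and scaled onto another. A compact sketch of ordinary Procrustes (assuming numpy; this is the textbook formulation, not the authors' exact implementation, which may use an R package such as vegan):

```python
import numpy as np

def procrustes_residual(X, Y):
    """Ordinary Procrustes analysis: centre both point sets, scale each to
    unit Frobenius norm, optimally rotate Y onto X, and return the residual
    sum of squares (0 = identical shapes, larger = poorer alignment)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    # The optimal rotation comes from the SVD of the cross-product matrix;
    # only the singular values are needed for the residual.
    s = np.linalg.svd(Xc.T @ Yc, compute_uv=False)
    return 1.0 - s.sum() ** 2
```

Here X and Y would be the plot-by-metric ordinations (e.g. PCA scores) from TLS and DAP; a small residual means the two sensors rank the plots similarly.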
Title: Using phenology to improve invasive plant detection in fine‐scale hyperspectral drone‐based images
Authors: Kelsey S. Huelsman, Howard E. Epstein, Xi Yang, Roderick Walker
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70049 · Published 2025-12-09
Abstract: Mapping and managing invasive plants are top priorities for land managers, but traditional approaches are time‐ and labor‐intensive. To improve detection efforts, we explored the effectiveness of hyperspectral, drone‐based detection algorithms that incorporate phenology. We collected fine‐resolution (3 cm) hyperspectral images using a drone equipped with a Nano‐Hyperspec imager on seven dates from April to November 2020, and then used a subsample of pixels from the images to develop multitemporal detection algorithms for three invasive plant species within heterogeneous vegetation communities. The three species are invasive in much of the U.S. and in Virginia, where the data were collected: Ailanthus altissima (tree of heaven), Elaeagnus umbellata (autumn olive), and Rhamnus davurica (Dahurian buckthorn). We determined when each species could be accurately detected, what spectral features allowed for detection, and the consistency of those features over a growing season. All three species could be detected in June. Only E. umbellata had consistently accurate algorithms, which used consistent features in the visible and red edge across the growing season. Its most accurate detection algorithms in the summer included features in the yellow‐orange spectral region. A. altissima and R. davurica were both detectable in the mid‐ and late‐growing seasons, with little overlap in key spectral features across dates. Our results indicate that even a small subset of data from hyperspectral imagery can be used to accurately detect invasive plants in heterogeneous plant communities, and that incorporating species‐specific phenological traits into detection algorithms improves detection, laying methodological and theoretical groundwork for the future of invasive species management.
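Working with hyperspectral cubes like these starts with mapping wavelengths to band indices so spectral features (red edge, yellow-orange region, NDVI-style ratios) can be pulled out of each pixel. A minimal sketch, with illustrative wavelengths and function names (not the authors' feature set):

```python
def band_index(wavelengths, target_nm):
    """Index of the band whose centre wavelength is closest to target_nm."""
    return min(range(len(wavelengths)),
               key=lambda i: abs(wavelengths[i] - target_nm))

def ndvi(spectrum, wavelengths):
    """Normalised difference vegetation index from the bands nearest the
    classic NIR (800 nm) and red (670 nm) reference wavelengths."""
    nir = spectrum[band_index(wavelengths, 800)]
    red = spectrum[band_index(wavelengths, 670)]
    return (nir - red) / (nir + red)
```

Phenology enters when such features are computed per acquisition date and their trajectories, rather than single-date values, feed the classifier.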
Title: Cameras do not always take a full picture: wolf activity patterns revealed by accelerometers versus road‐positioned camera traps
Authors: Katarzyna Bojarska, Michał Żmihorski, Morteza Naderi, J. David Blount, Mark Chynoweth, Emrah Coban, Çağan H. Şekercioğlu, Josip Kusak
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70045 · Published 2025-12-05
Abstract: While animal‐attached devices provide the most detailed information on animal behaviour, camera traps have become an increasingly popular non‐invasive alternative in wildlife ecology. Here, we compared activity patterns of wolves (Canis lupus) assessed with accelerometers and road‐positioned camera traps in two study areas in Croatia and north‐eastern Türkiye. We used accelerometer data from 37 wolves and camera trap data from 82,375 camera trap days at 358 road locations from 2010 to 2021. We fitted generalised additive mixed models to determine the times of day and parts of the year with the highest and lowest wolf activity and correlated the predictions between accelerometer‐ and camera‐based models. Wolf activity patterns predicted from road‐positioned camera traps and accelerometer data were significantly positively correlated, but the strength of the correlation varied among areas, times of day and seasons. The lowest and highest activity periods showed little overlap between the two methods. In both study areas, camera trap data failed to detect the increase in daylight activity during the pup‐rearing season evident in accelerometer data. Overall, camera traps proved adequate for describing general daily and seasonal wolf activity patterns, while discrepancies between the two methods may largely be attributed to camera placement on roads. In light of the increasing use of camera traps in ecological research, our results highlight the value of animal‐attached devices for tracking individuals and recommend caution when interpreting activity patterns from road‐mounted cameras.
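The method comparison above ultimately reduces to correlating two predicted activity curves (e.g. hourly activity from the accelerometer model vs. the camera model). A minimal Pearson correlation over such series (illustrative; the study correlates GAMM predictions, not raw counts):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length activity series,
    e.g. predicted activity for each hour of the day from two methods."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A high overall correlation with disagreement at the extremes, as reported here, is exactly the pattern this statistic can mask, which is why the authors also compared the timing of activity peaks and troughs directly.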
Title: Impact of parameterization in multiple acoustic index comparisons: practical cases in terrestrial and underwater soundscapes
Authors: Juan C. Azofeifa‐Solano, Miles J. G. Parsons, James Kemp, Rohan M. Brooker, Robert D. McCauley, Shyam Madhusudhana, Mathew Wyatt, Stephen D. Simpson, Christine Erbe
Journal: Remote Sensing in Ecology and Conservation (IF 5.5) · DOI: 10.1002/rse2.70044 · Published 2025-12-05
Abstract: Acoustic indices are increasingly used to characterize soundscapes and infer biodiversity patterns in terrestrial and marine environments. However, methodological choices during data collection and signal processing—particularly the selection of sampling frequency, the number of Fourier transform points (NFFT) and window overlap—can influence the output of acoustic indices, multivariate analysis and their ecological interpretations. Here, we evaluated the effects of these parameters on multivariate soundscape separation with two example environment comparisons: terrestrial (bushland vs. urban) and underwater (Pocillopora‐dominated vs. non‐Pocillopora‐dominated). We assessed the influence of parameterization by computing 432 spectrogram configurations per recording across five commonly used acoustic indices. Using non‐metric multidimensional scaling, multivariate descriptors and Bayesian models, we found that parameter selection influenced soundscape separation in each environment example, with data‐specific interactions. For instance, greater NFFT values increased centroid distance between habitats in terrestrial soundscapes but decreased it in underwater soundscapes. Our results confirm earlier findings that acoustic indices can be sensitive to spectrogram parameterization, and extend these by demonstrating, with a systematic multivariate framework, how interactions among sampling frequency, NFFT and window overlap affect soundscape separation across environments. This approach emphasizes the need for parameter sensitivity testing, transparent reporting and careful interpretation when comparing soundscapes. Code: https://github.com/juancarlosazofeifasolano/acousticindices_parametrisation.git
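Why NFFT and overlap matter is easiest to see from the time-frequency resolution they imply: NFFT fixes the number and width of frequency bins, while overlap fixes the frame rate, so every acoustic index computed from the spectrogram inherits these choices. A small sketch of that bookkeeping (illustrative helper, not from the linked repository):

```python
def spectrogram_shape(n_samples, sample_rate, nfft, overlap):
    """Time-frequency grid implied by a spectrogram parameterization:
    returns (frequency bins, bin width in Hz, number of time frames).
    `overlap` is the fractional window overlap, e.g. 0.5 for 50%."""
    hop = int(nfft * (1 - overlap))          # samples advanced per frame
    n_frames = 1 + (n_samples - nfft) // hop # full windows that fit
    n_bins = nfft // 2 + 1                   # one-sided spectrum
    bin_hz = sample_rate / nfft              # frequency resolution
    return n_bins, bin_hz, n_frames
```

Doubling NFFT halves the bin width (finer frequency detail) but roughly halves the number of frames at fixed overlap (coarser time detail), which is the trade-off behind the habitat-specific effects the study reports.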