Tried and true vs. shiny and new: Method switching in long-term aquatic datasets

IF 5.1 · Q1 LIMNOLOGY · Earth Science (CAS Tier 2)
Catriona L. C. Jones, Kelsey J. Solomon, Emily R. Arsenault, Katlin D. Edwards, Atefah Hosseini, Hadis Miraly, Alexander W. Mott, Karla Münzner, Igor Ogashawara, Carly R. Olson, Meredith E. Seeley, John C. Tracey
DOI: 10.1002/lol2.10438
Journal: Limnology and Oceanography Letters, 10(2), 151–157
Published: 2025-01-21 (Journal Article)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/lol2.10438
Citations: 0

Abstract


Tried and true vs. shiny and new: Method switching in long-term aquatic datasets


Long-term datasets are foundational resources in aquatic research, vital for establishing baselines and detecting shifts in aquatic biodiversity, water quality, and ecosystem function. For example, the Hawaii Ocean Time Series (HOTS), which has sampled biogeochemical data at Station Aloha in the North Pacific Subtropical Gyre since 1988, played a crucial role in documenting temporal variability in ocean carbon inventories and fluxes and provided the first evidence for a multi-decade decline in marine pH associated with climate change (Dore et al. 2009). Research from U.S. National Science Foundation Long Term Ecological Research sites has advanced understanding of ecosystem dynamics, including the long-term effects of invasive species on lakes (e.g., Walsh et al. 2016) and the influence of disturbances on watershed biogeochemical processes (e.g., Miniat et al. 2021). Finally, another NSF initiative, the Continuous Plankton Recorder surveys, are some of the longest-running aquatic long-term datasets, with one survey collecting data continuously since 1931 (www.cprsurvey.org). These surveys have demonstrated how climate change is affecting plankton communities.

The insights gained from such long-term datasets are only as robust as the data that have been collected. It is, therefore, a priority for those managing long-term datasets to ensure data quality. Advances in technology or sampling methods often leave researchers with a dilemma: switch to the newer method (i.e., “emerging” method) and take advantage of novel technologies, or continue with the older, existing method (i.e., “established” method) and maintain continuity in sampling protocol. Long-term dataset managers may choose to adopt emerging methods for many reasons: the emerging method could be faster, more efficient and/or more cost-effective, it might offer real-time data collection, or it could reveal previously unattainable or undetectable information. As a group of early career researchers, many of the authors of this essay have been in the position of taking responsibility for managing long-term aquatic datasets and have seen first-hand the importance of mindful data stewardship. Researchers commonly acknowledge the challenges associated with method switching in long-term monitoring programs. However, these discussions often occur informally between small groups of colleagues, not among the wider scientific community. As such, the literature lacks first-hand examples of how to proceed with adopting new methods. Here, our goal is to initiate broader discussion among current and future managers of long-term datasets in the aquatic sciences to help guide decisions about method switching. To achieve this, we discuss indicators of method-switching successes and failures. Then, we outline three case studies of method-switching successes in long-term datasets and suggest a set of best practices. We acknowledge that certain emerging methods produce data resembling those of the established methods but improve efficiency, speed, or cost-effectiveness, whereas other emerging methods generate entirely new data types. 
While the decision to begin collecting novel data types is worthy of discussion, we focus on the former.

A successful method switch in long-term data collection depends on two factors: (1) achieving the pre-established goals of the method switch and (2) ensuring that the data collected from both methods are comparable, thereby maintaining the dataset continuity. Thus, it is important for researchers to establish clear goals for a method switch and to follow well-defined best practices throughout the method switch to ensure continuity (see the Best practices for method switching section). As new technological advances enable the collection of data at increasingly finer resolutions, switching to methods that are faster, more efficient, or more cost-effective can be appealing to researchers managing long-term datasets. Researchers may have many reasons to switch methods. For example, the increased availability of remote sensors and autonomous vehicles provides researchers with significantly more real-time data than manual sampling methods, while reducing researcher time and increasing data throughput (Latifi et al. 2023). Furthermore, the rise of AI and machine learning has increased the amount of data that can be processed and information that can be obtained from a dataset (e.g., Fuchs et al. 2022; Kraft et al. 2022). In addition, emerging technologies can enable the collection of previously unattainable or undetectable data, for example, lowering detection limits (e.g., Leskinen et al. 2012) or using eDNA to monitor rare, cryptic, or invasive species (e.g., Barata et al. 2021). The long-term, collaborative nature of these datasets means that collection and management will be carried out by multiple generations of students, postdocs, faculty, and government/agency scientists. The dynamic nature of such research teams means that establishing clear goals from inception and following best practices during the transition will aid in maintaining the integrity of long-term datasets during method switches.
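The comparability requirement above is typically assessed by running both methods side by side for an overlap period and deriving a conversion between them. The sketch below illustrates one minimal way to do this with a least-squares fit; the data and the `harmonize` function are hypothetical illustrations, not a procedure prescribed in the essay.

```python
import numpy as np

def harmonize(established, emerging):
    """Fit a linear conversion from emerging-method units to
    established-method units using paired overlap-period samples."""
    emerging = np.asarray(emerging, dtype=float)
    established = np.asarray(established, dtype=float)
    # Least-squares fit: established ~ slope * emerging + intercept
    slope, intercept = np.polyfit(emerging, established, deg=1)
    # Correlation as a rough indicator of how well the methods track each other
    r = np.corrcoef(emerging, established)[0, 1]
    return slope, intercept, r

# Hypothetical chlorophyll a (ug/L) measured by both methods during an overlap
old_method = [2.1, 3.4, 5.0, 7.8, 10.2, 12.9]
new_method = [2.5, 3.9, 5.6, 8.5, 11.0, 13.8]
slope, intercept, r = harmonize(old_method, new_method)
print(f"established = {slope:.2f} * emerging + {intercept:.2f} (r = {r:.3f})")
```

A fit like this only establishes comparability for the overlap period sampled; the slope, intercept, and their uncertainty should be reported alongside the merged record so later end-users can trace the conversion.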

Accordingly, method switching failures in long-term datasets usually occur when (1) the pre-established goal(s) are not met and/or (2) the data collected from the established and emerging method are not comparable, resulting in a discontinuous dataset. While failure to meet a pre-established goal is often straightforward to recognize (e.g., financial or labor cost was not reduced, the detection limit was not lowered), discontinuous datasets can arise for a variety of reasons and will compromise one's ability to capture ecological insights. For example, what was measured previously and what the new method captures may be representative of the same ecological process but are not the same measurement (e.g., algal chlorophyll a vs. total cell biovolume; Ramaraj et al. 2013). Furthermore, as emerging technologies increase sample throughput through automation, the scale of data collection may change dramatically. This can make statistical comparison between the established and emerging methods challenging (Cutter 2013). Finally, switching to a method that lowers the limits of quantification or detection can sometimes be straightforward to account for. However, in other cases, this may complicate comparisons between old and new methods. While the collectors of such data may appreciate and understand these changes, long-term datasets often serve a variety of different end-users, making it increasingly difficult to capture ecological insights.
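When a new method lowers the detection limit, one conservative way to keep the merged record internally comparable is to censor new-method values that fall below the established method's limit (while archiving the uncensored values separately). This is an illustration of the idea, not a prescription from the essay; the function name and data are hypothetical.

```python
def censor_to_common_limit(values, old_limit):
    """Replace measurements below the established method's detection
    limit with None, so old and new records share a single limit."""
    return [v if v >= old_limit else None for v in values]

# Hypothetical nitrate record (umol/L): the new sensor quantifies down to
# 0.01, but the established method could only quantify >= 0.05.
new_sensor = [0.02, 0.07, 0.04, 0.12, 0.30]
print(censor_to_common_limit(new_sensor, old_limit=0.05))
# The two below-limit values become None in the harmonized series.
```

Censoring sacrifices the new method's extra sensitivity in the merged series, which is exactly the kind of trade-off the essay argues should be made explicitly and documented for future end-users.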

Due to the numerous challenges associated with method switches (Fig. 1), it can be difficult to define a method switch as a success or a failure; rather, outcomes exist on a continuum. For example, while a method switch might be considered a “success” within its own long-term data collection program, it may pose challenges for other researchers aiming for methodological consistency between studies. Switching to more advanced technology might make it more difficult for other labs to replicate methodologies, reducing global access to and comparability among datasets. Furthermore, researchers may be motivated to repeatedly switch methods to capture the “best” data when a field is just establishing long-term datasets. Chasing “the best,” unfortunately, can lead to delays in establishing datasets that would benefit policy and regulation. A prime example of this is micro- and nanoplastics pollution research, which suffers from a lack of continuous datasets despite a decade of widespread interest in the topic (Lusher and Primpke 2023). Given these nuanced challenges, there are often many reasons to avoid method switching altogether.

To highlight method-switching successes in long-term datasets, we present case studies that fall into three common categories of method switching: (1) manual-to-manual, (2) automated-to-automated, and (3) manual-to-automated switching. Here, “manual” refers to methods where the majority of the method, analysis, and interpretation is carried out by a person (e.g., measuring Secchi disk depth or cell counting with light microscopy). Conversely, “automated” refers to methods where most of the method, analysis, and interpretation is carried out by a machine or an automated process (e.g., satellite imaging or flow cytometry).

Long-term aquatic datasets provide invaluable insights. However, maintaining their integrity amidst evolving methodologies poses challenges. This raises two considerations for dataset managers: whether to adopt emerging methodologies or maintain established techniques and how to ensure data integrity during a method transition. While the decision to switch methods is case-specific, our paper addresses the critical need for structured discussions on such switches and the development of standardized guidelines for transparent data reporting. With the aquatic sciences trending toward increasingly collaborative, interdisciplinary research that employs automated data collection methods and Big Data (Durden et al. 2017), dataset managers must deliberate on adapting their data collection methods to ensure continuous and effective monitoring of Earth's ecosystems.

Catriona L. C. Jones and Kelsey J. Solomon co-led the entire manuscript effort, contributing equally, and created the graphics. All authors contributed to the conceptualization of the essay topic and the writing and editing of the manuscript.

Source journal: Limnology and Oceanography Letters
CiteScore: 10.00 · Self-citation rate: 3.80% · Annual articles: 63 · Review time: 25 weeks
Journal description: Limnology and Oceanography Letters (LO-Letters) serves as a platform for communicating the latest innovative and trend-setting research in the aquatic sciences. Manuscripts submitted to LO-Letters are expected to present high-impact, cutting-edge results, discoveries, or conceptual developments across all areas of limnology and oceanography, including their integration. Selection criteria for manuscripts include their broad relevance to the field, strong empirical and conceptual foundations, succinct and elegant conclusions, and potential to advance knowledge in aquatic sciences.