Maia S. Kapur, Nicholas Ducharme-Barth, Megumi Oshima, Felipe Carvalho
Fisheries Research, Volume 281, Article 107206
DOI: 10.1016/j.fishres.2024.107206
Published: 2024-10-17 (Journal Article)
Good practices, trade-offs, and precautions for model diagnostics in integrated stock assessments
Carvalho et al. (2021) provided a “cookbook” for implementing contemporary model diagnostics, including convergence checks, examinations of fits to data, retrospective and hindcasting analyses, likelihood profiling, and model-free validation. However, it remains unclear whether these widely used diagnostics behave consistently in the presence of model misspecification, and whether there are trade-offs in diagnostic performance that the assessment community should consider. This illustrative study uses a statistical catch-at-age simulation framework to compare diagnostic performance across a spectrum of correctly specified and misspecified assessment models that incorporate compositional, survey, and catch data. Results are used to contextualize how reliably common diagnostic tests perform given the degree and nature of known model issues, including parameter and model-process misspecification and combinations thereof, as well as the trade-offs among model fit, prediction skill, and retrospective bias that analysts must weigh when evaluating diagnostic performance. A surprising number of misspecified models passed certain diagnostic tests, although for most tests failure became more frequent as misspecification increased. Nearly all models that failed multiple tests were misspecified, underscoring the value of examining multiple diagnostics during model evaluation. Diagnostic performance was best (most sensitive) when recruitment variability was low and historical exploitation rates were high, likely because this scenario induces greater contrast in the data, particularly in indices of abundance. These results counsel caution when using standalone diagnostic results as the basis for selecting a “best” assessment model, choosing a set of models for an ensemble, or informing model weights. The discussion advises stock assessors to consider the interplay across multiple dynamics.
Future work should evaluate how the resolution of the production function, quality and quantity of data time series, and exploitation history can influence diagnostic performance.
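The retrospective bias discussed in the abstract is conventionally summarized with Mohn's rho: the mean relative difference between each "peeled" model's terminal-year estimate and the full model's estimate for that same year. A minimal sketch of the standard calculation follows; the function name, data structure, and spawning-biomass numbers are illustrative assumptions, not the authors' code.

```python
def mohns_rho(full, peels):
    """Mohn's rho: mean relative difference between each peel's
    terminal-year estimate and the full model's estimate for that year.

    full  : {year: estimate} from the model fit to all data
    peels : list of {year: estimate} dicts, where peel k is the model
            refit after dropping the last k years of data
    """
    diffs = []
    for peel in peels:
        term = max(peel)  # terminal year of this peeled fit
        diffs.append((peel[term] - full[term]) / full[term])
    return sum(diffs) / len(diffs)


# Hypothetical spawning-biomass estimates (illustrative numbers only)
full = {2020: 100.0, 2021: 110.0, 2022: 120.0, 2023: 130.0}
peels = [
    {2020: 100.0, 2021: 110.0, 2022: 126.0},  # 1-year peel
    {2020: 100.0, 2021: 121.0},               # 2-year peel
]
rho = mohns_rho(full, peels)  # positive rho -> systematic overestimation
```

A consistently positive (or negative) rho indicates that terminal-year estimates are revised downward (or upward) as data accumulate; rules of thumb in the literature flag values outside roughly −0.15 to 0.20 for longer-lived species as cause for concern, though thresholds vary by stock and quantity of interest.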
Journal introduction:
This journal provides an international forum for the publication of papers in the areas of fisheries science, fishing technology, fisheries management and relevant socio-economics. The scope covers fisheries in salt, brackish and freshwater systems, and all aspects of associated ecology, environmental aspects of fisheries, and economics. Both theoretical and practical papers are acceptable, including laboratory and field experimental studies relevant to fisheries. Papers on the conservation of exploitable living resources are welcome. Review and Viewpoint articles are also published. As the specified areas inevitably impinge on and interrelate with each other, the approach of the journal is multidisciplinary, and authors are encouraged to emphasise the relevance of their own work to that of other disciplines. The journal is intended for fisheries scientists, biological oceanographers, gear technologists, economists, managers, administrators, policy makers and legislators.