{"title":"Uncertainty in case of lack of information: extrapolating data over time, with examples of climate forecast models","authors":"F. Pavese","doi":"10.24027/2306-7039.3.2022.269537","DOIUrl":null,"url":null,"abstract":"The basic scientific tool for predicting is called a “forecast model”, a mathematical model underpinned by observations. Generally, it is the evolution of some parameters of the present-day law(s) over time that are considered of fundamental importance in a specific case. The relevant available data are obviously limited to the past period of time, which is admittedly a limited period in most cases, when the law in question is considered valid and verified with sufficient precision − while no direct information is available about the future trend. A mathematical (set of) function(s) is extrapolated ahead over time to show present and next generations what they should be supposed to observe in the future. A problem arises from the fact that no (set of) mathematical function that could be used for a model is infinitely “flexible”, i.e. apt to “correctly” interpolate any cluster of data, and the less a data set is, the less the parameters of the function(s) are. A data consistency is considered good when there is a balance between a mere “copying” the behaviour over time (e.g. when a function has to follow a given profile) and a satisfactory “averaging” the behaviour, especially over longer periods of time, without “masking” changing points. Furthermore, the data uncertainty is an embellishment, which the information often lacks, provided with extrapolations. Instead of it, the data uncertainty must be taken into account, and appropriate information must always be provided, since the quality of the adjustment of the available data is crucial for the quality of the subsequent extrapolation. Therefore, the forecast should better consist of an area (typically increasing its width over time) where future determinations are assumed to fall within a given probability range. Thus, it should be perfectly clear that the extrapolation of the past data into the future, i.e. a current evaluation that can be propagated to next generations, is affected by a high risk and that careful precautions and limitations should be taken.","PeriodicalId":40775,"journal":{"name":"Ukrainian Metrological Journal","volume":null,"pages":null},"PeriodicalIF":0.1000,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ukrainian Metrological Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.24027/2306-7039.3.2022.269537","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"INSTRUMENTS & INSTRUMENTATION","Score":null,"Total":0}
Citations: 1
Abstract
The basic scientific tool for prediction is the “forecast model”, a mathematical model underpinned by observations. Generally, it is the evolution over time of some parameters of the present-day law(s) that is considered of fundamental importance in a specific case. The relevant available data are necessarily limited to the past, and in most cases to the admittedly short period during which the law in question is considered valid and verified with sufficient precision, while no direct information is available about the future trend. A mathematical function (or set of functions) is extrapolated forward in time to show present and future generations what they should expect to observe. A problem arises from the fact that no mathematical function (or set of functions) usable for a model is infinitely “flexible”, i.e. able to “correctly” interpolate any cluster of data, and the smaller a data set is, the fewer parameters the function(s) can have. A fit to the data is considered good when there is a balance between merely “copying” the behaviour over time (e.g. forcing a function to follow a given profile) and satisfactorily “averaging” the behaviour, especially over longer periods, without “masking” change points. Furthermore, data uncertainty is often treated as a mere embellishment, and the information supplied with extrapolations frequently omits it. Instead, data uncertainty must be taken into account, and the appropriate information must always be provided, since the quality of the adjustment of the available data is crucial for the quality of the subsequent extrapolation. The forecast should therefore consist of a band (typically widening over time) within which future determinations are assumed to fall with a given probability. Thus, it should be perfectly clear that extrapolating past data into the future, i.e. propagating a current evaluation to next generations, carries a high risk, and that careful precautions and limitations should be applied.
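A minimal sketch of the abstract's central point, not taken from the paper itself: a low-degree polynomial is fitted to illustrative “past” observations and extrapolated forward, and the coefficient covariance is propagated to show a prediction band that widens once the model leaves the fitted range. The data, the quadratic model, and the noise level are all assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "past" observations: a slow trend plus measurement noise.
t_past = np.linspace(0.0, 10.0, 30)
y_past = 0.5 * t_past + 0.02 * t_past**2 + rng.normal(0.0, 0.3, t_past.size)

# Fit a quadratic; cov=True also returns the coefficient covariance matrix.
coeffs, cov = np.polyfit(t_past, y_past, deg=2, cov=True)

# Extrapolate 10 time units beyond the last observation.
t_future = np.linspace(0.0, 20.0, 81)
V = np.vander(t_future, N=3)          # design matrix with columns [t^2, t, 1]
y_hat = V @ coeffs

# Propagate coefficient uncertainty: var(y_hat_i) = (V Cov V^T)_ii.
y_sigma = np.sqrt(np.einsum("ij,jk,ik->i", V, cov, V))

# The ~95 % (2-sigma) band widens as t moves past the fitted range --
# the "band typically widening over time" described in the abstract.
for t, y, s in zip(t_future[::20], y_hat[::20], y_sigma[::20]):
    print(f"t = {t:5.1f}  forecast = {y:6.2f} +/- {2*s:.2f}")
```

Running the sketch shows the reported half-width growing markedly for t beyond 10, i.e. outside the observed period, which is why the abstract argues that a point extrapolation without its uncertainty band is misleading.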