{"title":"Scoring Expert Forecasts","authors":"K. C. Lichtendahl, Y. Grushka-Cockayne","doi":"10.2139/ssrn.2975140","DOIUrl":null,"url":null,"abstract":"This technical note, based on the more comprehensive note, \"Eliciting and Evaluating Expert Forecasts\" (UVA-QA-0734), provides a streamlined presentation of Brier and log scores as tools for assessing forecasting records among a pool of experts. The note is designed to be used in conjunction with a forecasting exercise. \r\nExcerpt \r\nUVA-QA-0772 \r\nRev. Sept. 12, 2014 \r\nScoring Expert FORECASTS \r\nEvaluating the forecasts of others can be a difficult task. One approach is to score an expert's forecast once the realization of the uncertainty is known. A track record of high scores on multiple forecasts may yield important insights into the expertise an individual possesses. In this note, we describe several scoring rules for evaluating expert opinion. \r\nScoring Forecasts of Discrete Events \r\nScoring rules first appeared in the 1950s to evaluate meteorological forecasts. Since that time, scoring rules have found a wide variety of applications in business and other fields. To this day, meteorologists in the United States are evaluated using a Brier scoring rule. When a discrete uncertainty has only two possible outcomes (e.g., rain/no rain), the Brier scoring rule assigns a score of –(1 – p)2, where p is the probability forecast reported for the event that occurs. \r\n. . .","PeriodicalId":121773,"journal":{"name":"Darden Case: Business Communications (Topic)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Darden Case: Business Communications (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.2975140","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This technical note, based on the more comprehensive note, "Eliciting and Evaluating Expert Forecasts" (UVA-QA-0734), provides a streamlined presentation of Brier and log scores as tools for assessing forecasting records among a pool of experts. The note is designed to be used in conjunction with a forecasting exercise.
Excerpt
UVA-QA-0772
Rev. Sept. 12, 2014
Scoring Expert Forecasts
Evaluating the forecasts of others can be a difficult task. One approach is to score an expert's forecast once the realization of the uncertainty is known. A track record of high scores on multiple forecasts may yield important insights into the expertise an individual possesses. In this note, we describe several scoring rules for evaluating expert opinion.
Scoring Forecasts of Discrete Events
Scoring rules first appeared in the 1950s to evaluate meteorological forecasts. Since that time, scoring rules have found a wide variety of applications in business and other fields. To this day, meteorologists in the United States are evaluated using a Brier scoring rule. When a discrete uncertainty has only two possible outcomes (e.g., rain/no rain), the Brier scoring rule assigns a score of –(1 – p)², where p is the probability forecast reported for the event that occurs.
. . .
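To make the scoring concrete, the following is a minimal sketch in Python of the binary Brier score –(1 – p)² described in the excerpt, alongside the standard logarithmic score ln(p). The function names and the rain example are illustrative and not taken from the note; both scores are written so that higher values indicate better forecasts.

```python
import math

def brier_score(p: float) -> float:
    """Binary Brier score for the outcome that occurred: -(1 - p)^2 (higher is better)."""
    return -(1.0 - p) ** 2

def log_score(p: float) -> float:
    """Log score for the outcome that occurred: ln(p) (higher is better)."""
    return math.log(p)

# Illustrative example: an expert reports an 80% chance of rain, and it rains,
# so p = 0.8 is the probability assigned to the event that occurred.
print(brier_score(0.8))  # -0.04
print(log_score(0.8))    # about -0.223
```

A forecaster's track record can then be summarized by averaging these scores across many resolved forecasts and comparing the averages across the pool of experts.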