{"title":"The Quality of Expertise","authors":"Edward Dickersin Van Wesep","doi":"10.2139/ssrn.2257995","DOIUrl":null,"url":null,"abstract":"Policy-makers and managers often turn to experts when in need of information: because they are more informed than others of the content and quality of current and past research, they should provide the best advice. I show, however, that we should expect experts to be systematically biased, potentially to the point that they are less reliable sources of information than non-experts. This is because the decision to research a question implies a belief that research will be fruitful. If priors about the impact of current work are correct, on average, then those who select into researching a question are optimistic about the quality of current work. In areas that are new, or feature new research technologies (e.g., data sources, technical methods, or paradigms), the selection problem is less important than the benefit of greater knowledge: experts will indeed be experts. In areas that are old and lack new research technologies, there will be significant bias. Furthermore, consistent with a large body of empirical research, this selection problem implies that experts who express greater confidence in their beliefs will be, on average, less accurate. This paper provides many empirical implications for expert accuracy, as well as mechanism design implications for hiring, task assignment, and referee assignment.","PeriodicalId":112243,"journal":{"name":"Vanderbilt University - Owen Graduate School of Management Research Paper Series","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Vanderbilt University - Owen Graduate School of Management Research Paper Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.2257995","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Policy-makers and managers often turn to experts when in need of information: because experts are better informed than others about the content and quality of current and past research, they should provide the best advice. I show, however, that we should expect experts to be systematically biased, potentially to the point that they are less reliable sources of information than non-experts. This is because the decision to research a question implies a belief that research will be fruitful. If priors about the impact of current work are correct on average, then those who select into researching a question are optimistic about the quality of current work. In areas that are new, or that feature new research technologies (e.g., data sources, technical methods, or paradigms), the selection problem is less important than the benefit of greater knowledge: experts will indeed be experts. In areas that are old and lack new research technologies, there will be significant bias. Furthermore, consistent with a large body of empirical research, this selection problem implies that experts who express greater confidence in their beliefs will be, on average, less accurate. The paper provides many empirical implications for expert accuracy, as well as mechanism design implications for hiring, task assignment, and referee assignment.
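The selection argument can be made concrete with a small simulation. The sketch below is not from the paper; it assumes a stylized setup in which each agent's prior belief about the quality of current work on a question is unbiased but noisy, and only agents whose belief exceeds a hypothetical entry threshold choose to research the question. Conditional on entry, average beliefs overstate the true quality, even though beliefs are correct on average in the full population.

```python
# Illustrative sketch (not from the paper): a stylized selection model showing why
# agents who choose to research a question hold, on average, overly optimistic
# beliefs about the quality of current work. Parameter values (signal noise,
# entry threshold) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_agents = 100_000
true_quality = 0.0       # actual quality/impact of current work on the question
signal_noise = 1.0       # dispersion of individual prior beliefs around the truth
entry_threshold = 0.5    # an agent researches the question only if her belief
                         # about its fruitfulness exceeds this level

# Each agent's prior belief is correct on average but noisy.
beliefs = true_quality + signal_noise * rng.standard_normal(n_agents)

# Selection: only the optimistic agents become experts on the question.
experts = beliefs[beliefs > entry_threshold]

print(f"Mean belief, all agents:       {beliefs.mean():+.3f}")   # ~ 0 (unbiased)
print(f"Mean belief, experts only:     {experts.mean():+.3f}")   # > 0 (biased up)
print(f"Share selecting into research: {experts.size / n_agents:.1%}")
```

In this toy setup the bias comes entirely from dispersion in priors: if a new data source or method informed everyone equally well, belief dispersion would shrink and so would the gap between expert and population beliefs, which is consistent with the abstract's claim that the selection problem matters most in old areas lacking new research technologies.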