{"title":"对42项大型科技前瞻性德尔菲调查进行了批判性评价","authors":"Ian Belton, Kerstin Cuhls, George Wright","doi":"10.1002/ffo2.118","DOIUrl":null,"url":null,"abstract":"<p>Large-scale Delphi surveys on technology foresight started in the 1960s and involve an average of about 2000 participants answering, potentially, up to about 450 items. This contrasts sharply with the participation and content of the more common, smaller-scale Delphi surveys. Previously, Belton et al. developed “six steps” to underpin a well-founded and defensible Delphi process and we apply these steps in a novel evaluation of the quality of 42 large-scale technology foresight surveys. Using a detailed analysis of two exemplar studies and a content analysis of all 42 surveys, we explore whether such surveys differ systematically from “traditional” smaller-scale Delphi surveys and, if so, why this may be and what it may mean for the quality of data produced. We conclude that there are some problematic issues within these surveys—to do with (i) data quality in both the numerical summarizing of participant's between-round feedback and in the reporting of final round numerical responses, (ii) the infrequent elicitation of rationales to justify participants' proffered numerical responses, and, when such rationales are elicited, (iii) the between-round summary and presentation of the rationales. We speculate on the reasons for these design differences in the extant large-scale surveys and conclude that extra-survey political influences, such as differing objectives and the need to demonstrate wide-ranging expert participation, may be the underlying cause. We conclude with considerations and recommendations for the design of future large-scale Delphi surveys to enable the underlying process to become better-founded and more defensible to procedural evaluation.</p>","PeriodicalId":100567,"journal":{"name":"FUTURES & FORESIGHT SCIENCE","volume":"4 2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A critical evaluation of 42, large-scale, science and technology foresight Delphi surveys\",\"authors\":\"Ian Belton, Kerstin Cuhls, George Wright\",\"doi\":\"10.1002/ffo2.118\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Large-scale Delphi surveys on technology foresight started in the 1960s and involve an average of about 2000 participants answering, potentially, up to about 450 items. This contrasts sharply with the participation and content of the more common, smaller-scale Delphi surveys. Previously, Belton et al. developed “six steps” to underpin a well-founded and defensible Delphi process and we apply these steps in a novel evaluation of the quality of 42 large-scale technology foresight surveys. Using a detailed analysis of two exemplar studies and a content analysis of all 42 surveys, we explore whether such surveys differ systematically from “traditional” smaller-scale Delphi surveys and, if so, why this may be and what it may mean for the quality of data produced. 
We conclude that there are some problematic issues within these surveys—to do with (i) data quality in both the numerical summarizing of participant's between-round feedback and in the reporting of final round numerical responses, (ii) the infrequent elicitation of rationales to justify participants' proffered numerical responses, and, when such rationales are elicited, (iii) the between-round summary and presentation of the rationales. We speculate on the reasons for these design differences in the extant large-scale surveys and conclude that extra-survey political influences, such as differing objectives and the need to demonstrate wide-ranging expert participation, may be the underlying cause. We conclude with considerations and recommendations for the design of future large-scale Delphi surveys to enable the underlying process to become better-founded and more defensible to procedural evaluation.</p>\",\"PeriodicalId\":100567,\"journal\":{\"name\":\"FUTURES & FORESIGHT SCIENCE\",\"volume\":\"4 2\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"FUTURES & FORESIGHT SCIENCE\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ffo2.118\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"FUTURES & FORESIGHT SCIENCE","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ffo2.118","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A critical evaluation of 42, large-scale, science and technology foresight Delphi surveys
Large-scale Delphi surveys on technology foresight started in the 1960s and involve an average of about 2000 participants answering potentially up to about 450 items. This contrasts sharply with the participation and content of the more common, smaller-scale Delphi surveys. Previously, Belton et al. developed "six steps" to underpin a well-founded and defensible Delphi process, and we apply these steps in a novel evaluation of the quality of 42 large-scale technology foresight surveys. Using a detailed analysis of two exemplar studies and a content analysis of all 42 surveys, we explore whether such surveys differ systematically from "traditional" smaller-scale Delphi surveys and, if so, why this may be and what it may mean for the quality of data produced. We identify several problematic issues within these surveys, relating to (i) data quality, both in the numerical summarizing of participants' between-round feedback and in the reporting of final-round numerical responses, (ii) the infrequent elicitation of rationales to justify participants' proffered numerical responses, and, when such rationales are elicited, (iii) the between-round summary and presentation of those rationales. We speculate on the reasons for these design differences in the extant large-scale surveys and conclude that extra-survey political influences, such as differing objectives and the need to demonstrate wide-ranging expert participation, may be the underlying cause. We conclude with considerations and recommendations for the design of future large-scale Delphi surveys, to enable the underlying process to become better founded and more defensible under procedural evaluation.
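
The abstract refers to the "numerical summarizing of participants' between-round feedback." As a minimal illustration of what such summarizing typically involves, and not a description of the authors' own procedure, the Python sketch below computes the median and interquartile range of panelists' estimates for a single survey item, the kind of statistical feedback commonly circulated between Delphi rounds. The function name `summarize_round` and the example data are hypothetical.

```python
import statistics

def summarize_round(responses):
    """Summarize one Delphi round's numerical responses for between-round feedback.

    `responses` is a list of panelists' numerical estimates (e.g., forecast
    years of realization for a survey item). Returns the median and the
    first/third quartiles, the statistics often fed back to panelists so
    they can reconsider their estimates in the next round.
    """
    ordered = sorted(responses)
    median = statistics.median(ordered)
    q1, _, q3 = statistics.quantiles(ordered, n=4)  # three quartile cut points
    return {"median": median, "q1": q1, "q3": q3}

# Hypothetical example: panelists' estimated year of realization for one item.
round1 = [2028, 2030, 2030, 2032, 2035, 2040, 2045]
feedback = summarize_round(round1)
print(feedback)  # {'median': 2032, 'q1': 2030.0, 'q3': 2040.0}
```

One of the paper's data-quality concerns maps directly onto this kind of summary: if the median and quartiles are computed or reported inconsistently between rounds, the feedback panelists receive, and hence the convergence the survey reports, rests on shaky ground.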