A critical evaluation of 42, large-scale, science and technology foresight Delphi surveys

Ian Belton, Kerstin Cuhls, George Wright
{"title":"对42项大型科技前瞻性德尔菲调查进行了批判性评价","authors":"Ian Belton,&nbsp;Kerstin Cuhls,&nbsp;George Wright","doi":"10.1002/ffo2.118","DOIUrl":null,"url":null,"abstract":"<p>Large-scale Delphi surveys on technology foresight started in the 1960s and involve an average of about 2000 participants answering, potentially, up to about 450 items. This contrasts sharply with the participation and content of the more common, smaller-scale Delphi surveys. Previously, Belton et al. developed “six steps” to underpin a well-founded and defensible Delphi process and we apply these steps in a novel evaluation of the quality of 42 large-scale technology foresight surveys. Using a detailed analysis of two exemplar studies and a content analysis of all 42 surveys, we explore whether such surveys differ systematically from “traditional” smaller-scale Delphi surveys and, if so, why this may be and what it may mean for the quality of data produced. We conclude that there are some problematic issues within these surveys—to do with (i) data quality in both the numerical summarizing of participant's between-round feedback and in the reporting of final round numerical responses, (ii) the infrequent elicitation of rationales to justify participants' proffered numerical responses, and, when such rationales are elicited, (iii) the between-round summary and presentation of the rationales. We speculate on the reasons for these design differences in the extant large-scale surveys and conclude that extra-survey political influences, such as differing objectives and the need to demonstrate wide-ranging expert participation, may be the underlying cause. We conclude with considerations and recommendations for the design of future large-scale Delphi surveys to enable the underlying process to become better-founded and more defensible to procedural evaluation.</p>","PeriodicalId":100567,"journal":{"name":"FUTURES & FORESIGHT SCIENCE","volume":"4 2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A critical evaluation of 42, large-scale, science and technology foresight Delphi surveys\",\"authors\":\"Ian Belton,&nbsp;Kerstin Cuhls,&nbsp;George Wright\",\"doi\":\"10.1002/ffo2.118\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Large-scale Delphi surveys on technology foresight started in the 1960s and involve an average of about 2000 participants answering, potentially, up to about 450 items. This contrasts sharply with the participation and content of the more common, smaller-scale Delphi surveys. Previously, Belton et al. developed “six steps” to underpin a well-founded and defensible Delphi process and we apply these steps in a novel evaluation of the quality of 42 large-scale technology foresight surveys. Using a detailed analysis of two exemplar studies and a content analysis of all 42 surveys, we explore whether such surveys differ systematically from “traditional” smaller-scale Delphi surveys and, if so, why this may be and what it may mean for the quality of data produced. 
We conclude that there are some problematic issues within these surveys—to do with (i) data quality in both the numerical summarizing of participant's between-round feedback and in the reporting of final round numerical responses, (ii) the infrequent elicitation of rationales to justify participants' proffered numerical responses, and, when such rationales are elicited, (iii) the between-round summary and presentation of the rationales. We speculate on the reasons for these design differences in the extant large-scale surveys and conclude that extra-survey political influences, such as differing objectives and the need to demonstrate wide-ranging expert participation, may be the underlying cause. We conclude with considerations and recommendations for the design of future large-scale Delphi surveys to enable the underlying process to become better-founded and more defensible to procedural evaluation.</p>\",\"PeriodicalId\":100567,\"journal\":{\"name\":\"FUTURES & FORESIGHT SCIENCE\",\"volume\":\"4 2\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"FUTURES & FORESIGHT SCIENCE\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ffo2.118\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"FUTURES & FORESIGHT SCIENCE","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ffo2.118","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract


Large-scale Delphi surveys on technology foresight started in the 1960s and involve an average of about 2000 participants answering, potentially, up to about 450 items. This contrasts sharply with the participation and content of the more common, smaller-scale Delphi surveys. Previously, Belton et al. developed “six steps” to underpin a well-founded and defensible Delphi process and we apply these steps in a novel evaluation of the quality of 42 large-scale technology foresight surveys. Using a detailed analysis of two exemplar studies and a content analysis of all 42 surveys, we explore whether such surveys differ systematically from “traditional” smaller-scale Delphi surveys and, if so, why this may be and what it may mean for the quality of data produced. We conclude that there are some problematic issues within these surveys, to do with (i) data quality in both the numerical summarizing of participants' between-round feedback and in the reporting of final round numerical responses, (ii) the infrequent elicitation of rationales to justify participants' proffered numerical responses, and, when such rationales are elicited, (iii) the between-round summary and presentation of the rationales. We speculate on the reasons for these design differences in the extant large-scale surveys and conclude that extra-survey political influences, such as differing objectives and the need to demonstrate wide-ranging expert participation, may be the underlying cause. We conclude with considerations and recommendations for the design of future large-scale Delphi surveys to enable the underlying process to become better-founded and more defensible to procedural evaluation.
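The abstract's point (i) concerns how participants' numerical responses are summarized and fed back between rounds. As a minimal, hypothetical sketch of the kind of between-round summary conventionally reported to Delphi panellists (this is not the authors' procedure, and the item and figures below are invented for illustration), the following Python snippet computes the panel median and interquartile range for a single survey item:

```python
# Illustrative sketch (not from the paper): a conventional between-round
# numerical summary for one Delphi item. Item wording and response values
# are hypothetical.
from statistics import median, quantiles

def summarize_round(responses):
    """Summarize one round of numeric responses for a single Delphi item.

    `responses` is a list of numeric judgements (e.g., expected year of
    realization, or a 1-5 likelihood rating). Returns the panel size,
    median, and interquartile range, the statistics typically fed back
    to panellists before the next round.
    """
    q1, _, q3 = quantiles(responses, n=4)  # quartiles of the panel's answers
    return {
        "n": len(responses),
        "median": median(responses),
        "iqr": (q1, q3),
    }

# Hypothetical item: "Year in which technology X reaches commercial use"
round_1_answers = [2030, 2032, 2035, 2035, 2040, 2045, 2050]
print(summarize_round(round_1_answers))
# -> {'n': 7, 'median': 2035, 'iqr': (2032, 2045)}
```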
