The Need for a Recurring Large-Scale Benchmarking Survey to Continually Evaluate Sampling Methods and Administration Modes: Lessons from the 2022 Collaborative Midterm Survey

Peter K. Enns, Colleen L. Barry, James N. Druckman, Sergio Garcia-Rios, David C. Wilson, Jonathon P. Schuldt

arXiv:2407.06090 (arXiv - STAT - Other Statistics), published 2024-07-08
As survey methods adapt to technological and societal changes, a growing body of research seeks to understand the tradeoffs associated with various sampling methods and administration modes. We show how the NSF-funded 2022 Collaborative Midterm Survey (CMS) can be used as a dynamic and transparent framework for evaluating which sampling approaches, or combinations of approaches, are best suited to various research goals. The CMS is ideal for this purpose because it includes almost 20,000 respondents interviewed using two administration modes (phone and online), with data drawn from random digit dialing, random address-based sampling, a probability-based panel, two nonprobability panels, and two nonprobability marketplaces. The analysis considers three types of population benchmarks (election data, administrative records, and large government surveys) and focuses on national-level estimates as well as oversamples in three states (California, Florida, and Wisconsin). In addition to documenting how each survey strategy performed, we develop a strategy for assessing how different combinations of approaches compare to the population benchmarks, in order to guide researchers who combine sampling methods and sources. We conclude by providing specific recommendations to public opinion and election survey researchers and by demonstrating how our approach could be applied to a large government survey conducted at regular intervals, providing ongoing guidance to researchers, government, businesses, and nonprofits regarding the most appropriate survey sampling and administration methods.
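The combination-assessment idea in the abstract lends itself to a simple illustration. The Python sketch below scores each sampling source, and each equal-weight blend of sources, by its mean absolute deviation from a set of population benchmarks. Every number, source name, and the equal-weight blending rule here are hypothetical placeholders, not values or procedures from the paper; the CMS analysis is considerably richer, but this is the basic logic of comparing combinations of approaches against benchmarks.

```python
# Minimal sketch of comparing sampling sources, and blends of sources,
# to population benchmarks. All numbers and source names are hypothetical;
# the CMS's actual estimates, weights, and benchmarks come from the paper.
from itertools import combinations

# Hypothetical weighted estimates (proportions) from three sources, for
# three benchmark quantities (e.g., a 2022 vote share, a rate from
# administrative records, a rate from a large government survey).
estimates = {
    "rdd_phone":      [0.52, 0.31, 0.64],
    "prob_panel":     [0.50, 0.29, 0.61],
    "nonprob_market": [0.47, 0.35, 0.58],
}
benchmarks = [0.51, 0.30, 0.62]  # hypothetical population values

def mean_abs_error(est, bench):
    """Average absolute deviation of a set of estimates from the benchmarks."""
    return sum(abs(e - b) for e, b in zip(est, bench)) / len(bench)

def blend(sources):
    """Equal-weight blend of the listed sources' estimates (an assumption;
    any weighting scheme could be substituted here)."""
    cols = zip(*(estimates[s] for s in sources))
    return [sum(col) / len(sources) for col in cols]

# Score every individual source and every combination of sources.
results = {}
for r in range(1, len(estimates) + 1):
    for combo in combinations(estimates, r):
        results[" + ".join(combo)] = mean_abs_error(blend(combo), benchmarks)

# Rank from closest to the benchmarks to furthest.
for name, mae in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:45s} MAE = {mae:.3f}")
```

Ranking sources and blends by benchmark error in this way makes the tradeoffs concrete: a nonprobability source that performs poorly alone may still improve a blend, which is the kind of guidance the paper aims to give researchers who combine sampling methods.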