Survey Practice | Pub Date: 2023-11-02 | DOI: 10.29115/sp-2023-0018
"So You Want to Survey a State Legislator? Call Me, Maybe"
Kyle J Morgan, Jessica L. Roman, Debra Borie-Holtz, Ashley Koning, Madison Holtz
Abstract: Based on legislation passed in 2021, we undertook a first-in-the-nation effort to survey elected and appointed officials in the state of New Jersey to collect basic demographic information. This survey ran into challenges that all surveys encounter, namely how to reach respondents and how to get them to complete the survey, especially with an elite population. An initial recruitment effort using official, publicly available email addresses yielded a low response rate, requiring us to revise our recruitment strategy. We settled on directly calling the offices of New Jersey Assembly and Senate members to enlist the support of their Chiefs of Staff and legislative aides in getting the survey completed. Directly contacting the offices in this way more than doubled the response rate compared to email. Researchers should be mindful of the benefits of this recruitment mode in future efforts to survey legislators, and attentive to the additional costs and time associated with it.

Survey Practice | Pub Date: 2023-10-12 | DOI: 10.29115/sp-2023-0014
"A Low-cost Method to Try to Improve Panel Survey Representation"
Paul J. Lavrakas, Sebastian Kocar
Abstract: Survey researchers perpetually face the challenge of balancing quality, time, and cost. It is essentially impossible for a survey to achieve high quality, a quick turnaround, and low cost, so researchers most often settle for two of the three. As survey costs have escalated, funders have become unwilling to keep spending more, and quality is often what gets compromised. It is within this context that we propose a low-cost approach to improving panel survey quality by increasing a panel's representation of its target population, covering both the quality of the initial sample recruited to join the panel and the sample that remains active within it. Our approach raises the representativeness of initial and ongoing panel samples essentially without raising ongoing costs. We present a case study of this approach using the Life in Australia™ panel. We began by asking 1,557 panel members an open-ended question about why they joined and remained in the panel, then content-analyzed the responses to create quantitative data that could be analyzed statistically. We also gathered all the communications used with these panel members and performed a qualitative content analysis to identify the themes used to persuade sampled panelists to join and stay active in the panel. Comparing the two sets of findings, we identified six motivations that panelists reported but that were absent from the recruitment and maintenance communications used with potential panelists and currently inactive panelists. We also found that panelists with certain background characteristics were more likely to report certain motivations for being in the panel. We acknowledge that our approach is not a panacea but believe it adds to the "toolbox" of panel companies.

Survey Practice | Pub Date: 2023-10-05 | DOI: 10.29115/sp-2023-0017
"Conducting Small-Scale Multimethod Questionnaire Evaluation"
Heather Ridolfo, Ashley Thompson
Abstract: When pretesting survey questionnaires, there are benefits to using a multimethod approach. However, using multiple methods can be cost prohibitive. In 2021, the National Agricultural Statistics Service (NASS) received feedback from stakeholders regarding concerns about double counting of grain stocks in the Agricultural Survey and the Grain Stocks Report. Data from these two surveys are used to produce the Grain Stocks publication, which is a principal Federal Economic Indicator publication. To address stakeholder feedback, NASS evaluated these two multimode surveys, in a short timeframe, with limited resources. Despite these limitations, NASS utilized expert review, cognitive testing, usability testing, and behavior coding to evaluate these two survey questionnaires. This paper demonstrates that multimethod pretesting can be done effectively on a low-cost, small-scale basis.

Survey Practice | Pub Date: 2023-09-28 | DOI: 10.29115/sp-2023-0016
"Population-based estimates of COVID-19 period prevalence and cumulative monthly incidence in New York City: A comparison of estimates from three surveys, July–August 2020"
Kathryn Peebles, Michael Witt, Jo-Anne Caton, Michael L Sanderson, Sharon E Perlman, Steven Fernandez, Sarah E Dumas, Karen A Alroy, Andrew Burkey, Nicholas Ruther, John Sokolowski, R. Charon Gwynn, L Hannah Gould, Amber Levanon Seligson
Abstract:
Background: Diagnosis-based surveillance of COVID-19 underestimates COVID-19 burden. Questions about COVID-19-consistent symptoms were added to three population-based surveys to obtain representative estimates of COVID-19 period prevalence and monthly cumulative incidence.
Objective: To evaluate whether estimates of COVID-19 period prevalence and cumulative monthly incidence differed when collected from surveys with different sampling frames and modes of administration.
Methods: Data were collected from adult New York City (NYC) residents via the Community Health Survey (CHS) (sampling frame: random digit dial with dual landline and cellphone frame; mode: phone) and the Citywide Mobility Survey (CMS) (sampling frame: probabilistically selected panel; mode: online) in July 2020, and via CHS and Healthy NYC (sampling frame: probabilistically selected panel; mode: online and phone) in August 2020. Persons with COVID-19-like illness (CLI) were identified based on symptoms reported for the past 30 days. To obtain COVID-19 estimates, CLI estimates were adjusted by the proportion of laboratory-confirmed SARS-CoV-2 infections among citywide emergency department CLI visits in which patients received SARS-CoV-2 testing. We used t-tests to compare estimated CLI period prevalence in July 2020 between CHS and CMS, and CLI period prevalence and cumulative monthly incidence in August 2020 between CHS and Healthy NYC.
Results: CLI period prevalence was similar between CHS and CMS during July (12.2% vs. 9.9%, respectively, p=0.511); COVID-19 period prevalence was 1.7% and 1.3%, respectively. In contrast, CLI period prevalence was higher in Healthy NYC during August 2020 than in CHS (18.1% vs. 11.3%, p=0.014); COVID-19 period prevalence was 0.7% and 0.4%, respectively. CLI cumulative monthly incidence in August was similar in both surveys (5.7% and 4.0%, respectively; p=0.246).
Conclusions: Because estimates of CLI were not consistently different by sampling frame or mode of administration, additional research to understand the cause of differences between CHS and Healthy NYC can support the use of symptom-based surveillance to monitor COVID-19 trends.

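The adjustment step described in this abstract's Methods (scaling a survey-based CLI estimate by the share of lab-confirmed SARS-CoV-2 infections among tested emergency-department CLI visits) reduces to a single multiplication. A minimal sketch, with illustrative placeholder inputs rather than the paper's data:

```python
# Sketch of the symptom-based adjustment described in the Methods:
# COVID-19 period prevalence ~= CLI period prevalence x the proportion of
# lab-confirmed SARS-CoV-2 infections among tested ED CLI visits.
# The inputs below are illustrative placeholders, not the paper's data.

def adjusted_prevalence(cli_prevalence: float, ed_positivity: float) -> float:
    """Scale a survey-based CLI estimate by emergency-department test positivity."""
    return cli_prevalence * ed_positivity

# e.g., a 12.2% CLI estimate combined with 14% ED test positivity
covid_estimate = adjusted_prevalence(0.122, 0.14)
print(f"{covid_estimate:.1%}")  # → 1.7%
```

The 14% positivity figure here is an assumption chosen for illustration; the paper derives the actual proportion from citywide ED surveillance data.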
Survey Practice | Pub Date: 2023-09-21 | DOI: 10.29115/sp-2023-0015
"Which Scale Direction is More Difficult for Respondents to Use? An Eye-tracking Study"
Ting Yan
Abstract: Scale direction has been found to affect response distributions by yielding more selections of scale points closer to the beginning of the scale. Although this phenomenon has been empirically demonstrated across modes of data collection and respondent demographics, it is not clear which scale direction is cognitively difficult for respondents to use. Eye-tracking is used to address this research question because it provides a direct window into how respondents process survey questions. I compared dilation and fixation measures across scale directions and found that the satisfaction scale and the frequency scale running from "Never" to "Very Often" had more and longer fixations. In addition, the dissatisfaction options incurred larger dilations, more peak dilations, and more and longer fixations than the other parts of the scale. The findings shed light on how respondents process response scales and have practical implications for questionnaire design. They also demonstrate the utility and potential of eye-tracking in general, and dilation measures in particular, for understanding the cognitive burden of answering survey questions.

Survey Practice | Pub Date: 2023-09-14 | DOI: 10.29115/sp-2023-0013
"Outcomes of Population Surveillance Data Collection Pilots and the Behavioral Risk Factor Surveillance System: What Happens in Texas"
Karen Kirtland, William Garvin, Ting Yan, Michelle Cavazos, Marcus Berzofsky, Naomi Freedner, Brenna Muldavin, Burton Levine, Sonya Gamble, Machell Town
Abstract: Declining response rates and rising costs have prompted the search for alternatives to traditional random-digit-dialing (RDD) interviews. In 2021, three Behavioral Risk Factor Surveillance System (BRFSS) pilots were conducted in Texas: an RDD short message service (RDD SMS) text-messaging push-to-web pilot, an address-based push-to-web pilot, and an internet panel pilot. We used data from the three pilots and from the concurrent Texas BRFSS computer-assisted telephone interview (CATI). We compared unweighted data from these four sources with demographic information from the American Community Survey (ACS) for Texas, comparing respondents' health information across the protocols as well as cost and response rates. Non-Hispanic White adults and college graduates responded disproportionately in all survey protocols. Comparing costs across protocols was difficult because of differences in methods and overhead, but some comparisons could be made. The cost per complete for BRFSS/CATI ranged from $75 to $100, compared with $31 to $39 for address-based sampling, $12 to $20 for RDD SMS, and approximately $25 for the internet panel. There were notable differences among the survey protocols and the ACS in age, race/ethnicity, education, and marital status. We found minimal differences in respondents' answers to heart disease-related questions; however, responses to flu vaccination questions differed by protocol. The comparable responses are encouraging: properly weighted web-based data collection may allow data collected via new protocols to supplement future BRFSS efforts.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10518851/pdf/

Survey Practice | Pub Date: 2023-08-31 | DOI: 10.29115/sp-2023-0012
"Demographic and Measurement Differences between Text-to-Web and Phone Survey Respondents"
Michael Soszynski, Ryan Bliss
Abstract: This paper builds on existing literature on survey mode effects. We explore the relationship between administration mode and demographics as well as measurement issues. Participants in a low-income home weatherization program were assigned to either a phone call or text-to-web survey administration group. Our findings appear to be consistent with previous research regarding both non-observation and observation effects to varying extents. In terms of non-observation mode effects, we found that text-to-web and phone group survey respondents had similar demographic and home characteristics. The two survey methods yielded similar response rates and minimal statistically significant differences between respondents' reported background characteristics. We found a larger portion of phone respondents chose "Prefer not to say" for some demographic questions and generally indicated higher satisfaction than text-to-web respondents.

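Comparisons like the one summarized above (testing whether the share of phone respondents reporting a characteristic differs from the share of text-to-web respondents) are commonly made with a two-proportion z-test. The sketch below is a standard-library illustration with invented counts, not the authors' code:

```python
import math

def two_prop_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g., 80 of 200 phone vs. 92 of 200 text-to-web respondents report a characteristic
z, p = two_prop_ztest(80, 200, 92, 200)
```

With these invented counts the difference is not significant at the 5% level, which mirrors the "minimal statistically significant differences" pattern the abstract reports.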
Survey Practice | Pub Date: 2023-08-24 | DOI: 10.29115/sp-2023-0008
"Breakoffs in an hour-long, online survey"
T. Emery, S. Cabaço, Luisa Fadel, P. Lugtig, V. Toepoel, Almut Schumann, Detlev Lück, M. Bujard
Abstract: We analyze paradata from an online pilot of the Generations and Gender Programme (GGP) conducted in three countries (Croatia, Germany, and Portugal) to understand the extent, timing, and patterns of breakoffs during a long online survey. The GGP is notable as an online survey given that the median length of a face-to-face interview is 52 minutes, and the survey was initially designed for face-to-face administration. Paradata were collected for 3,378 web surveys; breakoffs before the questionnaire was completed occurred in 17% of them. The analysis uses Cox regression models to explore the timing of breakoffs and the influence of contextual factors. The results indicate that the breakoff hazard neither increases nor decreases across the length of the questionnaire. The risk of breakoff does, however, vary considerably across countries, between genders, and by partnership status. Respondents are twice as likely to break off on a loop question, and respondents completing the survey on a smartphone are 2.6 times as likely to break off as those using a tablet or PC. Respondents receiving a conditional incentive are 65% less likely to break off than those who do not. The lessons from this work can help inform future strategies for converting existing long, cross-national face-to-face studies into an online format.

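The device contrast reported above can be illustrated with a crude breakoff-rate ratio. This is only a sketch with invented paradata; the paper's actual analysis fits Cox proportional hazards models to the timing of breakoffs, which additionally accounts for when in the questionnaire each breakoff occurs:

```python
# Invented paradata for illustration: (device, broke_off) per web session.
# A real analysis would model breakoff *timing* with Cox regression.
paradata = (
    [("smartphone", True)] * 26 + [("smartphone", False)] * 74
    + [("pc_or_tablet", True)] * 10 + [("pc_or_tablet", False)] * 90
)

def breakoff_rate(records, device):
    """Share of sessions on `device` that ended in a breakoff."""
    on_device = [broke_off for dev, broke_off in records if dev == device]
    return sum(on_device) / len(on_device)

risk_ratio = breakoff_rate(paradata, "smartphone") / breakoff_rate(paradata, "pc_or_tablet")
print(round(risk_ratio, 1))  # → 2.6
```

The counts were chosen so the ratio echoes the 2.6x smartphone figure in the abstract; they are not the pilot's data.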
Survey Practice | Pub Date: 2023-08-10 | DOI: 10.29115/sp-2023-0009
"Give them a call! About the importance of call-back strategies in panel surveys"
Blazej Palat, Marion Elie, Selma Bendjaballah, Guillaume Garcia, N. Sauger
Abstract: Building on an experiment introduced in the French probabilistic web panel Longitudinal Internet Studies for Social Sciences (ELIPSS), this paper estimates the effect of calling back nonrespondents to specific waves in order to increase overall participation. Comparing groups according to their previous pattern of participation, with a treatment determining whether a telephone callback was implemented in case of nonresponse, we test whether the callback effect is conditional on previous participation patterns. The panellists' probability of response decreased in proportion to the number of studies missed, while the motivating effect of telephone callbacks appeared independent of this factor. Hence, this paper lends credence to the assumption that the effectiveness of callback strategies is quite stable irrespective of the panellists' level of commitment.

Survey Practice | Pub Date: 2023-08-03 | DOI: 10.29115/sp-2023-0010
"Case Prioritization in the SIPP: A Five-Year Review"
Kevin Tolliver, J. Fields, Renee Stepler, A. Williams
Abstract: Case prioritization is a concerted effort to achieve high data quality during data collection by reallocating scarce resources to the sample cases that need them most. Our research, conducted as part of the Survey of Income and Program Participation (SIPP), examines the effect of case prioritization over the changing landscape of this survey. Our findings suggest that since instituting case prioritization, field interviewers exert more effort on higher-priority cases, and continual centralized monitoring and intervention have led to improvements in data-quality measures, in particular the R-indicator and the coefficient of variation (CV) of response propensities.
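The two quality measures named in this abstract have simple definitions once response propensities have been estimated: the R-indicator is 1 minus twice the standard deviation of the propensities (so less variation means a more representative respondent pool), and the CV is that standard deviation divided by the mean propensity. A minimal sketch with illustrative propensity values (not SIPP data):

```python
from statistics import mean, pstdev

def r_indicator(propensities):
    """R-indicator: R = 1 - 2 * SD of estimated response propensities.
    R = 1 means every case responds with equal probability (fully representative)."""
    return 1 - 2 * pstdev(propensities)

def propensity_cv(propensities):
    """Coefficient of variation of estimated response propensities."""
    return pstdev(propensities) / mean(propensities)

# Illustrative estimated propensities; higher R (closer to 1) and lower CV
# indicate a more representative responding sample.
rho = [0.55, 0.60, 0.62, 0.58, 0.65, 0.50, 0.61, 0.57]
```

In practice the propensities themselves come from a response model (e.g., logistic regression on frame variables); that estimation step is assumed here.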