Survey Practice | Pub Date: 2023-07-27 | DOI: 10.29115/sp-2023-0007
Hanyu Sun, Ting Yan
{"title":"Applying Machine Learning to the Evaluation of Interviewer Performance","authors":"Hanyu Sun, Ting Yan","doi":"10.29115/sp-2023-0007","DOIUrl":"https://doi.org/10.29115/sp-2023-0007","url":null,"abstract":"Survey organizations have long used Computer Assisted Recorded Interviewing (CARI) to monitor interviewer performance. Conventionally, a human coder needs to first listen to the audio recording of the interactions between the interviewer and the respondent and then evaluate and code features of the question-and-answer sequence using a pre-specified coding scheme. Although prior research found that providing feedback to interviewers based on CARI was effective at improving interviewer performance, such coding process tends to be labor intensive and time consuming. To improve the effectiveness and efficiency of using CARI to monitor interviewer performance, we developed a pipeline that heavily draws on the use of machine learning to process audio recorded interviews. In particular, machine learning is used to detect who spoke at which turn in a question-level audio recording and to transcribe conversations at the turn level. This paper describes how the pipeline was used to detect interviewer falsification and to identify problematic interviewer behavior in both recordings of mock interviews and actual field interviews. The performance of the pipeline was discussed.","PeriodicalId":74893,"journal":{"name":"Survey practice","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48401974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey Practice | Pub Date: 2023-06-15 | DOI: 10.29115/sp-2023-0005
Yani Zhao, Sherice Gearhart
{"title":"Comparing Amazon’s MTurk and a Sona Student Sample: A Test of Data Quality Using Attention and Manipulation Checks","authors":"Yani Zhao, Sherice Gearhart","doi":"10.29115/sp-2023-0005","DOIUrl":"https://doi.org/10.29115/sp-2023-0005","url":null,"abstract":"The need for cost-effective data collection leads researchers to explore options, especially for respondent-administered online surveys. Student samples are convenient and cheap for social scientists when students fit the target population. However, student samples are criticized for their homogeneity and lack of generalizability (Kees et al. 2017). Another low-cost option is Amazon Mechanical Turk (MTurk), a crowdsourcing platform used for collecting online data from a seemingly broader population. Despite the appeal, it is important to compare data quality. The purpose here is to compare data quality between MTurk and student samples. To control data quality, researchers rely on several tactics such as screener questions to exclude unqualified respondents (Arndt et al. 2022). Subjective manipulation check and attention-check are used to examine respondent engagement and performance. Completion speed might also indicate effort/attention. Since samples should collect data from participants resembling the target population, sample diversity also serves as an indicator of data quality in this study (Kees et al. 2017; Roulin 2015). However, it should be noted that having a diverse sample does not always guarantee higher sample quality, especially when conducting studies on a homogeneous population.","PeriodicalId":74893,"journal":{"name":"Survey practice","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42525005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey Practice | Pub Date: 2023-05-18 | DOI: 10.29115/sp-2023-0004
Taylor Lewis, Joseph McMichael, Charlotte Looby
{"title":"Mail to One or Mail to All? An Experiment (Sub)Sampling Drop Point Units in a Self-Administered Address-Based Sampling Frame Survey","authors":"Taylor Lewis, Joseph McMichael, Charlotte Looby","doi":"10.29115/sp-2023-0004","DOIUrl":"https://doi.org/10.29115/sp-2023-0004","url":null,"abstract":"Practitioners utilizing an address-based sampling frame for a self-administered, mail contact survey must decide on how to handle drop points, which are single delivery points or receptacles that service multiple households. A variety of strategies have been adopted, including sampling all units at the drop point or subsampling just one (or a portion) of them. This paper reports results from an experiment fielded during the 2021 Healthy Chicago Survey aimed at providing insight into whether there are any substantive differences between these approaches. We find that a subsampling strategy in which a single mailing is sent produces a roughly 3 percentage point higher response rate relative to a strategy sending multiple mailings concurrently to the drop point. While base-weighted distributions of gender and age differed enough to be statistically significant, there were no noteworthy differences across other demographics or across the base-weighted distributions of select key health outcomes measured by the survey. Taken together, these results provide some evidence that a “mail to one” drop point strategy is more efficient than a “mail to all” drop point strategy.","PeriodicalId":74893,"journal":{"name":"Survey practice","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44977194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey Practice | Pub Date: 2023-04-26 | DOI: 10.29115/sp-2023-0003
Rachel Suss, Tashema Bholanath, T. Dongchung, Amber Levanon Seligson, Christina C Norman, Sarah E. Dumas
{"title":"Adapting Clinical Instruments for Population Mental Health Surveillance: Should an Explicit “Don’t Know” Response Option Be Given?","authors":"Rachel Suss, Tashema Bholanath, T. Dongchung, Amber Levanon Seligson, Christina C Norman, Sarah E. Dumas","doi":"10.29115/sp-2023-0003","DOIUrl":"https://doi.org/10.29115/sp-2023-0003","url":null,"abstract":"","PeriodicalId":74893,"journal":{"name":"Survey practice","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43386856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey Practice | Pub Date: 2023-03-13 | DOI: 10.29115/sp-2023-0001
D. Pennay, S. Misson, D. Neiger, P. Lavrakas
{"title":"How Weighting by Past Vote Can Improve Estimates of Voting Intentions","authors":"D. Pennay, S. Misson, D. Neiger, P. Lavrakas","doi":"10.29115/sp-2023-0001","DOIUrl":"https://doi.org/10.29115/sp-2023-0001","url":null,"abstract":"Polling error for the 2020 US election was the highest in 40 years and no mode of surveying was unambiguously more accurate. This occurred amid several recent polling failures in other countries. Online panels, as the dominant method now used by pollsters to survey voters, are well-positioned to help reduce the level of bias in pre-election polls. Here, we present a case for those pollsters using online panels for pre-election polling to (re)consider using past vote choice (i.e., whom respondents voted for in the previous election) as a weighting variable capable of reducing bias in their election forecasts under the right circumstances. Our data are from an Australian pre-election poll, conducted on a probability-based online panel one month prior to the 2019 Australian federal election. Three different measures of recalled vote choice for the 2016 election were used in weighting the forecast of the 2019 election outcome. These were (1) a short-term measure of recall for the 2016 vote choice obtained three months after the 2016 election, (2) a long-term measure obtained from the same panelists three years after the 2016 election and (3) a hybrid measure with a random half of panelists allocated their short-term past vote measure for 2016 and the remainder their long-term measure. We then examined the impacts on the bias and variance of the resulting estimates of the 2019 voting intentions. Using the short-term measure of the 2016 recalled vote choice in our weighting significantly reduced the bias of the resulting 2019 voting intentions forecast, with an acceptable impact on variance, and produced less biased estimates than when using either of the other two past vote measures. The short-term recall measure also generally resulted in better estimates than a weighting approach that did not include any past vote adjustment. Implications for panel providers are discussed.","PeriodicalId":74893,"journal":{"name":"Survey practice","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49415885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey Practice | Pub Date: 2023-02-23 | DOI: 10.29115/sp-2023-0002
D. Doherty, D. Garbarski, Pablo Guzman Rivera
{"title":"Null Effects of Framing Welcoming Ordinances","authors":"D. Doherty, D. Garbarski, Pablo Guzman Rivera","doi":"10.29115/sp-2023-0002","DOIUrl":"https://doi.org/10.29115/sp-2023-0002","url":null,"abstract":"A substantial body of published work finds that seemingly trivial changes in question wording or the way an issue is framed can substantially affect the attitudes people report on surveys. We report findings from a survey of Cook County residents where seemingly strong issue framing treatments that varied the stated purpose of welcoming ordinance provisions failed to affect reported attitudes. We find no effect in the aggregate, nor do we find effects among demographic or other seemingly relevant subgroups. The findings illustrate the important but often overlooked fact that varying how an issue is framed does not always affect reported attitudes.","PeriodicalId":74893,"journal":{"name":"Survey practice","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45355846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey Practice | Pub Date: 2023-01-12 | DOI: 10.29115/sp-2014-0026
John Boyle, James Dayton, Randy ZuWallack, Ronaldo Iachan
{"title":"The Shy Respondent and Propensity to Participate in Surveys: A Proof-of-Concept Study","authors":"John Boyle, James Dayton, Randy ZuWallack, Ronaldo Iachan","doi":"10.29115/sp-2014-0026","DOIUrl":"https://doi.org/10.29115/sp-2014-0026","url":null,"abstract":"The Democratic presidential support among voters was overstated in 88% of national polls in 2016 and 93% in 2020. The \"shy voter\" phenomenon from British electoral politics was one explanation offered in the 2016 elections. Although a similar pattern occurred in 2020 election polls, the evidence does not support misrepresentation of voting intent by Trump supporters as the explanation. However, an alternative hypothesis of a self-selection bias against Trump voters in pre-election surveys has been proposed. Moreover, if these Trump voters were less likely to participate in surveys due to a psychosocial predisposition, then their absence might not be corrected by sample weighting based on demographics and party affiliation. This study explores whether there is a segment of the population with a personality or behavioral predisposition that makes them want to avoid polls (\"shy respondents\") and whether this affects their likelihood of survey participation and voting. As part of a national survey of motivators and barriers to survey participation, we had a proxy measure of survey shyness: \"I prefer to stay out of sight and not be counted in government surveys.\" We compare this stated predisposition to both willingness to participate in surveys and likelihood of voting. We find this \"shy respondent\" measure is related to stated willingness to participate in future surveys. Although \"shy respondents\" were less likely to vote than others, a majority \"always\" vote in presidential elections. Collectively, \"shy respondents\" who are unlikely to participate in surveys represent about 10% of \"likely voters\" in presidential elections. This survey shyness is also related to political alienation measures, which may lead to underrepresentation of more populist leaning respondents in pre-election surveys. The relationship of survey shyness to demographics is generally slight so demographic weighting is unlikely to correct for underrepresentation of \"shy respondents\" in pre-election polls.","PeriodicalId":74893,"journal":{"name":"Survey practice","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135995233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey Practice | Pub Date: 2023-01-01 | Epub Date: 2023-08-17 | DOI: 10.29115/sp-2023-0011
Jacob Boelter, Alexis M Dennis, Lisa Klein Vogel, Kenneth D Croes
{"title":"Recruiting Hard-to-Reach Populations Amid the COVID-19 Pandemic.","authors":"Jacob Boelter, Alexis M Dennis, Lisa Klein Vogel, Kenneth D Croes","doi":"10.29115/sp-2023-0011","DOIUrl":"10.29115/sp-2023-0011","url":null,"abstract":"<p><p>The COVID-19 pandemic introduced many challenges for conducting research, particularly for research studies reliant on community-based sample generation strategies. In late 2021, we undertook two qualitative research studies for which we needed to identify and recruit hard-to-reach populations from the community. This brief describes our approach to adapting traditional, in-person methods to virtual means of disseminating study information and connecting with potential participants, with implications for future recruitment efforts in situations when in-person options are constrained.</p>","PeriodicalId":74893,"journal":{"name":"Survey practice","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11173354/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45339209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey Practice | Pub Date: 2022-11-03 | DOI: 10.29115/sp-2022-0011
E. Loewen, E. Bauer, M. Thompson, Nadia Martin, Anne C. K. Quah, G. Fong
{"title":"Timing estimates for complex programmed surveys","authors":"E. Loewen, E. Bauer, M. Thompson, Nadia Martin, Anne C. K. Quah, G. Fong","doi":"10.29115/sp-2022-0011","DOIUrl":"https://doi.org/10.29115/sp-2022-0011","url":null,"abstract":"","PeriodicalId":74893,"journal":{"name":"Survey practice","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49610535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}