{"title":"Improving the Efficiency of Outbound CATI As a Nonresponse Follow-Up Mode in Address-Based Samples: A Quasi-Experimental Evaluation of a Dynamic Adaptive Design","authors":"Michael T Jackson, Todd Hughes, Jiangzhou Fu","doi":"10.1093/jssam/smae005","DOIUrl":"https://doi.org/10.1093/jssam/smae005","url":null,"abstract":"This article evaluates the use of dynamic adaptive design methods to target outbound computer-assisted telephone interviewing (CATI) in the California Health Interview Survey (CHIS). CHIS is a large-scale, annual study that uses an address-based sample (ABS) with push-to-Web mailings, followed by outbound CATI follow-up for addresses with appended phone numbers. CHIS 2022 implemented a dynamic adaptive design in which predictive models were used to end dialing early for some cases. For addresses that received outbound CATI follow-up, dialing was paused after three calls. A response propensity (RP) model was applied to predict the probability that the address would respond to continued dialing, based on the outcomes of the first three calls. Low-RP addresses were permanently retired with no additional dialing, while the rest continued through six or more attempts. We use a difference-in-differences design to evaluate the effect of the adaptive design on calling effort, completion rates, and the demographic composition of respondents. We find that the adaptive design reduced the mean number of calls per sampled unit by about 14 percent (relative to a modeled no-adaptive-design counterfactual) with a minimal reduction in the completion rate and no strong evidence of changes in the prevalence of target demographics. This suggests that RP modeling can meaningfully distinguish between ABS sample units for which additional dialing is and is not productive, helping to control outbound dialing costs without compromising sample representativeness.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140264126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Prevalence and Nature of Cognitive Interviewing as a Survey Questionnaire Evaluation Method in the United States","authors":"Andrew Caporaso, Stanley Presser","doi":"10.1093/jssam/smad047","DOIUrl":"https://doi.org/10.1093/jssam/smad047","url":null,"abstract":"We describe the prevalence and nature of cognitive interviewing (CI) for testing survey questionnaires in the United States and compare our results to those from Blair and Presser’s similar study of three decades ago, when such testing was relatively new. We find that although CI is now much more common than in 1993, there are still many organizations that do not use it. In addition, we find that there has been only a modest reduction in the great variation in the ways CI is conducted both within and across organizations. We interpret this variability mainly as a reflection of the lack of consensus about best practices and call for research that will make such consensus more likely.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139528746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interviewer Ratings of Physical Appearance in a Large-Scale Survey in China","authors":"Qiong Wu, Yu Xie","doi":"10.1093/jssam/smad046","DOIUrl":"https://doi.org/10.1093/jssam/smad046","url":null,"abstract":"Interviewer ratings of respondents’ physical appearance have been collected in several major social surveys. While researchers have made good use of such ratings in substantive studies, empirical evidence on their measurement properties is rather limited. This study evaluates two potential threats to the quality of interviewer ratings of physical appearance: interviewer effects and halo effects. Using data from the China Family Panel Studies and cross-classified models, we show large interviewer effects on ratings of respondents’ physical appearance. We also provide possible evidence for halo effects, based on high correlations between physical appearance ratings and other theoretically distinct constructs after controlling for interviewer effects. However, we find support for the convergent and discriminant validity of physical appearance ratings when both interviewer effects and halo effects are controlled for. Empirical studies using interviewer observation data should take interviewer effects and halo effects into account when possible, or at least discuss their potential impact on the substantive findings.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139527667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Small Area Poverty Estimation under Heteroskedasticity","authors":"Sumonkanti Das, Ray Chambers","doi":"10.1093/jssam/smad045","DOIUrl":"https://doi.org/10.1093/jssam/smad045","url":null,"abstract":"Multilevel models with nested errors are widely used in poverty estimation. An important application in this context is estimating the distribution of poverty, as defined by the distribution of income, within a set of domains that cover the population of interest. Since unit-level values of income are usually heteroskedastic, the standard homoskedasticity assumptions implicit in popular multilevel models may not be appropriate and can lead to bias, particularly when used to estimate domain-specific income distributions. This article addresses this problem when the income values in the population of interest can be characterized by a two-level mixed linear model with independent and identically distributed domain effects and with independent but not identically distributed individual effects. Estimation of poverty indicators that are functionals of domain-level income distributions is also addressed, and a nonparametric bootstrap procedure is used to estimate mean squared errors and confidence intervals. The proposed methodology is compared with the well-known World Bank poverty mapping methodology for this situation, using model-based simulation experiments as well as an empirical study based on Bangladesh poverty data.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139441260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating Respondent Attention to Experimental Text Lengths","authors":"Tobias Rettig, A. Blom","doi":"10.1093/jssam/smad044","DOIUrl":"https://doi.org/10.1093/jssam/smad044","url":null,"abstract":"Whether respondents pay adequate attention to a questionnaire has long been a concern for survey researchers. In this study, we measure respondents’ attention with an instruction manipulation check. In a probability-based online panel of the German population, we investigate which respondents read question texts of experimentally varied lengths and which become inattentive. We find that respondent attention is closely linked to text length. Individual response speed is strongly correlated with respondent attention, but a fixed cutoff time is unsuitable as a standalone attention indicator. Differing levels of attention are also associated with respondents’ age, gender, education, panel experience, and the device used to complete the survey. Removal of inattentive respondents is thus likely to result in a biased remaining sample. Instead, questions should be curtailed to encourage respondents of different backgrounds and abilities to read them attentively and provide optimized answers.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139385171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Catch-22—the Test–Retest Method of Reliability Estimation","authors":"Paula A. Tufiș, D. Alwin, Daniel N Ramírez","doi":"10.1093/jssam/smad043","DOIUrl":"https://doi.org/10.1093/jssam/smad043","url":null,"abstract":"This article addresses the problems with the traditional reinterview approach to estimating the reliability of survey measures. Using data from three reinterview (or panel) studies conducted by the General Social Survey (GSS), we investigate the differences between the two-wave correlational approach embodied in the traditional reinterview strategy and reliability estimates from a three-wave model that takes the stability of traits into account. Our results indicate that the problems identified with the two-wave correlational approach reflect a kind of “Catch-22,” in the sense that the only solution to the problem is denied by the approach itself. Specifically, we show that the correctly specified two-wave model, which includes the potential for true change in the latent variable, is underidentified; thus, unless one is willing to make some potentially risky assumptions, reliability parameters are not estimable. This article compares the two-wave correlational approach to an alternative model for estimating reliability, Heise’s estimates based on the three-wave simplex model. Using three waves of data from the GSS panels, separated by two-year intervals, we examine the conditions under which the wave-1 to wave-2 correlations, which do not take stability into account, approximate the reliability estimates obtained from three-wave simplex models, which do. The results lead to the conclusion that the differences between estimates depend on the stability and/or fixed nature of the underlying processes involved. Few if any differences are identified when traits are fixed or highly stable, but for traits that undergo true change the differences can be quite large; we therefore argue for the superiority of reinterview designs that involve more than two waves in the estimation of reliability parameters.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138955719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poverty Mapping Under Area-Level Random Regression Coefficient Poisson Models","authors":"Naomi Diz-Rosales, M. Lombardía, Domingo Morales","doi":"10.1093/jssam/smad036","DOIUrl":"https://doi.org/10.1093/jssam/smad036","url":null,"abstract":"Under an area-level random regression coefficient Poisson model, this article derives small area predictors of counts and proportions and introduces bootstrap estimators of the mean squared errors (MSEs). The maximum likelihood estimators of the model parameters and the mode predictors of the random effects are calculated by a Laplace approximation algorithm. Simulation experiments are implemented to investigate the behavior of the fitting algorithm, the predictors, and the MSE estimators with and without bias correction. The new statistical methodology is applied to data from the Spanish Living Conditions Survey. The target is to estimate the proportions of women and men under the poverty line by province.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139214258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Peekaboo! The Effect of Different Visible Cash Display and Amount Options During Mail Contact When Recruiting to a Probability-Based Panel","authors":"Ipek Bilgen, David Dutwin, Roopam Singh, Erlina Hendarwan","doi":"10.1093/jssam/smad039","DOIUrl":"https://doi.org/10.1093/jssam/smad039","url":null,"abstract":"Recent studies have consistently shown that making cash visible through a windowed envelope during mail contact increases survey response rates. The visible cash aims to pique interest and encourage sampled households to open the envelope. This article extends prior research by examining the effect of additional interventions implemented during mail recruitment to a survey panel on recruitment rates and costs. Specifically, we implemented randomized experiments varying the size (small, large) and location (none, front, back) of the window displaying the cash, combined with which part of the cash is shown through the window (numeric amount, face/image) and various prepaid incentive amounts (two $1, one $2, one $5). We used the recruitment effort for NORC’s AmeriSpeak Panel as the data source for this study. The probability-based AmeriSpeak Panel uses an address-based sample and multiple modes of respondent contact, including mail, phone, and in-person outreach during recruitment. Our results were consistent with prior research and showed significant improvement in recruitment rates when cash was displayed through a window during mail contact. We also found that placing the window on the front of the envelope, showing $5 through the envelope rather than $2 or $1, and showing the numeric amount rather than the image on the cash were more likely to improve recruitment rates. Our cost analyses illustrated that the difference in printing cost between windowed and windowless envelopes is small. There is no difference in printing cost between front-window and back-window envelopes, as both require custom manufacturing, and no cost difference between small and large windows. Lastly, we found no evidence of mail theft based on our review of the United States Postal Service’s “track and trace” reports, seed mailings sent to staff, and undeliverable mailing rates.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135292587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to: Correcting Selection Bias in Big Data by Pseudo-Weighting","authors":"","doi":"10.1093/jssam/smad042","DOIUrl":"https://doi.org/10.1093/jssam/smad042","url":null,"abstract":"","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135292588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effects of a Targeted \"Early Bird\" Incentive Strategy on Response Rates, Fieldwork Effort, and Costs in a National Panel Study","authors":"Katherine A McGonagle, Narayan Sastry, Vicki A Freedman","doi":"10.1093/jssam/smab042","DOIUrl":"10.1093/jssam/smab042","url":null,"abstract":"Adaptive survey designs are increasingly used by survey practitioners to counteract ongoing declines in household survey response rates and manage rising fieldwork costs. This paper reports findings from an evaluation of an early-bird incentive (EBI) experiment targeting high-effort respondents who participated in the 2019 wave of the US Panel Study of Income Dynamics. We identified a subgroup of high-effort respondents at risk of nonresponse based on their prior-wave fieldwork effort and randomized them to a treatment offering an extra time-delimited monetary incentive for completing their interview within the first month of data collection (treatment group; N = 800) or the standard study incentive (control group; N = 400). In recent waves, we have found that the costs of the protracted fieldwork needed to complete interviews with high-effort cases (repeated interviewer contact attempts plus an increased incentive near the close of data collection) are extremely high. By incentivizing early participation and reducing the number of interviewer contact attempts and fieldwork days needed to complete the interview, our goal was to manage both nonresponse and survey costs. We found that the EBI treatment increased response rates and reduced fieldwork effort and costs compared to the control group. We review several key findings and limitations, discuss their implications, and identify next steps for future research.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10702785/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138801468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Mathematics","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}