{"title":"Assessing and Relaxing Assumptions in Quasi-Simplex Models","authors":"A. Cernat, P. Lugtig, S. N. Uhrig, N. Watson","doi":"10.1093/oso/9780198859987.003.0007","DOIUrl":"https://doi.org/10.1093/oso/9780198859987.003.0007","url":null,"abstract":"The quasi-simplex model (QSM) makes use of at least three repeated measures of the same variable to estimate reliability. The model has rather strict assumptions, and ignoring them may bias estimates of reliability. While some previous studies have outlined how several of its assumptions can be relaxed, they have not been exhaustive and systematic. Thus, it is unclear what all the assumptions are and how to test and free them in practice. This chapter addresses this situation by presenting the main assumptions of the quasi-simplex model and the ways in which users can relax them with relative ease when more than three waves are available. Additionally, using data from the British Household Panel Survey, we show how this is done in practice and highlight the potential biases that arise when violations of the assumptions are ignored. We conclude that relaxing the assumptions should be implemented routinely when more than three waves of data are available.","PeriodicalId":231734,"journal":{"name":"Measurement Error in Longitudinal Data","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115281815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling Mode Effects for a Panel Survey in Transition","authors":"P. Biemer, K. Harris, Dan Liao, B. Burke, C. Halpern","doi":"10.1093/oso/9780198859987.003.0004","DOIUrl":"https://doi.org/10.1093/oso/9780198859987.003.0004","url":null,"abstract":"Funding reductions combined with increasing data-collection costs required that Wave V of the USA's National Longitudinal Study of Adolescent to Adult Health (Add Health) abandon its traditional approach of in-person interviewing and adopt a more cost-effective method. This approach used the mail/web mode in Phase 1 of data collection and in-person interviewing for a random sample of nonrespondents in Phase 2. In addition, to facilitate the comparison of modes, a small random subsample served as the control and received the traditional in-person interview. We show that concerns about reduced data quality as a result of the redesign effort were unfounded based on findings from an analysis of the survey data. In several important respects, the new two-phase, mixed-mode design outperformed the traditional design, with greater measurement accuracy, improved weighting adjustments for mitigating the risk of nonresponse bias, reduced residual (or post-adjustment) nonresponse bias, and substantially reduced total mean squared error of the estimates. This good news was largely unexpected given the preponderance of literature suggesting data quality could be adversely affected by the transition to a mixed-mode design. The bad news is that the transition comes with a high risk of mode effects when comparing Wave V and prior-wave estimates. Analytical results suggest that significant differences can occur in longitudinal change estimates about 60% of the time purely as an artifact of the redesign. This raises the question: how, then, should a data analyst interpret significant findings in a longitudinal analysis in the presence of mode effects? This chapter presents the analytical results and attempts to address this question.","PeriodicalId":231734,"journal":{"name":"Measurement Error in Longitudinal Data","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123678284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measurement Invariance with Ordered Categorical Variables","authors":"T. Gerosa","doi":"10.1093/oso/9780198859987.003.0011","DOIUrl":"https://doi.org/10.1093/oso/9780198859987.003.0011","url":null,"abstract":"Multi-item ordered categorical scales and structural equation modelling approaches are often used in panel research for the analysis of latent variables over time. The accuracy of such models depends on the assumption of longitudinal measurement invariance (LMI), which states that repeatedly measured latent variables should represent the same construct in the same metric at each time point. Previous research has contributed widely to the LMI literature for continuous variables, but these findings might not generalize to ordered categorical data. Treating ordered categorical data as continuous violates the assumption of multivariate normality and could produce inaccuracies and distortions in both invariance-testing results and structural parameter estimates. However, there is still little research that examines and compares criteria for establishing LMI with ordered categorical data. Addressing this gap, the present chapter offers a detailed description of the main procedures used to test for LMI with ordered categorical variables, accompanied by examples of their practical application in a two-wave longitudinal survey administered to 1,912 Italian middle school teachers. The empirical study evaluates whether different testing procedures, when applied to ordered categorical data, lead to similar conclusions about model fit, invariance, and structural parameters over time.","PeriodicalId":231734,"journal":{"name":"Measurement Error in Longitudinal Data","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126887347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Evaluation, Differential Item Functioning, and Longitudinal Anchoring Vignettes","authors":"O. Paccagnella","doi":"10.1093/oso/9780198859987.003.0012","DOIUrl":"https://doi.org/10.1093/oso/9780198859987.003.0012","url":null,"abstract":"Anchoring vignettes are a powerful instrument for detecting systematic differences in the use of self-reported ordinal survey responses. Failing to take into account the (non-random) heterogeneity in reporting styles across respondents may systematically bias the measurement of the variables of interest. The presence of such individual heterogeneity leads respondents to interpret, understand, or use the response categories for the same question differently. This phenomenon is known as differential item functioning (DIF) in the psychometric literature. A growing number of cross-sectional studies apply the anchoring vignette approach to tackle this issue, but its use is still limited in the longitudinal context. This chapter introduces longitudinal anchoring vignettes for DIF correction, the statistical approaches available when working with such data, and how to investigate the stability of individual response scales over time.","PeriodicalId":231734,"journal":{"name":"Measurement Error in Longitudinal Data","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115175144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}