{"title":"Internet Panels, Professional Respondents, and Data Quality","authors":"S. Matthijsse, E. D. Leeuw, J. Hox","doi":"10.1027/1614-2241/A000094","DOIUrl":"https://doi.org/10.1027/1614-2241/A000094","url":null,"abstract":"Abstract. Most web surveys collect data through nonprobability or opt-in online panels, which are characterized by self-selection. A concern in online research is the emergence of professional respondents, who frequently participate in surveys and are mainly doing so for the incentives. This study investigates if professional respondents can be distinguished in online panels and if they provide lower quality data than nonprofessionals. We analyzed a data set of the NOPVO (Netherlands Online Panel Comparison) study that includes 19 panels, which together capture 90% of the respondents in online market research in the Netherlands. Latent class analysis showed that four types of respondents can be distinguished, ranging from the professional respondent to the altruistic respondent. A profile of professional respondents is depicted. Professional respondents appear not to be a great threat to data quality.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"11 1","pages":"81-88"},"PeriodicalIF":3.1,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing Model Fit in Latent Class Analysis When Asymptotics Do Not Hold","authors":"Geert H. van Kollenburg, J. Mulder, J. Vermunt","doi":"10.1027/1614-2241/A000093","DOIUrl":"https://doi.org/10.1027/1614-2241/A000093","url":null,"abstract":"The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values are not valid when the sample size is not large and/or the analyzed contingency table is sparse. Another problem is that for various other conceivable global and local fit measures, asymptotic distributions are not readily available. An alternative way to obtain the p-value for the statistic of interest is by constructing its empirical reference distribution using resampling techniques such as the parametric bootstrap or the posterior predictive check (PPC). In the current paper, we show how to apply the parametric bootstrap and two versions of the PPC to obtain empirical p-values for a number of commonly used global and local fit statistics within the context of LC analysis. The main difference between the PPC ...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"11 1","pages":"65-79"},"PeriodicalIF":3.1,"publicationDate":"2015-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Low Can You Go? An Investigation of the Influence of Sample Size and Model Complexity on Point and Interval Estimates in Two-Level Linear Models","authors":"B. Bell, G. Morgan, J. Schoeneberger, J. Kromrey, J. Ferron","doi":"10.1027/1614-2241/A000062","DOIUrl":"https://doi.org/10.1027/1614-2241/A000062","url":null,"abstract":"Whereas general sample size guidelines have been suggested when estimating multilevel models, they are only generalizable to a relatively limited number of data conditions and model structures, both of which are not very feasible for the applied researcher. In an effort to expand our understanding of two-level multilevel models under less than ideal conditions, Monte Carlo methods, through SAS/IML, were used to examine model convergence rates, parameter point estimates (statistical bias), parameter interval estimates (confidence interval accuracy and precision), and both Type I error control and statistical power of tests associated with the fixed effects from linear two-level models estimated with PROC MIXED. These outcomes were analyzed as a function of: (a) level-1 sample size, (b) level-2 sample size, (c) intercept variance, (d) slope variance, (e) collinearity, and (f) model complexity. Bias was minimal across nearly all conditions simulated. The 95% confidence interval coverage and Type I error rate tended to be slightly conservative. The degree of statistical power was related to sample sizes and level of fixed effects; higher power was observed with larger sample sizes and level-1 fixed effects. Hierarchically organized data are commonplace in educa- tional, clinical, and other settings in which research often occurs. Students are nested within classrooms or teachers, and teachers are nested within schools. Alternatively, service recipients are nested within social workers providing ser- vices, who may in turn be nested within local civil service entities. Conducting research at any of these levels while ignoring the more detailed levels (students) or contextual levels (schools) can lead to erroneous conclusions. As such, multilevel models have been developed to properly account","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"84 1","pages":"1-11"},"PeriodicalIF":3.1,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Impact of Using Incorrect Weights With the Multiple Membership Random Effects Model","authors":"L. Smith, S. N. Beretvas","doi":"10.1027/1614-2241/A000066","DOIUrl":"https://doi.org/10.1027/1614-2241/A000066","url":null,"abstract":"The multiple membership random effects model (MMREM) is used to appropriately model multiple membership data structures. Use of the MMREM requires selection of weights reflecting the hypothesized contribution of each level two unit (e.g., school) and their descriptors to the level one outcome. This study assessed the impact on MMREM parameter and residual estimates of the choice of weight pattern used. Parameter and residual estimates resulting from use of different weight patterns were compared using a real dataset and a small-scale simulation study. Under the conditions examined here, results indicated that choice of weight pattern did not greatly impact relative parameter bias nor level two residuals’ ranks. Limitations and directions for future research are discussed.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"10 1","pages":"31-42"},"PeriodicalIF":3.1,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sample Size Requirements of the Robust Weighted Least Squares Estimator","authors":"Morten Moshagen, J. Musch","doi":"10.1027/1614-2241/A000068","DOIUrl":"https://doi.org/10.1027/1614-2241/A000068","url":null,"abstract":"The present study investigated sample size requirements of maximum likelihood (ML) and robust weighted least squares (robust WLS) estimation for ordinal data with confirmatory factor analysis (CFA) models with 3-10 indicators per factor, primary loadings between .4 and .9, and four different levels of categorization (2, 3, 5, and 7). Additionally, the utility of the H-measure of construct reliability (an index combining the number of indicators and the magnitude of loadings) in predicting sample size requirements was examined. Results indicated that a higher number of indicators per factors and higher factor loadings increased the rates of proper convergence and solution propriety. However, the H-measure could only partly account for the results. Moreover, it was demonstrated that robust WLS was mostly superior to ML, suggesting that there is little reason to prefer ML over robust WLS when the data are ordinal. Sample size recommendations for the robust WLS estimator are provided. Confirmatory factor analysis (CFA), as a special case of structural equation models, is a powerful technique to model and test relationships between manifest variables and latent constructs. Estimation of CFA models usually proceeds using normal-theory estimators with the most commonly used being maximum likelihood (ML). Nor- mal-theory estimation methods assume continuous and multivariate normally distributed observed variables; how- ever, many measures in the social and behavioral sciences are characterized by a dichotomous or an ordinal level of measurement. Although the items of a test or a question- naire are conceived to be measures of a theoretically contin- uous construct, the observed responses are discrete realizations of a small number of categories and, thus, lack the scale and distributional properties assumed by normal- theory estimators.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"421 1","pages":"60-70"},"PeriodicalIF":3.1,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hierarchical Bayesian Model With Correlated Residuals for Investigating Stability and Change in Intensive Longitudinal Data Settings","authors":"F. Gasimova, A. Robitzsch, O. Wilhelm, G. Hülür","doi":"10.1027/1614-2241/A000083","DOIUrl":"https://doi.org/10.1027/1614-2241/A000083","url":null,"abstract":"The present paper’s focus is the modeling of interindividual and intraindividual variability in longitudinal data. We propose a hierarchical Bayesian model with correlated residuals, employing an autoregressive parameter AR(1) for focusing on intraindividual variability. The hierarchical model possesses four individual random effects: intercept, slope, variability, and autocorrelation. The performance of the proposed Bayesian estimation is investigated in simulated longitudinal data with three different sample sizes (N = 100, 200, 500) and three different numbers of measurement points (T = 10, 20, 40). The initial simulation values are selected according to the results of the first 20 measurement occasions from a longitudinal study on working memory capacity in 9th graders. Within this simulation study, we investigate the root mean square error (RMSE), bias, relative percentage bias, and the 90% coverage probability of parameter estimates. Results indicate that more accurate estimates are associated with ...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"10 1","pages":"126-137"},"PeriodicalIF":3.1,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discordancy Tests for Outlier Detection in Multi-Item Questionnaires","authors":"W. Zijlstra, L. V. D. Ark, K. Sijtsma","doi":"10.1027/1614-2241/A000056","DOIUrl":"https://doi.org/10.1027/1614-2241/A000056","url":null,"abstract":"The sensitivity and the specificity of four outlier scores were studied for four different discordancy tests. The outlier scores were the Mahalanobis distance, a robust version of the Mahalanobis distance, and two measures tailored to discrete data, known as O+ and G+. The discordancy tests were Tukey’s fences (a.k.a. boxplot). Tukey’s fences with adjustment for skewness (adjusted boxplot), the generalized extreme studentized deviate (ESD), and the transformed ESD (ESD-T). Outlier scores O+ and G+ performed better than the Mahalanobis distance and its robust version. Discordancy tests ESD-T and adjusted boxplot were advocated for high specificity and ESD for high sensitivity.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"9 1","pages":"69-77"},"PeriodicalIF":3.1,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methodological Advances for Detecting Physiological Synchrony During Dyadic Interactions","authors":"M. McAssey, J. Helm, F. Hsieh, D. Sbarra, E. Ferrer","doi":"10.1027/1614-2241/A000053","DOIUrl":"https://doi.org/10.1027/1614-2241/A000053","url":null,"abstract":"A defining feature of many physiological systems is their synchrony and reciprocal influence. An important challenge, however, is how to measure such features. This paper presents two new approaches for identifying synchrony between the physiological signals of individuals in dyads. The approaches are adaptations of two recently-developed techniques, depending on the nature of the physiological time series. For respiration and thoracic impedance, signals that are measured continuously, we use Empirical Mode Decomposition to extract the low-frequency components of a nonstationary signal, which carry the signal’s trend. We then compute the maximum cross-correlation between the trends of two signals within consecutive overlapping time windows of fixed width throughout each of a number of experimental tasks, and identify the proportion of large values of this measure occurring during each task. For heart rate, which is output discretely, we use a structural linear model that takes into account heteroscedastic...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"9 1","pages":"41-53"},"PeriodicalIF":3.1,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Skewness and Kurtosis in Real Data Samples","authors":"M. Blanca, J. Arnau, Dolores López-Montiel, Roser Bono, R. Bendayan","doi":"10.1027/1614-2241/A000057","DOIUrl":"https://doi.org/10.1027/1614-2241/A000057","url":null,"abstract":"Parametric statistics are based on the assumption of normality. Recent findings suggest that Type I error and power can be adversely affected when data are non-normal. This paper aims to assess the distributional shape of real data by examining the values of the third and fourth central moments as a measurement of skewness and kurtosis in small samples. The analysis concerned 693 distributions with a sample size ranging from 10 to 30. Measures of cognitive ability and of other psychological variables were included. The results showed that skewness ranged between −2.49 and 2.33. The values of kurtosis ranged between −1.92 and 7.41. Considering skewness and kurtosis together the results indicated that only 5.5% of distributions were close to expected values under normality. Although extreme contamination does not seem to be very frequent, the findings are consistent with previous research suggesting that normality is not the rule with real data.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"9 1","pages":"78-84"},"PeriodicalIF":3.1,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1027/1614-2241/A000057","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A General Linear Framework for Modeling Continuous Responses With Error in Persons and Items","authors":"P. J. Ferrando","doi":"10.1027/1614-2241/A000060","DOIUrl":"https://doi.org/10.1027/1614-2241/A000060","url":null,"abstract":"This study develops a general linear model intended for personality and attitude items with (approximately) continuous responses that is based on a double source of measurement error: items and persons. Two restricted sub-models are then obtained from the general model by placing restrictions on the item and person parameters. And it follows that the standard unidimensional factor-analytic model is one of these sub-models. Procedures for (a) calibrating the items, (b) obtaining individual estimates of location and fluctuation, (c) assessing model-data fit, and (d) assessing measurement precision are discussed for all the models considered, and illustrated with two empirical examples in the personality domain.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"9 1","pages":"150-161"},"PeriodicalIF":3.1,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}