{"title":"The Impact of the Number of Dyads on Estimation of Dyadic Data Analysis Using Multilevel Modeling","authors":"H. Du, Lijuan Wang","doi":"10.1027/1614-2241/A000105","DOIUrl":"https://doi.org/10.1027/1614-2241/A000105","url":null,"abstract":"Abstract. Dyadic data often appear in social and behavioral research, and multilevel models (MLMs) can be used to analyze them. For dyadic data, the group size is 2, which is the minimum group size we could have for fitting a multilevel model. This Monte Carlo study examines the effects of the number of dyads, the intraclass correlation (ICC), the proportion of singletons, and the missingness mechanism on convergence, bias, coverage rates, and Type I error rates of parameter estimates of dyadic data analysis using MLMs. Results showed that the estimation of variance components could have nonconvergence problems, nonignorable bias, and deviated coverage rates from nominal values when ICC is low, the proportion of singletons is high, and/or the number of dyads is small. More dyads helped obtain more reliable and valid estimates. Sample size guidelines based on the simulation model are given and discussed.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2016-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Likert scale outcomes with trend-proportional odds with and without cluster data.","authors":"Ana W Capuano, Jeffrey D Dawson, Marizen R Ramirez, Robert S Wilson, Lisa L Barnes, R William Fields","doi":"10.1027/1614-2241/a000106","DOIUrl":"https://doi.org/10.1027/1614-2241/a000106","url":null,"abstract":"<p><p>Likert scales are commonly used in epidemiological studies employing surveys. In this tutorial we demonstrate how the proportional odds model and the trend odds model can be applied simultaneously to data measured in Likert scales, allowing for random cluster effects. We use two datasets as examples: an epidemiological study on aging and cognition among community-dwelling Black persons, and a clustered large survey data from 28,882 students in 81 middle schools. The first example models the Likert outcome from the question: \"People act as if they think you are dishonest\". The trend-proportional odds model indicates that Black men have higher odds than Black women of reporting being perceived dishonest. The second example models the Likert outcome from the question: \"How often have you been beaten up at school?\". The trend-proportional odds model indicates that children with disability have a higher odds of severe violence than other children. 
For both examples, the cumulative odds ratio increases by more than 60% at the higher Likert levels.</p>","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2016-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10426790/pdf/nihms-1914107.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10068888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methodological Challenges of Mixed Methods Intervention Evaluations","authors":"H. Boeije, S. Drabble, A. O’Cathain","doi":"10.1027/1614-2241/A000101","DOIUrl":"https://doi.org/10.1027/1614-2241/A000101","url":null,"abstract":"Abstract. This paper addresses the methodological challenges that accompany the use of a combination of research methods to evaluate complex interventions. In evaluating complex interventions, the question about effectiveness is not the only question that needs to be answered. Of equal interest are questions about acceptability, feasibility, and implementation of the intervention and the evaluation study itself. Using qualitative research in conjunction with trials enables us to address this diversity of questions. The combination of methods results in a mixed methods intervention evaluation (MMIE). In this article we demonstrate the relevance of mixed methods evaluation studies and provide case studies from health care. Methodological challenges that need our attention are, among others, choosing appropriate designs for MMIEs, determining realistic expectations of both components, and assigning adequate resources to both components. Solving these methodological issues will improve our research designs an...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2015-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methodological Issues in Categorical Data Analysis","authors":"J. Hagenaars","doi":"10.1027/1614-2241/A000102","DOIUrl":"https://doi.org/10.1027/1614-2241/A000102","url":null,"abstract":"Abstract. The “General Linear Reality” view of the social world endorsed by analysis models assuming (underlying) continuous variables that are normally distributed is still prevailing in most of s...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2015-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Internet Panels, Professional Respondents, and Data Quality","authors":"S. Matthijsse, E. D. Leeuw, J. Hox","doi":"10.1027/1614-2241/A000094","DOIUrl":"https://doi.org/10.1027/1614-2241/A000094","url":null,"abstract":"Abstract. Most web surveys collect data through nonprobability or opt-in online panels, which are characterized by self-selection. A concern in online research is the emergence of professional respondents, who frequently participate in surveys and are mainly doing so for the incentives. This study investigates if professional respondents can be distinguished in online panels and if they provide lower quality data than nonprofessionals. We analyzed a data set of the NOPVO (Netherlands Online Panel Comparison) study that includes 19 panels, which together capture 90% of the respondents in online market research in the Netherlands. Latent class analysis showed that four types of respondents can be distinguished, ranging from the professional respondent to the altruistic respondent. A profile of professional respondents is depicted. Professional respondents appear not to be a great threat to data quality.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing Model Fit in Latent Class Analysis When Asymptotics Do Not Hold","authors":"Geert H. van Kollenburg, J. Mulder, J. Vermunt","doi":"10.1027/1614-2241/A000093","DOIUrl":"https://doi.org/10.1027/1614-2241/A000093","url":null,"abstract":"The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values are not valid when the sample size is not large and/or the analyzed contingency table is sparse. Another problem is that for various other conceivable global and local fit measures, asymptotic distributions are not readily available. An alternative way to obtain the p-value for the statistic of interest is by constructing its empirical reference distribution using resampling techniques such as the parametric bootstrap or the posterior predictive check (PPC). In the current paper, we show how to apply the parametric bootstrap and two versions of the PPC to obtain empirical p-values for a number of commonly used global and local fit statistics within the context of LC analysis. The main difference between the PPC ...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2015-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Low Can You Go? An Investigation of the Influence of Sample Size and Model Complexity on Point and Interval Estimates in Two-Level Linear Models","authors":"B. Bell, G. Morgan, J. Schoeneberger, J. Kromrey, J. Ferron","doi":"10.1027/1614-2241/A000062","DOIUrl":"https://doi.org/10.1027/1614-2241/A000062","url":null,"abstract":"Whereas general sample size guidelines have been suggested when estimating multilevel models, they are only generalizable to a relatively limited number of data conditions and model structures, both of which are not very feasible for the applied researcher. In an effort to expand our understanding of two-level multilevel models under less than ideal conditions, Monte Carlo methods, through SAS/IML, were used to examine model convergence rates, parameter point estimates (statistical bias), parameter interval estimates (confidence interval accuracy and precision), and both Type I error control and statistical power of tests associated with the fixed effects from linear two-level models estimated with PROC MIXED. These outcomes were analyzed as a function of: (a) level-1 sample size, (b) level-2 sample size, (c) intercept variance, (d) slope variance, (e) collinearity, and (f) model complexity. Bias was minimal across nearly all conditions simulated. The 95% confidence interval coverage and Type I error rate tended to be slightly conservative. The degree of statistical power was related to sample sizes and level of fixed effects; higher power was observed with larger sample sizes and level-1 fixed effects. Hierarchically organized data are commonplace in educa- tional, clinical, and other settings in which research often occurs. Students are nested within classrooms or teachers, and teachers are nested within schools. Alternatively, service recipients are nested within social workers providing ser- vices, who may in turn be nested within local civil service entities. 
Conducting research at any of these levels while ignoring the more detailed levels (students) or contextual levels (schools) can lead to erroneous conclusions. As such, multilevel models have been developed to properly account","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validity Concerns with Multiplying Ordinal Items Defined by Binned Counts: An Application to a Quantity-Frequency Measure of Alcohol Use.","authors":"James S McGinley, Patrick J Curran","doi":"10.1027/1614-2241/a000081","DOIUrl":"https://doi.org/10.1027/1614-2241/a000081","url":null,"abstract":"<p><p>Social and behavioral scientists often measure constructs that are truly discrete counts by collapsing (or binning) the counts into a smaller number of ordinal responses. While prior quantitative research has identified a series of concerns with similar binning procedures, there has been a lack of study on the consequences of multiplying these ordinal items to create a desired index. This measurement strategy is incorporated in many research applications, but it is particularly salient in the study of substance use where the product of ordinal quantity (number of drinks) and frequency (number of days) items is used to create an index of total consumption. In the current study, we demonstrate both analytically and empirically that this multiplicative procedure can introduce serious threats to construct validity. These threats, in turn, directly impact the ability to accurately measure alcohol consumption.</p>","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4217492/pdf/nihms548509.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32804715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Impact of Using Incorrect Weights With the Multiple Membership Random Effects Model","authors":"L. Smith, S. N. Beretvas","doi":"10.1027/1614-2241/A000066","DOIUrl":"https://doi.org/10.1027/1614-2241/A000066","url":null,"abstract":"The multiple membership random effects model (MMREM) is used to appropriately model multiple membership data structures. Use of the MMREM requires selection of weights reflecting the hypothesized contribution of each level two unit (e.g., school) and their descriptors to the level one outcome. This study assessed the impact on MMREM parameter and residual estimates of the choice of weight pattern used. Parameter and residual estimates resulting from use of different weight patterns were compared using a real dataset and a small-scale simulation study. Under the conditions examined here, results indicated that choice of weight pattern did not greatly impact relative parameter bias nor level two residuals’ ranks. Limitations and directions for future research are discussed.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sample Size Requirements of the Robust Weighted Least Squares Estimator","authors":"Morten Moshagen, J. Musch","doi":"10.1027/1614-2241/A000068","DOIUrl":"https://doi.org/10.1027/1614-2241/A000068","url":null,"abstract":"The present study investigated sample size requirements of maximum likelihood (ML) and robust weighted least squares (robust WLS) estimation for ordinal data with confirmatory factor analysis (CFA) models with 3-10 indicators per factor, primary loadings between .4 and .9, and four different levels of categorization (2, 3, 5, and 7). Additionally, the utility of the H-measure of construct reliability (an index combining the number of indicators and the magnitude of loadings) in predicting sample size requirements was examined. Results indicated that a higher number of indicators per factors and higher factor loadings increased the rates of proper convergence and solution propriety. However, the H-measure could only partly account for the results. Moreover, it was demonstrated that robust WLS was mostly superior to ML, suggesting that there is little reason to prefer ML over robust WLS when the data are ordinal. Sample size recommendations for the robust WLS estimator are provided. Confirmatory factor analysis (CFA), as a special case of structural equation models, is a powerful technique to model and test relationships between manifest variables and latent constructs. Estimation of CFA models usually proceeds using normal-theory estimators with the most commonly used being maximum likelihood (ML). Nor- mal-theory estimation methods assume continuous and multivariate normally distributed observed variables; how- ever, many measures in the social and behavioral sciences are characterized by a dichotomous or an ordinal level of measurement. 
Although the items of a test or a question- naire are conceived to be measures of a theoretically contin- uous construct, the observed responses are discrete realizations of a small number of categories and, thus, lack the scale and distributional properties assumed by normal- theory estimators.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}