An Improved Model for Evaluating Change in Randomized Pretest, Posttest, Follow-Up Designs
C. Mara, R. Cribbie, D. Flora, Cathy Labrish, Laura Mills, L. Fiksenbaum
Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 2012. DOI: 10.1027/1614-2241/A000041

Abstract: Randomized pretest, posttest, follow-up (RPPF) designs are often used for evaluating the effectiveness of an intervention. These designs typically address two primary research questions: (1) Do the treatment and control groups differ in the amount of change from pretest to posttest? and (2) Do the treatment and control groups differ in the amount of change from posttest to follow-up? This study presents a model for answering these questions and compares it to recently proposed models for analyzing RPPF designs due to Mun, von Eye, and White (2009) using Monte Carlo simulation. The proposed model provides increased power over previous models for evaluating group differences in RPPF designs.
{"title":"Estimation of and Confidence Interval Formation for Reliability Coefficients of Homogeneous Measurement Instruments","authors":"Ken Kelley, Ying Cheng","doi":"10.1027/1614-2241/A000036","DOIUrl":"https://doi.org/10.1027/1614-2241/A000036","url":null,"abstract":"The reliability of a composite score is a fundamental and important topic in the social and behavioral sciences. The most commonly used reliability estimate of a composite score is coefficient a. However, under regularity conditions, the population value of coefficient a is only a lower bound on the population reliability, unless the items are essentially s-equivalent, an assumption that is likely violated in most applications. A generalization of coefficient a, termed x, is discussed and generally recommended. Furthermore, a point estimate itself almost certainly differs from the population value. Therefore, it is important to provide confidence interval limits so as not to overinterpret the point estimate. Analytic and bootstrap methods are described in detail for confidence interval construction for x .W e go on to recommend the bias-corrected bootstrap approach for x and provide open source and freely available R functions via the MBESS package to implement the methods discussed.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Content Validity Through Correlation and Relevance Tools: A Bayesian Randomized Equivalence Experiment
B. Gajewski, Valorie Coffland, D. Boyle, M. Bott, L. Price, Jamie Leopold, N. Dunton
Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 2012. DOI: 10.1027/1614-2241/A000040

Abstract: Content validity elicits expert opinion regarding the items of a psychometric instrument. Expert opinion can be elicited in many forms: for example, how essential an item is, or its relevance to a domain. This study developed an alternative tool that elicits expert opinion regarding correlations between each item and its respective domain. With 109 Registered Nurse (RN) site coordinators from the National Database of Nursing Quality Indicators, we implemented a randomized Bayesian equivalence trial in which coordinators completed "relevance" or "correlation" content tools regarding the RN Job Enjoyment Scale. We confirmed our hypothesis that the two tools would result in equivalent content information. A Bayesian ordered analysis model supported the results, suggesting that evidence for traditional content validity indices can be justified using correlation arguments.
{"title":"Exploiting Prior Information in Stochastic Knowledge Assessment","authors":"J. Heller, Claudia Repitsch","doi":"10.1027/1614-2241/A000035","DOIUrl":"https://doi.org/10.1027/1614-2241/A000035","url":null,"abstract":"Various adaptive procedures for efficiently assessing the knowledge state of an individual have been developed within the theory of knowledge structures. These procedures set out to draw a detailed picture of an individual’s knowledge in a certain field by posing a minimal number of questions. While research so far mostly emphasized theoretical issues, the present paper focuses on an empirical evaluation of probabilistic assessment. It reports on simulation data showing that both efficiency and accuracy of the assessment exhibit considerable sensitivity to the choice of parameters and prior information as captured by the initial likelihood of the knowledge states. In order to deal with problems that arise from incorrect prior information, an extension of the probabilistic assessment is proposed. Systematic simulations provide evidence for the efficiency and robustness of the proposed extension, as well as its feasibility in terms of computational costs.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Impact of Controlling for Extreme Responding on Measurement Equivalence in Cross-Cultural Research","authors":"M. Morren, J. Gelissen, J. Vermunt","doi":"10.1027/1614-2241/A000048","DOIUrl":"https://doi.org/10.1027/1614-2241/A000048","url":null,"abstract":"Prior research has shown that extreme response style can seriously bias responses to survey questions and that this response style may differ across culturally diverse groups. Consequently, cross-cultural differences in extreme responding may yield incomparable responses when not controlled for. To examine how extreme responding affects the cross-cultural comparability of survey responses, we propose and apply a multiple-group latent class approach where groups are compared on basis of the factor loadings, intercepts, and factor means in a Latent Class Factor Model. In this approach a latent factor measuring the response style is explicitly included as an explanation for group differences found in the data. Findings from two empirical applications that examine the cross-cultural comparability of measurements show that group differences in responding import inequivalence in measurements among groups. Controlling for the response style yields more equivalent measurements. This finding emphasizes the importa...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Note on the Connection Between Knowledge Structures and Latent Class Models","authors":"A. Ünlü","doi":"10.1027/1614-2241/A000023","DOIUrl":"https://doi.org/10.1027/1614-2241/A000023","url":null,"abstract":"Schrepp (2005) points out and builds upon the connection between knowledge space theory (KST) and latent class analysis (LCA) to propose a method for constructing knowledge structures from data. Candidate knowledge structures are generated, they are considered as restricted latent class models and fitted to the data, and the BIC is used to choose among them. This article adds additional information about the relationship between KST and LCA. It gives a more comprehensive overview of the literature and the probabilistic models that are at the interface of KST and LCA. KST and LCA are also compared with regard to parameter estimation and model testing methodologies applied in their fields. This article concludes with an overview of KST-related publications addressing the outlined connection and presents further remarks about possible future research arising from a connection of KST to other latent variable modeling approaches.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2011-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Simulation Procedure for the Generation of Samples to Evaluate Goodness-of-Fit Indices in Item Response Theory Models
Edixon J. Chacón, Jesús M. Alvarado, C. Santisteban
Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 2011. DOI: 10.1027/1614-2241/A000022

Abstract: The LISREL 8.8/PRELIS 2.81 program can carry out ordinal factor analysis (OFA command) with full-information maximum likelihood methods in a data set containing n samples obtained by simulation. Nevertheless, when the number of replications is greater than 1, an error is produced, which prevents obtaining solutions for the normal (NOR) and logistic (POM) functions. This paper proposes a new data-simulation procedure in PRELIS-LISREL. The procedure permits the generation of n replications and the calculation of goodness-of-fit (GOF) indices for item response theory (IRT) models in each replication, thus allowing the execution of the OFA command in Monte Carlo simulations. Solutions from the underlying-variable approach (weighted least squares, WLS, estimation) and the IRT approach are compared.
{"title":"A Systematic Literature Review of the Applications of Q-Technique and Its Methodology","authors":"Fiona Dziopa, K. Ahern","doi":"10.1027/1614-2241/A000021","DOIUrl":"https://doi.org/10.1027/1614-2241/A000021","url":null,"abstract":"Q-methodology is a technique incorporating the benefits of both qualitative and quantitative research. Q-method involves Q-sorting, a method of data collection and factor analysis, to assess subjective (qualitative) information. The use of Q-sorting and factor analysis has often resulted in the misconception that Q-methodology involves psychometric or quantitative assessment, although Q as a methodology actually enables the systematic assessment of qualitative data. Misconceptions regarding Q have resulted in a heterogeneous collection of Q-applications in the extant literature, which has obscured the fundamental principles of Q-methodology. The purpose of this paper is to present a systematic review of Q-based research to investigate the criteria researchers have used to develop Q-studies. Published research studies between January 2008 and December 2008 that employed Q-techniques and methodology were assessed. Data were extracted and synthesized through the development and use of the Assessment and Revi...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2011-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1027/1614-2241/A000021","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probability-Based and Measurement- Related Hypotheses With Full Restriction for Investigations by Means of Confirmatory Factor Analysis An Example From Cognitive Psychology","authors":"K. Schweizer","doi":"10.1027/1614-2241/A000033","DOIUrl":"https://doi.org/10.1027/1614-2241/A000033","url":null,"abstract":"Probability-based and measurement-related hypotheses for confirmatory factor analysis of repeated-measures data are investigated. Such hypotheses comprise precise assumptions concerning the relationships among the true components associated with the levels of the design or the items of the measure. Measurement-related hypotheses concentrate on the assumed processes, as, for example, transformation and memory processes, and represent treatment-dependent differences in processing. In contrast, probability-based hypotheses provide the opportunity to consider probabilities as outcome predictions that summarize the effects of various influences. The prediction of performance guided by inexact cues serves as an example. In the empirical part of this paper probability-based and measurement-related hypotheses are applied to working-memory data. Latent variables according to both hypotheses contribute to a good model fit. The best model fit is achieved for the model including latent variables that represented seri...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is It Really Robust?
Emanuel Schmider, M. Ziegler, Erik Danay, Luzi Beyer, M. Bühner
Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 2010. DOI: 10.1027/1614-2241/A000016

Abstract: Empirical evidence for the robustness of the analysis of variance (ANOVA) to violation of the normality assumption is presented by means of Monte Carlo methods. High-quality samples from normally, rectangularly, and exponentially distributed populations are created by drawing samples of random numbers from the respective generators, checking their goodness of fit, and allowing only the best 10% to take part in the investigation. A one-way fixed-effect design with three groups of 25 values each is chosen. Effect sizes are implemented in the samples and varied over a broad range. Comparing the outcomes of the ANOVA calculations for the different types of distributions gives reason to regard the ANOVA as robust. Both the empirical type I error α and the empirical type II error β remain constant under violation. Moreover, regression analysis identifies the factor “type of distribution” as not significant in explaining the ANOVA results.