{"title":"Effect sizes in ANCOVA and difference-in-differences designs","authors":"Larry V. Hedges, Elizabeth Tipton, Rrita Zejnullahi, Karina G. Diaz","doi":"10.1111/bmsp.12296","DOIUrl":"10.1111/bmsp.12296","url":null,"abstract":"<p>It is common practice in both randomized and quasi-experiments to adjust for baseline characteristics when estimating the average effect of an intervention. The inclusion of a pre-test, for example, can reduce both the standard error of this estimate and—in non-randomized designs—its bias. At the same time, it is also standard to report the effect of an intervention in standardized effect size units, thereby making it comparable to other interventions and studies. Curiously, the estimation of this effect size, including covariate adjustment, has received little attention. In this article, we provide a framework for defining effect sizes in designs with a pre-test (e.g., difference-in-differences and analysis of covariance) and propose estimators of those effect sizes. The estimators and approximations to their sampling distributions are evaluated using a simulation study and then demonstrated using an example from published data.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":"76 2","pages":"259-282"},"PeriodicalIF":2.6,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9254019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Which method delivers greater signal-to-noise ratio: Structural equation modelling or regression analysis with weighted composites?","authors":"Ke-Hai Yuan, Yongfei Fang","doi":"10.1111/bmsp.12293","DOIUrl":"10.1111/bmsp.12293","url":null,"abstract":"<p>Observational data typically contain measurement errors. Covariance-based structural equation modelling (CB-SEM) is capable of modelling measurement errors and yields consistent parameter estimates. In contrast, methods of regression analysis using weighted composites, as well as a partial least squares approach to SEM, facilitate the prediction and diagnosis of individuals/participants. But regression analysis with weighted composites has been known to yield attenuated regression coefficients when predictors contain errors. Contrary to the common belief that CB-SEM is the preferred method for the analysis of observational data, this article shows that regression analysis via weighted composites yields parameter estimates with much smaller standard errors, and thus corresponds to greater values of the signal-to-noise ratio (SNR). In particular, the SNR for the regression coefficient via the least squares (LS) method with equally weighted composites is mathematically greater than that by CB-SEM if the items for each factor are parallel, even when the SEM model is correctly specified and estimated by an efficient method. Analytical, numerical and empirical results also show that LS regression using weighted composites performs as well as or better than the normal maximum likelihood method for CB-SEM under many conditions, even when the population distribution is multivariate normal. Results also show that the LS regression coefficients become more efficient when the sampling errors in the composite weights are taken into account than when estimation is conditional on the weights.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":"76 3","pages":"646-678"},"PeriodicalIF":2.6,"publicationDate":"2022-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41180529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Empirical indistinguishability: From the knowledge structure to the skills","authors":"Andrea Spoto, Luca Stefanutti","doi":"10.1111/bmsp.12291","DOIUrl":"10.1111/bmsp.12291","url":null,"abstract":"<p>Recent literature has pointed out that the basic local independence model (BLIM), when applied to some specific instances of knowledge structures, presents identifiability issues. Furthermore, it has been shown that for such instances the model presents a stronger form of unidentifiability, named empirical indistinguishability, under which the existence of certain knowledge states in such structures cannot be empirically tested. In this article the notion of indistinguishability is extended to skill maps and, more generally, to competence-based knowledge space theory. Theoretical results are provided showing that skill maps can be empirically indistinguishable from one another. The most relevant consequence of this is that for some skills there is no empirical evidence to establish their existence. This result is strictly tied to the type of probabilistic model investigated, which is essentially the BLIM. Alternative models may exist or can be developed in knowledge space theory for which this indistinguishability problem disappears.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":"76 2","pages":"312-326"},"PeriodicalIF":2.6,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/bmsp.12291","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9254578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A note on the use of rank-ordered logit models for ordered response categories","authors":"Timothy R. Johnson","doi":"10.1111/bmsp.12292","DOIUrl":"10.1111/bmsp.12292","url":null,"abstract":"<p>Models for rankings have been shown to produce more efficient estimators than comparable models for first/top choices. The discussions and applications of these models typically only consider unordered alternatives. But these models can be usefully adapted to the case where a respondent ranks a set of alternatives that are ordered response categories. This paper proposes eliciting a rank order that is consistent with the ordering of the response categories, and then modelling the observed rankings using a variant of the rank-ordered logit model in which the distribution of rankings has been truncated to the set of admissible rankings. This yields lower standard errors than when respondents select only a single top category. Moreover, the restrictions on the set of admissible rankings reduce the number of decisions respondents need to make in comparison to ranking a set of unordered alternatives. Simulation studies and application examples featuring models based on a stereotype regression model and a rating scale item response model are provided to demonstrate the utility of this approach.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":"76 1","pages":"236-256"},"PeriodicalIF":2.6,"publicationDate":"2022-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10510690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subtask analysis of process data through a predictive model","authors":"Zhi Wang, Xueying Tang, Jingchen Liu, Zhiliang Ying","doi":"10.1111/bmsp.12290","DOIUrl":"10.1111/bmsp.12290","url":null,"abstract":"<p>Response process data collected from human–computer interactive items contain detailed information about respondents' behavioural patterns and cognitive processes. Such data are valuable sources for analysing respondents' problem-solving strategies. However, the irregular data format and the complex structure make standard statistical tools difficult to apply. This article develops a computationally efficient method for exploratory analysis of such process data. The new approach segments a lengthy individual process into a sequence of short subprocesses to achieve complexity reduction, easy clustering and meaningful interpretation. Each subprocess is considered a subtask. The segmentation is based on sequential action predictability using a parsimonious predictive model combined with the Shannon entropy. Simulation studies are conducted to assess the performance of the new method. We use a case study of PIAAC 2012 to demonstrate how exploratory analysis for process data can be carried out with the new approach.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":"76 1","pages":"211-235"},"PeriodicalIF":2.6,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9075644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two efficient selection methods for high-dimensional CD-CAT utilizing max-marginals factor from MAP query and ensemble learning approach","authors":"Fen Luo, Xiaoqing Wang, Yan Cai, Dongbo Tu","doi":"10.1111/bmsp.12288","DOIUrl":"10.1111/bmsp.12288","url":null,"abstract":"<p>Computerized adaptive testing for cognitive diagnosis (CD-CAT) needs to be efficient and responsive in real time to meet the requirements of practical applications. For high-dimensional data, the number of categories to be recognized in a test grows exponentially as the number of attributes increases, which can make system response times too long, adversely affecting examinees and seriously reducing measurement efficiency. More importantly, the long CPU times and heavy memory usage of item selection in CD-CAT due to intensive computation are impractical and cannot wholly meet practical needs. This paper proposes two new efficient selection strategies (HIA and CEL) for high-dimensional CD-CAT that address this issue by incorporating the max-marginals from the maximum a posteriori query and by integrating the ensemble learning approach into previous efficient selection methods, respectively. The performance of the proposed selection methods was compared with conventional selection methods using simulated and real item pools. The results showed that the proposed methods could significantly improve measurement efficiency, requiring about 1/2–1/200 of the conventional methods' computation time while retaining similar measurement accuracy. With increasing numbers of attributes and larger item pools, the computation time advantage of the proposed methods becomes more significant.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":"76 2","pages":"283-311"},"PeriodicalIF":2.6,"publicationDate":"2022-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9254138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new goodness-of-fit measure for probit models: Surrogate R2","authors":"Dungang Liu, Xiaorui Zhu, Brandon Greenwell, Zewei Lin","doi":"10.1111/bmsp.12289","DOIUrl":"https://doi.org/10.1111/bmsp.12289","url":null,"abstract":"<p>Probit models are used extensively for inferential purposes in the social sciences, as discrete data are prevalent in a vast body of social studies. Among many accompanying model inference problems, a critical question remains unsettled: how to develop a goodness-of-fit measure that resembles the ordinary least squares (OLS) <i>R</i><sup>2</sup> used for linear models. Such a measure has long been sought to achieve ‘comparability’ of different empirical models across multiple samples addressing similar social questions. To this end, we propose a novel <i>R</i><sup>2</sup> measure for probit models using the notion of surrogacy – simulating a continuous variable <math><semantics><mrow><mi>S</mi></mrow></semantics></math> as a <i>surrogate</i> of the original discrete response (Liu & Zhang, 2018, <i>Journal of the American Statistical Association</i>, 113, 845). The proposed <i>R</i><sup>2</sup> is the proportion of the variance of the surrogate response explained by explanatory variables through a <i>linear model</i>, and we call it a surrogate <i>R</i><sup>2</sup>. This paper shows both theoretically and numerically that the surrogate <i>R</i><sup>2</sup> approximates the OLS <i>R</i><sup>2</sup> based on the latent continuous variable, preserves the interpretation of explained variation, and maintains monotonicity between nested models. As no other pseudo <i>R</i><sup>2</sup>, McKelvey and Zavoina's and McFadden's included, can meet all three criteria simultaneously, our measure fills this crucial void in probit model inference.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":"76 1","pages":"192-210"},"PeriodicalIF":2.6,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/bmsp.12289","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50136106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Penalization approaches in the conditional maximum likelihood and Rasch modelling context","authors":"Can Gürer, Clemens Draxler","doi":"10.1111/bmsp.12287","DOIUrl":"10.1111/bmsp.12287","url":null,"abstract":"<p>Recent detection methods for differential item functioning (DIF) include approaches like Rasch trees, DIFlasso, GPCMlasso and item-focussed trees, all of which, in contrast to well-established methods, can handle metric covariates inducing DIF. A new estimation method addresses their downsides by combining three central virtues: the use of the conditional likelihood for estimation, the incorporation of linear influence of metric covariates on item difficulty, and the possibility of detecting different DIF types: certain items showing DIF, certain covariates inducing DIF, or certain covariates inducing DIF in certain items. Each of the approaches mentioned lacks two of these aspects. We introduce a method for DIF detection which, first, utilizes the conditional likelihood for estimation combined with group lasso penalization for item or variable selection and L1 penalization for interaction selection; second, incorporates linear effects instead of approximation through step functions; and third, provides the possibility of investigating any of the three DIF types. The method is described theoretically, and challenges in implementation are discussed. A dataset is analysed for all DIF types and shows comparable results between methods. Simulation studies per DIF type reveal competitive performance of cmlDIFlasso, particularly when selecting interactions in the case of large sample sizes and numbers of parameters. Coupled with low computation times, cmlDIFlasso seems a worthwhile option for applied DIF detection.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":"76 1","pages":"154-191"},"PeriodicalIF":2.6,"publicationDate":"2022-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10861048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ordinal state-trait regression for intensive longitudinal data","authors":"Prince P. Osei, Philip T. Reiss","doi":"10.1111/bmsp.12285","DOIUrl":"10.1111/bmsp.12285","url":null,"abstract":"<p>In many psychological studies, in particular those conducted by experience sampling, mental states are measured repeatedly for each participant. Such a design allows for regression models that separate between- from within-person, or trait-like from state-like, components of association between two variables. But these models are typically designed for continuous variables, whereas mental state variables are most often measured on an ordinal scale. In this paper we develop a model for disaggregating between- from within-person effects of one ordinal variable on another. As in standard ordinal regression, our model posits a continuous latent response whose value determines the observed response. We allow the latent response to depend nonlinearly on the trait and state variables, but impose a novel penalty that shrinks the fit towards a linear model on the latent scale. A simulation study shows that this penalization approach is effective at finding a middle ground between an overly restrictive linear model and an overfitted nonlinear model. The proposed method is illustrated with an application to data from the experience sampling study of Baumeister et al. (2020, <i>Personality and Social Psychology Bulletin</i>, 46, 1631).</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":"76 1","pages":"1-19"},"PeriodicalIF":2.6,"publicationDate":"2022-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://bpspsychub.onlinelibrary.wiley.com/doi/epdf/10.1111/bmsp.12285","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9279382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}