{"title":"Inferences of associated latent variables by the observable test scores.","authors":"Rudy Ligtvoet","doi":"10.1111/bmsp.70002","DOIUrl":"https://doi.org/10.1111/bmsp.70002","url":null,"abstract":"<p><p>Test scores, like the sum score, can be useful for making inferences about the latent variables. The conditions under which such test scores allow for inferences of the latent variables based on a \"weaker\" stochastic ordering are generalized to any monotone latent variable model for which the latent variables are associated. The generality of these conditions places the sum score, or indeed any test score, well beyond a mere intuitive measure or a relic from classical test theory.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144327806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing the validity of instrumental variables in just-identified linear non-Gaussian models.","authors":"Wolfgang Wiedermann, Dexin Shi","doi":"10.1111/bmsp.70000","DOIUrl":"https://doi.org/10.1111/bmsp.70000","url":null,"abstract":"<p><p>Instrumental variable (IV) estimation constitutes a powerful quasi-experimental tool to estimate causal effects in observational data. The IV approach, however, rests on two crucial assumptions-the instrument relevance assumption and the exclusion restriction assumption. The latter requirement (stating that the IV is not allowed to be related to the outcome via any path other than the one going through the predictor), cannot be empirically tested in just-identified models (i.e. models with as many IVs as predictors). The present study introduces properties of non-Gaussian IV models which enable one to test whether hidden confounding between an IV and the outcome is present. Detecting exclusion restriction violations due to a direct path between the IV and the outcome, however, is restricted to the over-identified case. Based on these insights, a two-step approach is presented to test IV validity against hidden confounding in just-identified models. The performance of the approach was evaluated using Monte-Carlo simulation experiments. An empirical example from psychological research is given to illustrate the approach in practice. Recommendations for best-practice applications and future research directions are discussed. Although the current study presents important insights for developing diagnostic procedures for IV models, sound universal IV validation in the just-identified case remains a challenging task.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144310869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New developments in experience sampling methodology.","authors":"Francis Tuerlinckx, Peter Kuppens, Sigert Ariens, Leonie Cloos, Egon Dejonckheere, Ginette Lafit, Koen Niemeijer, Jordan Revol, Evelien Schat, Marieke Schreuder, Niels Vanhasbroeck, Eva Ceulemans","doi":"10.1111/bmsp.12398","DOIUrl":"https://doi.org/10.1111/bmsp.12398","url":null,"abstract":"<p><p>Experience Sampling Methodology (ESM) has been widely used over the past decades to study feelings, behaviour and thoughts as they occur in daily life. Typically, participants complete several assessments per day via a smartphone for multiple days. The growing adoption of ESM has spurred a number of methodological advancements. In this paper, we provide an overview of recent developments in ESM design, statistical analysis and implementation. In terms of design, we discuss considerations around what to measure-including the reliability and validity of self-report measures as well as mobile sensing-as well as when to measure, where we focus on the pros and cons of burst designs and advances in sample size planning methodology. Regarding statistical analysis, we highlight non-linear models, survival analysis for understanding time-to-event data and real-time monitoring of ESM time series. At the implementation level, we address open science practices and advances in data preprocessing. Although most of the topics discussed in this paper are generic, many of the examples are focused on the study of affect in daily life.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144259430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Keeping Elo alive: Evaluating and improving measurement properties of learning systems based on Elo ratings.","authors":"Maria Bolsinova, Bence Gergely, Matthieu J S Brinkhuis","doi":"10.1111/bmsp.12395","DOIUrl":"https://doi.org/10.1111/bmsp.12395","url":null,"abstract":"<p><p>The Elo Rating System which originates from competitive chess has been widely utilised in large-scale online educational applications where it is used for on-the-fly estimation of ability, item calibration, and adaptivity. In this paper, we aim to critically analyse the shortcomings of the Elo rating system in an educational context, shedding light on its measurement properties and when these may fall short in accurately capturing student abilities and item difficulties. In a simulation study, we look at the asymptotic properties of the Elo rating system. Our results show that the Elo ratings are generally not unbiased and their variances are context-dependent. Furthermore, in scenarios where items are selected adaptively based on the current ratings and the item difficulties are updated alongside the student abilities, the variance of the ratings across items and students artificially increases over time and as a result the ratings do not converge. We propose a solution to this problem which entails using two parallel chains of ratings which remove the dependence of item selection on the current errors in the ratings.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling non-linear psychological processes: Reviewing and evaluating non-parametric approaches and their applicability to intensive longitudinal data.","authors":"Jan I Failenschmid, Leonie V D E Vogelsmeier, Joris Mulder, Joran Jongerling","doi":"10.1111/bmsp.12397","DOIUrl":"https://doi.org/10.1111/bmsp.12397","url":null,"abstract":"<p><p>Psychological concepts are increasingly understood as complex dynamic systems that change over time. To study these complex systems, researchers are increasingly gathering intensive longitudinal data (ILD), revealing non-linear phenomena such as asymptotic growth, mean-level switching, and regulatory oscillations. However, psychological researchers currently lack advanced statistical methods that are flexible enough to capture these non-linear processes accurately, which hinders theory development. While methods such as local polynomial regression, Gaussian processes and generalized additive models (GAMs) exist outside of psychology, they are rarely applied within the field because they have not yet been reviewed accessibly and evaluated within the context of ILD. To address this important gap, this article introduces these three methods for an applied psychological audience. We further conducted a simulation study, which demonstrates that all three methods infer non-linear processes that have been found in ILD more accurately than polynomial regression. Particularly, GAMs closely captured the underlying processes, performing almost as well as the data-generating parametric models. Finally, we illustrate how GAMs can be applied to explore idiographic processes and identify potential phenomena in ILD. This comprehensive analysis empowers psychological researchers to model non-linear processes accurately and select a method that aligns with their data and research goals.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144180327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A path signature perspective of process data feature extraction.","authors":"Xueying Tang, Jingchen Liu, Zhiliang Ying","doi":"10.1111/bmsp.12390","DOIUrl":"https://doi.org/10.1111/bmsp.12390","url":null,"abstract":"<p><p>Computer-based interactive items have become prevalent in recent educational assessments. In such items, the entire human-computer interactive process is recorded in a log file and is known as the response process. These data are noisy, diverse, and in a nonstandard format. Several feature extraction methods have been developed to overcome the difficulties in process data analysis. However, these methods often focus on the action sequence and ignore the time sequence in response processes. In this paper, we introduce a new feature extraction method that incorporates the information in both the action sequence and the response time sequence. The method is based on the concept of path signature from stochastic analysis. We apply the proposed method to both simulated data and real response process data from PIAAC. A prediction framework is used to show that taking time information into account provides a more comprehensive understanding of respondents' behaviors.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A supervised learning approach to estimating IRT models in small samples.","authors":"Dmitry I Belov, Oliver Lüdtke, Esther Ulitzsch","doi":"10.1111/bmsp.12396","DOIUrl":"https://doi.org/10.1111/bmsp.12396","url":null,"abstract":"<p><p>Existing estimators of parameters of item response theory (IRT) models exploit the likelihood function. In small samples, however, the IRT likelihood oftentimes contains little informative value, potentially resulting in biased and/or unstable parameter estimates and large standard errors. To facilitate small-sample IRT estimation, we introduce a novel approach that does not rely on the likelihood. Our estimation approach derives features from response data and then maps the features to item parameters using a neural network (NN). We describe and evaluate our approach for the three-parameter logistic model; however, it is applicable to any model with an item characteristic curve. Three types of NNs are developed, supporting the obtainment of both point estimates and confidence intervals for IRT model parameters. The results of a simulation study demonstrate that these NNs perform better than Bayesian estimation using Markov chain Monte Carlo methods in terms of the quality of the point estimates and confidence intervals while also being much faster. These properties facilitate (1) pretesting items in a real-time testing environment, (2) pretesting more items and (3) pretesting items only in a secured environment to eradicate possible compromise of new items in online testing.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144082247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel nonvisual procedure for screening for nonstationarity in time series as obtained from intensive longitudinal designs.","authors":"Steffen Zitzmann, Christoph Lindner, Julian F Lohmann, Martin Hecht","doi":"10.1111/bmsp.12394","DOIUrl":"https://doi.org/10.1111/bmsp.12394","url":null,"abstract":"<p><p>Researchers working with intensive longitudinal designs often encounter the challenge of determining whether to relax the assumption of stationarity in their models. Given that these designs typically involve data from a large number of subjects ( <math> <semantics><mrow><mi>N</mi> <mo>≫</mo> <mn>1</mn></mrow> <annotation>$$ Ngg 1 $$</annotation></semantics> </math> ), visual screening all time series can quickly become tedious. Even when conducted by experts, such screenings can lack accuracy. In this article, we propose a nonvisual procedure that enables fast and accurate screening. This procedure has potential to become a widely adopted approach for detecting nonstationarity and guiding model building in psychology and related fields, where intensive longitudinal designs are used and time series data are collected.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A general diagnostic modelling framework for forced-choice assessments.","authors":"Pablo Nájera, Rodrigo S Kreitchmann, Scarlett Escudero, Francisco J Abad, Jimmy de la Torre, Miguel A Sorrel","doi":"10.1111/bmsp.12393","DOIUrl":"https://doi.org/10.1111/bmsp.12393","url":null,"abstract":"<p><p>Diagnostic classification modelling (DCM) is a family of restricted latent class models often used in educational settings to assess students' strengths and weaknesses. Recently, there has been growing interest in applying DCM to noncognitive traits in fields such as clinical and organizational psychology, as well as personality profiling. To address common response biases in these assessments, such as social desirability, Huang (2023, Educational and Psychological Measurement, 83, 146) adopted the forced-choice (FC) item format within the DCM framework, developing the FC-DCM. This model assumes that examinees with no clear preference for any statements in an FC block will choose completely at random. Additionally, the unique parametrization of the FC-DCM poses challenges for integration with established DCM frameworks in the literature. In the present study, we enhance the capabilities of DCM by introducing a general diagnostic framework for FC assessments. We present an adaptation of the G-DINA model to accommodate FC responses. Simulation results show that the G-DINA model provides accurate classifications, item parameter estimates and attribute correlations, outperforming the FC-DCM in realistic scenarios where item discrimination varies. A real FC assessment example further illustrates the better model fit of the G-DINA. Practical recommendations for using the FC format in diagnostic assessments of noncognitive traits are provided.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144035912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Score-based tests for parameter instability in ordinal factor models.","authors":"Franz Classe, Rudolf Debelak, Christoph Kern","doi":"10.1111/bmsp.12392","DOIUrl":"https://doi.org/10.1111/bmsp.12392","url":null,"abstract":"<p><p>We present a novel approach for computing model scores for ordinal factor models, that is, graded response models (GRMs) fitted with a limited information (LI) estimator. The method makes it possible to compute score-based tests for parameter instability for ordinal factor models. This way, rapid execution of numerous parameter instability tests for multidimensional item response theory (MIRT) models is facilitated. We present a comparative analysis of the performance of the proposed score-based tests for ordinal factor models in comparison to tests for GRMs fitted with a full information (FI) estimator. The new method has a good Type I error rate, high power and is computationally faster than FI estimation. We further illustrate that the proposed method works well with complex models in real data applications. The method is implemented in the lavaan package in R.</p>","PeriodicalId":55322,"journal":{"name":"British Journal of Mathematical & Statistical Psychology","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144052855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}