{"title":"Interactive Parallel Models: No Virginia, Violation of Miller's Race Inequality does not Imply Coactivation and Yes Virginia, Context Invariance is Testable","authors":"J. Townsend, Yanjun Liu, Ru Zhang, M. Wenger","doi":"10.20982/tqmp.16.2.p192","DOIUrl":"https://doi.org/10.20982/tqmp.16.2.p192","url":null,"abstract":"One vein of our research on psychological systems has focused on parallel processing models in disjunctive (OR) and conjunctive (AND) stopping-rule designs. One branch of that research has emphasized that a common strategy of inference in OR situations is logically flawed. That strategy equates a violation of the popular Miller race bound with a coactive parallel system. Pointedly, Townsend & Nozawa (1997) revealed that even processing systems associated with extreme limited capacity are capable of violating that bound. With regard to the present investigation, previous theoretical work has proven that interactive parallel models with separate decision criteria on each channel can readily evoke capacity sufficiently super to violate that bound (e.g., Colonius & Townsend, 1997; Townsend & Nozawa, 1995; Townsend & Wenger, 2004). In addition, we have supplemented the usual OR task with an AND task to seek greater testability of architectural, decisional, and capacity mechanisms (e.g., Eidels et al., 2011; Eidels et al., 2015). The present study presents a broad meta-theoretical structure within which the past and new theoretical results are embedded. We further exploit the broad class of stochastic linear systems and discover that interesting classical results from Colonius (1990) can be given an elegant process interpretation within that class. In addition, we learn that conjoining OR with AND data affords an experimental test of the crucial assumption of context invariance, long thought to be untestable.","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46975147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evidence accumulation models with R: A practical guide to hierarchical Bayesian methods","authors":"Yi Lin, L. Strickland","doi":"10.20982/TQMP.16.2.P133","DOIUrl":"https://doi.org/10.20982/TQMP.16.2.P133","url":null,"abstract":"Evidence accumulation models are a useful tool that allows researchers to investigate the latent cognitive variables underlying response time and response accuracy. However, applying evidence accumulation models can be difficult because they lack easily computable forms. Numerical methods are required to determine the parameters of evidence accumulation that best fit the data. When applied to complex cognitive models, such numerical methods can require substantial computational power, which can lead to infeasibly long compute times. In this paper, we provide efficient, practical software and a step-by-step guide to fit evidence accumulation models with Bayesian methods. The software, written in C++, is provided in an R package: 'ggdmc'. The software incorporates three important ingredients of Bayesian computation: (1) the likelihood functions of two common response time models, (2) the Markov chain Monte Carlo (MCMC) algorithm, and (3) a population-based MCMC sampling method. The software has gone through stringent checks to be hosted on the Comprehensive R Archive Network (CRAN) and is free to download. We illustrate its basic use and give an example of fitting complex hierarchical Wiener diffusion models to four shooting-decision data sets.","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47989676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing Items and Measures: An Overview and Demonstration of the Kernel Smoothing Item Response Theory Technique","authors":"Gordana Rajlic","doi":"10.31234/osf.io/j3btw","DOIUrl":"https://doi.org/10.31234/osf.io/j3btw","url":null,"abstract":"Motivated by a renewed interest in exploratory data analysis and data visualization in psychology and the social sciences, the current demonstration was conducted to familiarize a broader audience of applied researchers with the benefits of an exploratory psychometric technique: kernel smoothing item response theory (KSIRT). A data-driven, nonparametric technique, KSIRT provides a visual representation of the characteristics of the items in a measure (scale or test) and offers convenient preliminary feedback about the functioning of the items and the measure in a particular research context. The technique can be a useful addition to the analytical toolkit of applied researchers who work with a range of measures, within the classical test theory or IRT framework, and is suitable for use with a smaller number of items or respondents compared to parametric IRT models. KSIRT is described and its use is demonstrated with a set of items from a psychological well-being measure. A recently developed, easy-to-use R package was utilized to perform the analyses, and the R code is included in the manuscript.","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47210014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Tutorial on Testing the Equality of Standardized Regression Coefficients in Structural Equation Models using Wald Tests with lavaan","authors":"E. Klopp","doi":"10.31234/osf.io/c6mjn","DOIUrl":"https://doi.org/10.31234/osf.io/c6mjn","url":null,"abstract":"Comparing the effects of two or more explanatory variables on a dependent variable in structural equation models, with either manifest or latent variables, may be hampered by the arbitrary metrics which are common in social sciences and psychology. A possible way to compare the effects is the comparison of standardized regression coefficients by means of the Wald test. In this tutorial, we show how a typical textbook display of the Wald test can be used to derive a calculation for standardized regression coefficients. Moreover, we demonstrate how this can be implemented in R using the lavaan package. Additionally, we provide a convenience function that allows doing a Wald test by only setting up equality constraints. We also discuss theoretical aspects and implications when hypotheses about the equality of standardized regression parameters in structural equation models are tested.","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48621608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to implement real-time interaction between participants in online surveys: A practical guide to SMARTRIQS","authors":"A. Molnár","doi":"10.31234/osf.io/6wmny","DOIUrl":"https://doi.org/10.31234/osf.io/6wmny","url":null,"abstract":"While online experimentation is becoming increasingly popular in psychology, implementing real-time interaction between participants in online studies remains uniquely challenging, as it requires researchers to have advanced programming skills or to purchase expensive third-party services. These challenges prevent many psychologists and other social scientists from utilizing online methods when they wish to study a vast array of social behaviors and interpersonal decision-making. SMARTRIQS is a free and open-source application that fills this crucial gap in contemporary experimental research methods. SMARTRIQS offers researchers the ability to design surveys that feature real-time interaction between participants—including live text chat—without requiring researchers to learn any programming language, install any software, or pay for any third-party services. This paper provides researchers with a practical guide to SMARTRIQS and shows them how to turn regular (non-interactive) online surveys into fully interactive experiments. The paper not only provides a comprehensive guide to designing interactive studies in SMARTRIQS in general but also walks readers through step-by-step instructions for setting up a particular study (a Dictator Game with chat). The tutorial starts from the very basics, assuming no prior expertise in online experimentation, and is accessible to everyone, even those who are less—or not at all—familiar with Qualtrics.","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41334538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An LBA account of decisions in the multiple object tracking task","authors":"R. Innes, Caroline L. Kuhne","doi":"10.31234/osf.io/hkj2g","DOIUrl":"https://doi.org/10.31234/osf.io/hkj2g","url":null,"abstract":"Decision making is a vital aspect of our everyday functioning, from simple perceptual demands to more complex and meaningful decisions. The strategy adopted to make such decisions is often viewed as balancing elements of speed and caution, i.e., making fast or careful decisions. Using sequential sampling models to analyse decision-making data can allow us to tease apart strategic differences, such as being more or less cautious, from processing differences, which would otherwise be indistinguishable in behavioural data. Our study used a multiple object tracking task in which student participants and a highly skilled military group (Royal Australian Air Force, RAAF) were compared on their ability to track several items at once. Using a mathematical model of decision making (the linear ballistic accumulator), we show the underpinnings of how the two groups differ in performance. Results showed a large difference between the groups on accuracy, with the RAAF group outperforming students. An interaction effect was observed between group and level of difficulty in response times, where RAAF response times slowed at a greater rate than the student group's as difficulty increased. Model results indicated that the RAAF personnel were more cautious in their decisions than students and had faster processing in some conditions. Our study shows the strength of sequential sampling models, as well as providing a first attempt at fitting a sequential sampling model to data from a multiple object tracking task.","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42677225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Longitudinal item response modeling and posterior predictive checking in {R} and {Stan}","authors":"A. Scharl, Timo Gnambs","doi":"10.20982/tqmp.15.2.p075","DOIUrl":"https://doi.org/10.20982/tqmp.15.2.p075","url":null,"abstract":"Item response theory is widely used in a variety of research fields. Among others, it is the de facto standard for test development and calibration in educational large-scale assessments. In this context, longitudinal modeling is of great importance to examine developmental trajectories in competences and to identify predictors of academic success. Therefore, this paper describes various multidimensional item response models that can be used in a longitudinal setting and shows how to estimate change in a Bayesian framework using the statistical software Stan. Moreover, model evaluation techniques such as the widely applicable information criterion and posterior predictive checking with several discrepancy measures suited for Bayesian item response modeling are presented. Finally, an empirical application is described that examines change in mathematical competence between grades 5 and 7 for N = 1,371 German students using a Bayesian longitudinal item response model.","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47525281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Area of Resilience to Stress Event ({ARSE}): A New Method for Quantifying the Process of Resilience","authors":"Nathaniel J. Ratcliff, Devika T. Mahoney-Nair, Joshua Goldstein","doi":"10.20982/tqmp.15.2.p148","DOIUrl":"https://doi.org/10.20982/tqmp.15.2.p148","url":null,"abstract":"Research on resilience has been wide-ranging in terms of academic disciplines, outcomes of interest, and levels of analysis. However, given the broad nature of the resilience literature, resilience has been a difficult construct to assess and measure. In the current article, a new method for directly quantifying the resilience process across time is presented, based on a foundational conceptual definition derived from the existing resilience literature. The Area of Resilience to Stress Event (ARSE) method utilizes the area created, across time, from deviations of a given baseline following a stress event (i.e., area under the curve). Using an accompanying R package ('arse') to calculate ARSE, this approach allows researchers a new method of examining resilience for any number of variables of interest. A step-by-step tutorial for this new method is also provided in an appendix.","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46829697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RSE-box: An analysis and modelling package to study response times to multiple signals","authors":"Thomas U. Otto","doi":"10.20982/tqmp.15.2.p112","DOIUrl":"https://doi.org/10.20982/tqmp.15.2.p112","url":null,"abstract":"This work was supported by the Biotechnology and Biological Sciences Research Council (BBSRC, grant number: BB/N010108/1).","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48672199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A tutorial on how to compute traditional IAT effects with {R}","authors":"Jessica Röhner, Philipp Thoss","doi":"10.20982/tqmp.15.2.p134","DOIUrl":"https://doi.org/10.20982/tqmp.15.2.p134","url":null,"abstract":"The Implicit Association Test (IAT) is the most frequently used and the most popular measure for assessing implicit associations across a large variety of psychological constructs. Altogether, 10 algorithms have been suggested by the founders of the IAT to compute what can be called the traditional IAT effects (i.e., the six D measures: D1, D2, D3, D4, D5, D6, and the four conventional measures [C measures]: C1, C2, C3, C4). Researchers can decide which IAT effect they want to use, whereby the use of D measures is recommended on the basis of their properties. In this tutorial, we explain the background of the 10 traditional IAT effects and their mathematical details. We also present R code as well as example data so that readers can easily compute all of the traditional IAT effects. Last but not least, we present example outputs to illustrate what the results might look like.","PeriodicalId":93055,"journal":{"name":"The quantitative methods for psychology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41710470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}