You are what you click: using machine learning to model trace data for psychometric measurement

R. Landers, Elena M. Auer, Gabriel Mersy, Sebastian Marin, Jason Blaik

International Journal of Testing, published 2022-10-02 (Journal Article)
DOI: https://doi.org/10.1080/15305058.2022.2134394
Impact factor: 1.0; JCR: Q2 (Social Sciences, Interdisciplinary)
Citations: 0
Abstract
Assessment trace data, such as mouse positions and their timing, offer interesting and provocative reflections of individual differences yet are currently underutilized by testing professionals. In this article, we present a 10-step procedure to maximize the probability that a trace data modeling project will be successful: 1) grounding the project in psychometric theory, 2) building technical infrastructure to collect trace data, 3) designing a useful developmental validation study, 4) using a holdout validation approach with collected data, 5) using exploratory analysis to conduct meaningful feature engineering, 6) identifying useful machine learning algorithms to predict a thoughtfully chosen criterion, 7) engineering a machine learning model with meaningful internal cross-validation and hyperparameter selection, 8) conducting model diagnostics to assess if the resulting model is overfitted, underfitted, or within acceptable tolerance, and 9) testing the success of the final model in meeting conceptual, technical, and psychometric goals. If deemed successful, trace data model predictions could then be engineered into decision-making systems. We present this framework within the broader view of psychometrics, exploring the challenges of developing psychometrically valid models using such complex data with much weaker trait signals than assessment developers have typically attempted to model.
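The modeling core of the procedure (steps 4, 6, 7, and 8) can be sketched in code. The following is an illustrative minimal sketch only, not the authors' implementation: it uses scikit-learn on synthetic stand-in data, with the feature matrix, the random-forest algorithm, and the hyperparameter grid all chosen here for illustration rather than taken from the paper.

```python
# Illustrative sketch (not from the paper) of steps 4, 6-8 of the
# procedure: a holdout split, internal cross-validated hyperparameter
# selection, and a simple over/underfitting diagnostic.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Synthetic stand-ins: 500 test-takers, 20 engineered trace features
# (e.g., mouse-movement summaries), and a deliberately weak trait
# signal in the criterion, mirroring the paper's framing.
X = rng.normal(size=(500, 20))
y = 0.3 * X[:, 0] + rng.normal(scale=1.0, size=500)

# Step 4: holdout validation -- reserve data untouched until the end.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Steps 6-7: choose an algorithm and tune hyperparameters with
# internal cross-validation on the training partition only.
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"max_depth": [2, 4, 8], "n_estimators": [100, 300]},
    cv=5, scoring="r2")
search.fit(X_train, y_train)

# Step 8: model diagnostics -- compare training fit to holdout fit.
# A large train-holdout gap suggests overfitting; uniformly low
# scores on both partitions suggest underfitting.
train_r2 = r2_score(y_train, search.predict(X_train))
hold_r2 = r2_score(y_hold, search.predict(X_hold))
print(f"train R^2={train_r2:.2f}, holdout R^2={hold_r2:.2f}")
```

The key design point the sketch preserves is that hyperparameter selection happens entirely inside the training partition's cross-validation folds, so the holdout score remains an honest estimate for the final diagnostic comparison.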