Kenneth McClure, Brooke A Ammerman, Ross Jacobucci
{"title":"关于选择用于临床预测的项目分数或综合分数。","authors":"Kenneth McClure, Brooke A Ammerman, Ross Jacobucci","doi":"10.1080/00273171.2023.2292598","DOIUrl":null,"url":null,"abstract":"<p><p>Recent shifts to prioritize prediction, rather than explanation, in psychological science have increased applications of predictive modeling methods. However, composite predictors, such as sum scores, are still commonly used in practice. The motivations behind composite test scores are largely intertwined with reducing the influence of measurement error in answering explanatory questions. But this may be detrimental for predictive aims. The present paper examines the impact of utilizing composite or item-level predictors in linear regression. A mathematical examination of the bias-variance decomposition of prediction error in the presence of measurement error is provided. It is shown that prediction bias, which may be exacerbated by composite scoring, drives prediction error for linear regression. This may be particularly salient when composite scores are comprised of heterogeneous items such as in clinical scales where items correspond to symptoms. With sufficiently large training samples, the increased prediction variance associated with item scores becomes negligible even when composite scores are sufficient. Practical implications of predictor scoring are examined in an empirical example predicting suicidal ideation from various depression scales. Results show that item scores can markedly improve prediction particularly for symptom-based scales. Cross-validation methods can be used to empirically justify predictor scoring decisions.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"566-583"},"PeriodicalIF":5.3000,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the Selection of Item Scores or Composite Scores for Clinical Prediction.\",\"authors\":\"Kenneth McClure, Brooke A Ammerman, Ross Jacobucci\",\"doi\":\"10.1080/00273171.2023.2292598\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Recent shifts to prioritize prediction, rather than explanation, in psychological science have increased applications of predictive modeling methods. However, composite predictors, such as sum scores, are still commonly used in practice. The motivations behind composite test scores are largely intertwined with reducing the influence of measurement error in answering explanatory questions. But this may be detrimental for predictive aims. The present paper examines the impact of utilizing composite or item-level predictors in linear regression. A mathematical examination of the bias-variance decomposition of prediction error in the presence of measurement error is provided. It is shown that prediction bias, which may be exacerbated by composite scoring, drives prediction error for linear regression. This may be particularly salient when composite scores are comprised of heterogeneous items such as in clinical scales where items correspond to symptoms. With sufficiently large training samples, the increased prediction variance associated with item scores becomes negligible even when composite scores are sufficient. Practical implications of predictor scoring are examined in an empirical example predicting suicidal ideation from various depression scales. 
Results show that item scores can markedly improve prediction particularly for symptom-based scales. Cross-validation methods can be used to empirically justify predictor scoring decisions.</p>\",\"PeriodicalId\":53155,\"journal\":{\"name\":\"Multivariate Behavioral Research\",\"volume\":\" \",\"pages\":\"566-583\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Multivariate Behavioral Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1080/00273171.2023.2292598\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/2/27 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multivariate Behavioral Research","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1080/00273171.2023.2292598","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/2/27 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
On the Selection of Item Scores or Composite Scores for Clinical Prediction.
Recent shifts to prioritize prediction, rather than explanation, in psychological science have increased applications of predictive modeling methods. However, composite predictors, such as sum scores, are still commonly used in practice. The motivations behind composite test scores are largely intertwined with reducing the influence of measurement error when answering explanatory questions, but this may be detrimental to predictive aims. The present paper examines the impact of utilizing composite or item-level predictors in linear regression. A mathematical examination of the bias-variance decomposition of prediction error in the presence of measurement error is provided. It is shown that prediction bias, which may be exacerbated by composite scoring, drives prediction error for linear regression. This may be particularly salient when composite scores are composed of heterogeneous items, such as in clinical scales where items correspond to symptoms. With sufficiently large training samples, the increased prediction variance associated with item scores becomes negligible even when composite scores are sufficient. Practical implications of predictor scoring are examined in an empirical example predicting suicidal ideation from various depression scales. Results show that item scores can markedly improve prediction, particularly for symptom-based scales. Cross-validation methods can be used to empirically justify predictor scoring decisions.
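In the standard decomposition the abstract refers to, expected squared prediction error splits into squared bias, prediction variance, and irreducible error, and cross-validation estimates that total error directly, which is why it can adjudicate between scoring choices. The following is a minimal Python sketch (not the authors' code) of such a comparison: it simulates item-level data with heterogeneous item-outcome relations and then compares 10-fold cross-validated mean squared error for item scores versus a unit-weighted sum score in linear regression. The sample size, item count, and simulated data are illustrative assumptions only.

    # Minimal sketch: compare item-level vs. composite (sum-score) predictors
    # via cross-validated prediction error. Data are simulated for illustration.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n, k = 500, 9                                   # hypothetical sample size and item count
    items = rng.normal(size=(n, k))                 # item-level predictors (e.g., symptom ratings)
    weights = rng.uniform(0.2, 1.0, size=k)         # heterogeneous item-outcome relations
    y = items @ weights + rng.normal(scale=1.0, size=n)
    composite = items.sum(axis=1, keepdims=True)    # unit-weighted sum score

    model = LinearRegression()
    mse_items = -cross_val_score(model, items, y, cv=10,
                                 scoring="neg_mean_squared_error").mean()
    mse_composite = -cross_val_score(model, composite, y, cv=10,
                                     scoring="neg_mean_squared_error").mean()
    print(f"10-fold CV MSE, item scores:     {mse_items:.3f}")
    print(f"10-fold CV MSE, composite score: {mse_composite:.3f}")

Under heterogeneous item weights as simulated here, the item-level model is expected to show lower cross-validated error; with a large enough training sample, the extra variance from estimating one coefficient per item is negligible, consistent with the abstract's argument.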
Journal introduction:
Multivariate Behavioral Research (MBR) publishes a variety of substantive, methodological, and theoretical articles in all areas of the social and behavioral sciences. Most MBR articles fall into one of two categories. Substantive articles report on applications of sophisticated multivariate research methods to study topics of substantive interest in personality, health, intelligence, industrial/organizational, and other behavioral science areas. Methodological articles present and/or evaluate new developments in multivariate methods, or address methodological issues in current research. We also encourage submission of integrative articles related to pedagogy involving multivariate research methods, and to historical treatments of interest and relevance to multivariate research methods.