On the Selection of Item Scores or Composite Scores for Clinical Prediction.

IF 5.3 | CAS Zone 3 (Psychology) | JCR Q1 (Mathematics, Interdisciplinary Applications)
Multivariate Behavioral Research | Pub Date: 2024-05-01 | Epub Date: 2024-02-27 | DOI: 10.1080/00273171.2023.2292598
Kenneth McClure, Brooke A Ammerman, Ross Jacobucci
{"title":"关于选择用于临床预测的项目分数或综合分数。","authors":"Kenneth McClure, Brooke A Ammerman, Ross Jacobucci","doi":"10.1080/00273171.2023.2292598","DOIUrl":null,"url":null,"abstract":"<p><p>Recent shifts to prioritize prediction, rather than explanation, in psychological science have increased applications of predictive modeling methods. However, composite predictors, such as sum scores, are still commonly used in practice. The motivations behind composite test scores are largely intertwined with reducing the influence of measurement error in answering explanatory questions. But this may be detrimental for predictive aims. The present paper examines the impact of utilizing composite or item-level predictors in linear regression. A mathematical examination of the bias-variance decomposition of prediction error in the presence of measurement error is provided. It is shown that prediction bias, which may be exacerbated by composite scoring, drives prediction error for linear regression. This may be particularly salient when composite scores are comprised of heterogeneous items such as in clinical scales where items correspond to symptoms. With sufficiently large training samples, the increased prediction variance associated with item scores becomes negligible even when composite scores are sufficient. Practical implications of predictor scoring are examined in an empirical example predicting suicidal ideation from various depression scales. Results show that item scores can markedly improve prediction particularly for symptom-based scales. Cross-validation methods can be used to empirically justify predictor scoring decisions.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"566-583"},"PeriodicalIF":5.3000,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the Selection of Item Scores or Composite Scores for Clinical Prediction.\",\"authors\":\"Kenneth McClure, Brooke A Ammerman, Ross Jacobucci\",\"doi\":\"10.1080/00273171.2023.2292598\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Recent shifts to prioritize prediction, rather than explanation, in psychological science have increased applications of predictive modeling methods. However, composite predictors, such as sum scores, are still commonly used in practice. The motivations behind composite test scores are largely intertwined with reducing the influence of measurement error in answering explanatory questions. But this may be detrimental for predictive aims. The present paper examines the impact of utilizing composite or item-level predictors in linear regression. A mathematical examination of the bias-variance decomposition of prediction error in the presence of measurement error is provided. It is shown that prediction bias, which may be exacerbated by composite scoring, drives prediction error for linear regression. This may be particularly salient when composite scores are comprised of heterogeneous items such as in clinical scales where items correspond to symptoms. With sufficiently large training samples, the increased prediction variance associated with item scores becomes negligible even when composite scores are sufficient. Practical implications of predictor scoring are examined in an empirical example predicting suicidal ideation from various depression scales. 
Results show that item scores can markedly improve prediction particularly for symptom-based scales. Cross-validation methods can be used to empirically justify predictor scoring decisions.</p>\",\"PeriodicalId\":53155,\"journal\":{\"name\":\"Multivariate Behavioral Research\",\"volume\":\" \",\"pages\":\"566-583\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Multivariate Behavioral Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1080/00273171.2023.2292598\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/2/27 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multivariate Behavioral Research","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1080/00273171.2023.2292598","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/2/27 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract


Recent shifts to prioritize prediction, rather than explanation, in psychological science have increased applications of predictive modeling methods. However, composite predictors, such as sum scores, are still commonly used in practice. The motivations behind composite test scores are largely intertwined with reducing the influence of measurement error in answering explanatory questions. But this may be detrimental for predictive aims. The present paper examines the impact of utilizing composite or item-level predictors in linear regression. A mathematical examination of the bias-variance decomposition of prediction error in the presence of measurement error is provided. It is shown that prediction bias, which may be exacerbated by composite scoring, drives prediction error for linear regression. This may be particularly salient when composite scores are comprised of heterogeneous items such as in clinical scales where items correspond to symptoms. With sufficiently large training samples, the increased prediction variance associated with item scores becomes negligible even when composite scores are sufficient. Practical implications of predictor scoring are examined in an empirical example predicting suicidal ideation from various depression scales. Results show that item scores can markedly improve prediction particularly for symptom-based scales. Cross-validation methods can be used to empirically justify predictor scoring decisions.
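
The trade-off described above can be illustrated with a short simulation. The following Python sketch is a hypothetical illustration, not code from the paper: it generates items with heterogeneous true weights and added measurement error, fits one linear regression on the sum score and one on the item scores, and compares their 10-fold cross-validated mean squared error. All variable names, sample sizes, and data-generating values are assumptions chosen for the example.

    # Minimal sketch (assumed simulated data, scikit-learn API):
    # compare sum-score vs. item-level predictors via cross-validated MSE.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n, p = 500, 10                                     # sample size, number of items

    # Heterogeneous items: each item contributes a different true weight,
    # as with symptom items on a clinical scale.
    true_weights = rng.uniform(0.1, 1.0, size=p)
    latent = rng.normal(size=(n, p))                   # item-level true scores
    items = latent + rng.normal(scale=0.5, size=(n, p))  # observed items with measurement error
    y = latent @ true_weights + rng.normal(scale=1.0, size=n)

    sum_score = items.sum(axis=1, keepdims=True)       # composite (sum-score) predictor

    # 10-fold cross-validated mean squared error for each scoring choice
    mse_composite = -cross_val_score(LinearRegression(), sum_score, y,
                                     cv=10, scoring="neg_mean_squared_error").mean()
    mse_items = -cross_val_score(LinearRegression(), items, y,
                                 cv=10, scoring="neg_mean_squared_error").mean()

    print(f"CV MSE, composite score: {mse_composite:.3f}")
    print(f"CV MSE, item scores:     {mse_items:.3f}")

Because the items contribute unequally to the outcome, the sum score is a biased summary, and with a reasonably large training sample the item-level model should typically attain the lower cross-validated error, in line with the bias-variance argument in the abstract.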

Source journal
Multivariate Behavioral Research (Mathematics, Interdisciplinary Applications)
CiteScore: 7.60
Self-citation rate: 2.60%
Annual articles: 49
Review time: >12 weeks
Journal description: Multivariate Behavioral Research (MBR) publishes a variety of substantive, methodological, and theoretical articles in all areas of the social and behavioral sciences. Most MBR articles fall into one of two categories. Substantive articles report on applications of sophisticated multivariate research methods to study topics of substantive interest in personality, health, intelligence, industrial/organizational, and other behavioral science areas. Methodological articles present and/or evaluate new developments in multivariate methods, or address methodological issues in current research. We also encourage submission of integrative articles related to pedagogy involving multivariate research methods, and to historical treatments of interest and relevance to multivariate research methods.