Sam Henry, Dustin Wood, David M. Condon, Graham H. Lowman, René Mõttus
{"title":"利用多测评者和重复测试数据检测心理量表内部和之间的重叠情况","authors":"Sam Henry , Dustin Wood , David M. Condon , Graham H. Lowman , René Mõttus","doi":"10.1016/j.jrp.2024.104530","DOIUrl":null,"url":null,"abstract":"<div><p>Correlations estimated in single-source data provide uninterpretable estimates of empirical overlap between scales. We describe a model to adjust correlations for errors and biases using test–retest and multi-rater data and compare adjusted correlations among individual items with their human-rated semantic similarity (<em>SS</em>). We expected adjusted correlations to predict <em>SS</em> better than unadjusted correlations and exceed <em>SS</em> in absolute magnitude. While unadjusted and adjusted correlations predicted <em>SS</em> rankings equally well across all items, adjusted correlations were superior where items were judged most semantically redundant in meaning. Retest- and agreement-adjusted correlations were usually higher than <em>SS</em>, whereas unadjusted correlations often underestimated <em>SS</em>. We discuss uses of test–retest and multi-rater data for identifying construct redundancy and argue <em>SS</em> often underestimates variables’ empirical overlap.</p></div>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Using multi-rater and test-retest data to detect overlap within and between psychological scales\",\"authors\":\"Sam Henry , Dustin Wood , David M. Condon , Graham H. Lowman , René Mõttus\",\"doi\":\"10.1016/j.jrp.2024.104530\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Correlations estimated in single-source data provide uninterpretable estimates of empirical overlap between scales. 
We describe a model to adjust correlations for errors and biases using test–retest and multi-rater data and compare adjusted correlations among individual items with their human-rated semantic similarity (<em>SS</em>). We expected adjusted correlations to predict <em>SS</em> better than unadjusted correlations and exceed <em>SS</em> in absolute magnitude. While unadjusted and adjusted correlations predicted <em>SS</em> rankings equally well across all items, adjusted correlations were superior where items were judged most semantically redundant in meaning. Retest- and agreement-adjusted correlations were usually higher than <em>SS</em>, whereas unadjusted correlations often underestimated <em>SS</em>. We discuss uses of test–retest and multi-rater data for identifying construct redundancy and argue <em>SS</em> often underestimates variables’ empirical overlap.</p></div>\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0092656624000783\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio 
Materials","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0092656624000783","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
Citations: 0
Abstract
Correlations estimated in single-source data provide uninterpretable estimates of empirical overlap between scales. We describe a model that adjusts correlations for errors and biases using test–retest and multi-rater data, and we compare adjusted correlations among individual items with their human-rated semantic similarity (SS). We expected adjusted correlations to predict SS better than unadjusted correlations and to exceed SS in absolute magnitude. While unadjusted and adjusted correlations predicted SS rankings equally well across all items, adjusted correlations were superior for items judged most redundant in meaning. Retest- and agreement-adjusted correlations were usually higher than SS, whereas unadjusted correlations often underestimated it. We discuss uses of test–retest and multi-rater data for identifying construct redundancy and argue that SS often underestimates variables' empirical overlap.
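The abstract does not spell out the adjustment model, but "retest-adjusted correlations" suggest a correction in the spirit of Spearman's classical disattenuation formula, with each item's test–retest correlation standing in as its reliability estimate (the paper's full model also uses multi-rater agreement, not shown here). A minimal sketch under that assumption:

```python
import math

def disattenuate(r_xy: float, retest_x: float, retest_y: float) -> float:
    """Adjust an observed inter-item correlation for measurement error.

    Spearman's correction for attenuation: r_adj = r_xy / sqrt(r_xx * r_yy),
    here assuming each item's test-retest correlation approximates its
    reliability. This is an illustrative sketch, not the paper's model.
    """
    if retest_x <= 0 or retest_y <= 0:
        raise ValueError("reliability estimates must be positive")
    return r_xy / math.sqrt(retest_x * retest_y)

# Two items with an observed correlation of .40, each with
# retest reliability .60: the adjusted correlation is .40 / .60.
r_adj = disattenuate(0.40, 0.60, 0.60)
print(round(r_adj, 3))  # 0.667
```

The example illustrates the abstract's core claim: unadjusted correlations understate empirical overlap, because measurement error attenuates them toward zero.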