Selecting response anchors with equal intervals for summated rating scales.

W. C. Casper, Bryan D. Edwards, J. C. Wallace, R. Landis, Dustin A. Fife
{"title":"为总评分量表选择具有相等间隔的响应锚点。","authors":"W. C. Casper, Bryan D. Edwards, J. C. Wallace, R. Landis, Dustin A. Fife","doi":"10.1037/apl0000444","DOIUrl":null,"url":null,"abstract":"Summated rating scales are ubiquitous in organizational research, and there are well-delineated guidelines for scale development (e.g., Hinkin, 1998). Nevertheless, there has been less research on the explicit selection of the response anchors. Constructing survey questions with equal-interval properties (i.e., interval or ratio data) is important if researchers plan to analyze their data using parametric statistics. As such, the primary objectives of the current study were to (a) determine the most common contexts in which summated rating scales are used (e.g., agreement, similarity, frequency, amount, and judgment), (b) determine the most commonly used anchors (e.g., strongly disagree, often, very good), and (c) provide empirical data on the conceptual distance between these anchors. We present the mean and standard deviation of scores for estimates of each anchor and the percentage of distribution overlap between the anchors. Our results provide researchers with data that can be used to guide the selection of verbal anchors with equal-interval properties so as to reduce measurement error and improve confidence in the results of subsequent analyses. We also conducted multiple empirical studies to examine the consequences of measuring constructs with unequal-interval anchors. A clear pattern of results is that correlations involving unequal-interval anchors are consistently weaker than correlations involving equal-interval anchors. (PsycINFO Database Record (c) 2019 APA, all rights reserved).","PeriodicalId":169654,"journal":{"name":"The Journal of applied psychology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":"{\"title\":\"Selecting response anchors with equal intervals for summated rating scales.\",\"authors\":\"W. C. Casper, Bryan D. Edwards, J. C. Wallace, R. Landis, Dustin A. Fife\",\"doi\":\"10.1037/apl0000444\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Summated rating scales are ubiquitous in organizational research, and there are well-delineated guidelines for scale development (e.g., Hinkin, 1998). Nevertheless, there has been less research on the explicit selection of the response anchors. Constructing survey questions with equal-interval properties (i.e., interval or ratio data) is important if researchers plan to analyze their data using parametric statistics. As such, the primary objectives of the current study were to (a) determine the most common contexts in which summated rating scales are used (e.g., agreement, similarity, frequency, amount, and judgment), (b) determine the most commonly used anchors (e.g., strongly disagree, often, very good), and (c) provide empirical data on the conceptual distance between these anchors. We present the mean and standard deviation of scores for estimates of each anchor and the percentage of distribution overlap between the anchors. Our results provide researchers with data that can be used to guide the selection of verbal anchors with equal-interval properties so as to reduce measurement error and improve confidence in the results of subsequent analyses. We also conducted multiple empirical studies to examine the consequences of measuring constructs with unequal-interval anchors. 
A clear pattern of results is that correlations involving unequal-interval anchors are consistently weaker than correlations involving equal-interval anchors. (PsycINFO Database Record (c) 2019 APA, all rights reserved).\",\"PeriodicalId\":169654,\"journal\":{\"name\":\"The Journal of applied psychology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"20\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Journal of applied psychology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1037/apl0000444\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of applied psychology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1037/apl0000444","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 20

Abstract

Summated rating scales are ubiquitous in organizational research, and there are well-delineated guidelines for scale development (e.g., Hinkin, 1998). Nevertheless, there has been less research on the explicit selection of the response anchors. Constructing survey questions with equal-interval properties (i.e., interval or ratio data) is important if researchers plan to analyze their data using parametric statistics. As such, the primary objectives of the current study were to (a) determine the most common contexts in which summated rating scales are used (e.g., agreement, similarity, frequency, amount, and judgment), (b) determine the most commonly used anchors (e.g., strongly disagree, often, very good), and (c) provide empirical data on the conceptual distance between these anchors. We present the mean and standard deviation of scores for estimates of each anchor and the percentage of distribution overlap between the anchors. Our results provide researchers with data that can be used to guide the selection of verbal anchors with equal-interval properties so as to reduce measurement error and improve confidence in the results of subsequent analyses. We also conducted multiple empirical studies to examine the consequences of measuring constructs with unequal-interval anchors. A clear pattern of results is that correlations involving unequal-interval anchors are consistently weaker than correlations involving equal-interval anchors. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
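
The abstract reports that correlations computed from scales with unequal-interval anchors are consistently weaker than correlations computed from scales with equal-interval anchors. The sketch below is a minimal, hypothetical simulation of that attenuation effect, not a reproduction of the paper's studies: two latent constructs with a known correlation are each "measured" by having simulated respondents endorse the verbal anchor whose conceptual position is nearest to their latent standing, and the responses are then coded 1 to 5 regardless of whether those positions are equally spaced. The anchor positions, sample size, and true correlation are illustrative assumptions.

```python
# Minimal, hypothetical simulation (not from the paper): coding responses 1-5
# when the verbal anchors are conceptually unequally spaced tends to attenuate
# the observed correlation relative to equally spaced anchors.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Two latent constructs with a true correlation of 0.50 (illustrative value).
rho = 0.50
latent = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

def respond(latent_scores, anchor_positions):
    """Each simulated respondent endorses the anchor whose conceptual position
    is nearest to their latent standing; the researcher codes anchors 1..5."""
    positions = np.asarray(anchor_positions, dtype=float)
    nearest = np.abs(latent_scores[:, None] - positions[None, :]).argmin(axis=1)
    return nearest + 1  # integer codes 1..5, ignoring the actual spacing

# Hypothetical conceptual positions on the latent continuum (z-score metric).
equal_positions = [-2.0, -1.0, 0.0, 1.0, 2.0]     # equal intervals
unequal_positions = [-2.0, -1.7, -1.4, 1.7, 2.0]  # bunched at the low end

for label, positions in [("equal", equal_positions), ("unequal", unequal_positions)]:
    x = respond(latent[:, 0], positions)
    y = respond(latent[:, 1], positions)
    r = np.corrcoef(x, y)[0, 1]
    print(f"{label:8s} anchors: observed r = {r:.3f} (true latent r = {rho})")
```

With these assumed values, the unequal-interval coding yields a noticeably smaller observed correlation than the equal-interval coding, mirroring the pattern the authors report: treating unequally spaced anchors as if they were equally spaced introduces measurement error that attenuates relationships in subsequent analyses.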