Using a novel, multi-method approach to evaluate the quality of self-report survey data in organizational readiness research.

Derek W Craig, Jocelyn Hunyadi, Timothy J Walker, Lauren Workman, Maria McClam, Andrea Lamont, Joe R Padilla, Pamela Diamond, Abraham Wandersman, Maria E Fernandez

Implementation Science Communications, vol. 6, no. 1, p. 63. Published 2025-05-23. DOI: 10.1186/s43058-025-00751-8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12102923/pdf/

Abstract

Background: Self-report measures are essential in implementation science because many phenomena are difficult to assess directly. At the same time, cognitively demanding surveys increase the prevalence of careless and inattentive responses. Assessing response quality is critical to improving data validity, yet recommendations for determining response quality vary. To address this, we aimed to 1) apply a multi-method approach to assess the quality of self-report survey data in a study aiming to validate a measure of organizational readiness, 2) compare readiness scores between responses categorized as high- and low-quality, and 3) examine individual characteristics associated with low-quality responses.

Methods: We surveyed federally qualified health center staff to assess organizational readiness for implementing evidence-based interventions to increase colorectal cancer screening. The survey was informed by the R = MC² heuristic, which proposes that readiness consists of three components: Motivation (M), Innovation-Specific Capacity (ISC), and General Capacity (GC). We determined response quality (high/low) using two assessment methods: survey completion time and monotonic response patterns (MRPs). T-tests examined associations between readiness scores and response quality, and regression models examined differences in response quality by individual characteristics (e.g., age, role, implementation involvement).
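The two screening methods named above can be illustrated with a minimal sketch. This is a hypothetical example, not the authors' actual procedure: the field names (`completion_minutes`, `motivation`, `isc`, `gc`), the time cutoff, and the MRP rule (all items in a block answered identically, i.e., straight-lining) are illustrative assumptions.

```python
# Hypothetical sketch of the two response-quality screens described in the
# abstract: implausibly fast completion time and monotonic response patterns
# (MRPs). Thresholds and field names are illustrative only.

def has_monotonic_pattern(item_scores):
    """True if every item in a scale block received the identical answer
    (a 'straight-lining' monotonic response pattern)."""
    return len(set(item_scores)) == 1

def flag_low_quality(response, min_minutes=5.0):
    """Flag a response as low quality if it was completed implausibly fast
    OR shows an MRP on every readiness component (M, ISC, GC)."""
    too_fast = response["completion_minutes"] < min_minutes
    all_mrp = all(
        has_monotonic_pattern(response[component])
        for component in ("motivation", "isc", "gc")
    )
    return too_fast or all_mrp

# Example: a straight-lined, three-minute response is flagged.
resp = {
    "completion_minutes": 3.0,
    "motivation": [4, 4, 4, 4],
    "isc": [4, 4, 4],
    "gc": [4, 4, 4, 4, 4],
}
print(flag_low_quality(resp))  # True
```

Applying more than one screen (as the study does) matters because the screens disagree: the Results below report that the count of flagged responses ranged from 42 to 98 depending on the method used.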

Results: The sample consisted of 474 responses from 57 clinics. The average survey time was 24.3 min, and 42 respondents (8.9%) had MRPs on all readiness components. The number of low-quality responses varied by assessment method (range = 42-98). Survey responses classified as low quality had higher readiness scores (M, ISC, GC, p < 0.01). Age (p = 0.01), race (p < 0.01), and implementation involvement (p = 0.04) were inversely associated with survey completion time, whereas older age (p = 0.01) and more years worked at the clinic (p = 0.03) were associated with higher response quality. Quality improvement staff and clinic management were less likely to provide low-quality responses (p = 0.04).

Conclusions: Our findings suggest that readiness scores can be inflated by low-quality responses, and individual characteristics play a significant role in data quality. There is a need to be aware of who is completing surveys and the context in which surveys are distributed to improve the interpretation of findings and make the measurement of implementation-related constructs more precise.
