Derek W Craig, Jocelyn Hunyadi, Timothy J Walker, Lauren Workman, Maria McClam, Andrea Lamont, Joe R Padilla, Pamela Diamond, Abraham Wandersman, Maria E Fernandez
{"title":"运用一种新颖的多方法评估组织准备研究中自我报告调查数据的质量。","authors":"Derek W Craig, Jocelyn Hunyadi, Timothy J Walker, Lauren Workman, Maria McClam, Andrea Lamont, Joe R Padilla, Pamela Diamond, Abraham Wandersman, Maria E Fernandez","doi":"10.1186/s43058-025-00751-8","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Self-report measures are essential in implementation science since many phenomena are difficult to assess directly. Nevertheless, cognitively demanding surveys increase the prevalence of careless and inattentive responses. Assessing response quality is critical to improving data validity, yet recommendations for determining response quality vary. To address this, we aimed to 1) apply a multi-method approach to assess the quality of self-report survey data in a study aimed to validate a measure of organizational readiness, 2) compare readiness scores among responses categorized as high- and low-quality, and 3) examine individual characteristics associated with low-quality responses.</p><p><strong>Methods: </strong>We surveyed federally qualified health center staff to assess organizational readiness for implementing evidence-based interventions to increase colorectal cancer screening. The survey was informed by the R = MC<sup>2</sup> heuristic, which proposes that readiness consists of three components: Motivation (M), Innovation-Specific Capacity (ISC), and General Capacity (GC). We determined response quality (high/low) using two assessment methods: survey completion time and monotonic response patterns (MRPs). T-tests examined associations between readiness scores and response quality, and regression models examined differences in response quality by individual characteristics (e.g., age, role, implementation involvement).</p><p><strong>Results: </strong>The sample consisted of 474 responses from 57 clinics. The average survey time was 24.3 min, and 42 respondents (8.9%) had MRPs on all readiness components. The number of low-quality responses varied by assessment method (range = 42-98). Survey responses classified as low quality had higher readiness scores (M, ISC, GC, p < 0.01). Age (p = 0.01), race (p < 0.01), and implementation involvement (p = 0.04) were inversely associated with survey completion time, whereas older age (p = 0.01) and more years worked at the clinic (p = 0.03) were associated with higher response quality. Quality improvement staff and clinic management were less likely to provide low-quality responses (p = 0.04).</p><p><strong>Conclusions: </strong>Our findings suggest that readiness scores can be inflated by low-quality responses, and individual characteristics play a significant role in data quality. 
There is a need to be aware of who is completing surveys and the context in which surveys are distributed to improve the interpretation of findings and make the measurement of implementation-related constructs more precise.</p>","PeriodicalId":73355,"journal":{"name":"Implementation science communications","volume":"6 1","pages":"63"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12102923/pdf/","citationCount":"0","resultStr":"{\"title\":\"Using a novel, multi-method approach to evaluate the quality of self-report survey data in organizational readiness research.\",\"authors\":\"Derek W Craig, Jocelyn Hunyadi, Timothy J Walker, Lauren Workman, Maria McClam, Andrea Lamont, Joe R Padilla, Pamela Diamond, Abraham Wandersman, Maria E Fernandez\",\"doi\":\"10.1186/s43058-025-00751-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Self-report measures are essential in implementation science since many phenomena are difficult to assess directly. Nevertheless, cognitively demanding surveys increase the prevalence of careless and inattentive responses. Assessing response quality is critical to improving data validity, yet recommendations for determining response quality vary. To address this, we aimed to 1) apply a multi-method approach to assess the quality of self-report survey data in a study aimed to validate a measure of organizational readiness, 2) compare readiness scores among responses categorized as high- and low-quality, and 3) examine individual characteristics associated with low-quality responses.</p><p><strong>Methods: </strong>We surveyed federally qualified health center staff to assess organizational readiness for implementing evidence-based interventions to increase colorectal cancer screening. The survey was informed by the R = MC<sup>2</sup> heuristic, which proposes that readiness consists of three components: Motivation (M), Innovation-Specific Capacity (ISC), and General Capacity (GC). We determined response quality (high/low) using two assessment methods: survey completion time and monotonic response patterns (MRPs). T-tests examined associations between readiness scores and response quality, and regression models examined differences in response quality by individual characteristics (e.g., age, role, implementation involvement).</p><p><strong>Results: </strong>The sample consisted of 474 responses from 57 clinics. The average survey time was 24.3 min, and 42 respondents (8.9%) had MRPs on all readiness components. The number of low-quality responses varied by assessment method (range = 42-98). Survey responses classified as low quality had higher readiness scores (M, ISC, GC, p < 0.01). Age (p = 0.01), race (p < 0.01), and implementation involvement (p = 0.04) were inversely associated with survey completion time, whereas older age (p = 0.01) and more years worked at the clinic (p = 0.03) were associated with higher response quality. Quality improvement staff and clinic management were less likely to provide low-quality responses (p = 0.04).</p><p><strong>Conclusions: </strong>Our findings suggest that readiness scores can be inflated by low-quality responses, and individual characteristics play a significant role in data quality. 
There is a need to be aware of who is completing surveys and the context in which surveys are distributed to improve the interpretation of findings and make the measurement of implementation-related constructs more precise.</p>\",\"PeriodicalId\":73355,\"journal\":{\"name\":\"Implementation science communications\",\"volume\":\"6 1\",\"pages\":\"63\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-05-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12102923/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Implementation science communications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s43058-025-00751-8\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Implementation science communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s43058-025-00751-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Using a novel, multi-method approach to evaluate the quality of self-report survey data in organizational readiness research.
Background: Self-report measures are essential in implementation science because many phenomena are difficult to assess directly. However, cognitively demanding surveys increase the prevalence of careless and inattentive responses. Assessing response quality is critical to improving data validity, yet recommendations for determining response quality vary. To address this, we aimed to 1) apply a multi-method approach to assess the quality of self-report survey data in a study designed to validate a measure of organizational readiness, 2) compare readiness scores between responses categorized as high- and low-quality, and 3) examine individual characteristics associated with low-quality responses.
Methods: We surveyed federally qualified health center staff to assess organizational readiness for implementing evidence-based interventions to increase colorectal cancer screening. The survey was informed by the R = MC² heuristic, which proposes that readiness consists of three components: Motivation (M), Innovation-Specific Capacity (ISC), and General Capacity (GC). We determined response quality (high/low) using two assessment methods: survey completion time and monotonic response patterns (MRPs). T-tests examined associations between readiness scores and response quality, and regression models examined differences in response quality by individual characteristics (e.g., age, role, implementation involvement).
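To make the two quality heuristics concrete, the following is a minimal sketch of how such flags could be computed with pandas. The column names (`completion_time_min` and the item columns), the five-minute cutoff, and the reading of a monotonic response pattern as identical answers across every item in a scale are illustrative assumptions, not details taken from the study.

```python
import pandas as pd

def flag_low_quality(df: pd.DataFrame, item_cols: list[str],
                     time_col: str = "completion_time_min",
                     min_minutes: float = 5.0) -> pd.DataFrame:
    """Flag survey responses as low quality using two heuristics:
    implausibly fast completion and a monotonic response pattern
    (the same answer chosen for every item in the scale).
    Thresholds and column names are illustrative assumptions."""
    out = df.copy()
    # Heuristic 1: completion time below an illustrative cutoff.
    out["fast_completion"] = out[time_col] < min_minutes
    # Heuristic 2: straight-lining -- only one distinct answer
    # across all items of the readiness scale.
    out["monotonic_pattern"] = out[item_cols].nunique(axis=1) == 1
    # A response is flagged if either heuristic trips.
    out["low_quality"] = out["fast_completion"] | out["monotonic_pattern"]
    return out
```

In practice, each readiness component (M, ISC, GC) would get its own item-column list, so a response could show an MRP on some components but not others, as the results below report.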
Results: The sample consisted of 474 responses from 57 clinics. The average survey time was 24.3 min, and 42 respondents (8.9%) had MRPs on all readiness components. The number of low-quality responses varied by assessment method (range = 42-98). Survey responses classified as low quality had higher readiness scores (M, ISC, GC, p < 0.01). Age (p = 0.01), race (p < 0.01), and implementation involvement (p = 0.04) were inversely associated with survey completion time, whereas older age (p = 0.01) and more years worked at the clinic (p = 0.03) were associated with higher response quality. Quality improvement staff and clinic management were less likely to provide low-quality responses (p = 0.04).
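The group comparison and regression analyses reported above could be reproduced along these lines; Welch's correction on the t-test and the specific predictor names (`age`, `years_at_clinic`, `role`) are assumptions, since the abstract names the analyses but not their exact specifications.

```python
from scipy import stats
import statsmodels.formula.api as smf

def compare_readiness(df, score_col, flag_col="low_quality"):
    """Welch's t-test: does the mean readiness score (e.g., M, ISC,
    or GC) differ between low- and high-quality responses?"""
    low = df.loc[df[flag_col], score_col].dropna()
    high = df.loc[~df[flag_col], score_col].dropna()
    return stats.ttest_ind(low, high, equal_var=False)

def model_quality_predictors(df):
    """Logistic regression of the low-quality flag on respondent
    characteristics; predictor names are illustrative assumptions."""
    data = df.assign(low_quality_int=df["low_quality"].astype(int))
    return smf.logit("low_quality_int ~ age + years_at_clinic + C(role)",
                     data=data).fit()
```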
Conclusions: Our findings suggest that readiness scores can be inflated by low-quality responses and that individual characteristics play a significant role in data quality. Researchers should consider who completes surveys and the context in which surveys are distributed; doing so improves the interpretation of findings and makes the measurement of implementation-related constructs more precise.