Niels Brinkman, Rick Looman, Prakash Jayakumar, David Ring, Seung Choi
{"title":"是否有可能开发出上限效应较低的患者报告体验测量方法?","authors":"Niels Brinkman, Rick Looman, Prakash Jayakumar, David Ring, Seung Choi","doi":"10.1097/CORR.0000000000003262","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Patient-reported experience measures (PREMs), such as the Jefferson Scale of Patient Perceptions of Physician Empathy (JSPPPE) or the Wake Forest Trust in Physician Scale (WTPS), have notable intercorrelation and ceiling effects (the proportion of observations with the highest possible score). Information is lost when high ceiling effects occur as there almost certainly is at least some variation among the patients with the highest score that the measurement tool was unable to measure. Efforts to identify and quantify factors associated with diminished patient experience can benefit from a PREM with more variability and a smaller proportion of highest possible scores (that is, a more limited ceiling effect) than occurs with currently available PREMs.</p><p><strong>Questions/purposes: </strong>In the first stage of a two-stage process, using a cohort of patients seeking musculoskeletal specialty care, we asked: (1) What groupings of items that address a similar aspect of patient experience are present among binary items directed at patient experience and derived from commonly used PREMs? (2) Can a small number of representative items provide a measure with potential for less of a ceiling effect (high item difficulty parameters)? In a second, independent cohort enrolled to assess whether the identified items perform consistently among different cohorts, we asked: (3) Does the new PREM perform differently in terms of item groupings (factor structure), and would different subsets of the included items provide the same measurement results (internal consistency) when items are measured using a 5-point rating scale instead of a binary scale? (4) What are the differences in survey properties (for example, ceiling effects) and correlation between the new PREM and commonly used PREMs?</p><p><strong>Methods: </strong>In two cross-sectional studies among patients seeking musculoskeletal specialty care conducted in 2022 and 2023, all English-speaking and English-reading adults (ages 18 to 89 years) without cognitive deficiency were invited to participate in two consecutive, separate cohorts to help develop (the initial, learning cohort) and internally validate (the second, validation cohort) a provisional new PREM. We identified 218 eligible patients for the initial learning cohort, of whom all completed all measures. Participants had a mean ± SD age of 55 ± 16 years, 60% (130) were women, 45% (99) had private insurance, and most sought care for lower extremity (56% [121]) and nontraumatic conditions (63% [137]). We measured 25 items derived from other commonly used PREMs that address aspects of patient experience in which patients reported whether they agreed or disagreed (binary) with certain statements about their clinician. We performed an exploratory factor analysis and confirmatory factor analysis (CFA) to identify groups of items that measure the same underlying construct related to patient experience. We then applied a two-parameter logistic model based on item response theory to identify the most discriminating items with the most variability (item difficulty) with the aim of reducing the ceiling effect. 
We also conducted a differential item functioning analysis to assess whether specific items are rated discordantly by specific subgroups of patients, which can introduce bias. We then enrolled 154 eligible patients, of whom 99% (153) completed all required measures, into a validation cohort with similar demographic characteristics. We changed the binary items to 5-point Likert scales to increase the potential for variation in an attempt to further reduce ceiling effects and repeated the CFA. We also measured internal consistency (using Cronbach alpha) and the correlation of the new PREM with other commonly used PREMs using bivariate analyses.</p><p><strong>Results: </strong>We identified three groupings of items in the learning cohort representing \"trust in clinician\" (13 items), \"relationship with clinician\" (7 items), and \"participation in shared decision-making\" (4 items). The \"trust in clinician\" factor performed best of all three factors and therefore was selected for subsequent analyses. We selected the best-performing items in terms of item difficulty to generate a 7-item short form. We found excellent CFA model fit (the 13-item and 7-item versions both had a root mean square error of approximation [RMSEA] of < 0.001), excellent internal consistency (Cronbach α was 0.94 for the 13-item version and 0.91 for the 7-item version), good item response theory parameters (item difficulty ranging between -0.37 and 0.16 for the 7-item version, with higher values indicating lower ceiling effect), no local dependencies, and no differential item functioning among any of the items. The other two factors were excluded from measure development due to low item response theory parameters (item difficulty ranging between -1.3 and -0.69, indicating higher ceiling effect), multiple local dependencies, and exhausting the number of items without being able to address these issues. The validation cohort confirmed adequate item selection and performance of both the 13-item and 7-item version of the Trust and Experience with Clinicians Scale (TRECS), with good to excellent CFA model fit (RMSEA 0.058 [TRECS-13]; RMSEA 0.016 [TRECS-7]), excellent internal consistency (Cronbach α = 0.96 [TRECS-13]; Cronbach α = 0.92 [TRECS-7]), no differential item functioning and limited ceiling effects (11% [TRECS-13]; 14% [TRECS-7]), and notable correlation with other PREMs such as the JSPPPE (ρ = 0.77) and WTPS (ρ = 0.74).</p><p><strong>Conclusion: </strong>A relatively brief 7-item measure of patient experience focused on trust can eliminate most of the ceiling effects common to PREMs with good psychometric properties. 
Future studies may externally validate the TRECS in other populations as well as provide population-based T-score conversion tables based on a larger sample size more representative of the population seeking musculoskeletal care.</p><p><strong>Clinical relevance: </strong>A PREM anchored in trust that reduces loss of information at the higher end of the scale can help individuals and institutions to assess experience more accurately, gauge the impact of interventions, and generate effective ways to learn and improve within a health system.</p>","PeriodicalId":10404,"journal":{"name":"Clinical Orthopaedics and Related Research®","volume":" ","pages":""},"PeriodicalIF":4.2000,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Is It Possible to Develop a Patient-reported Experience Measure With Lower Ceiling Effect?\",\"authors\":\"Niels Brinkman, Rick Looman, Prakash Jayakumar, David Ring, Seung Choi\",\"doi\":\"10.1097/CORR.0000000000003262\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Patient-reported experience measures (PREMs), such as the Jefferson Scale of Patient Perceptions of Physician Empathy (JSPPPE) or the Wake Forest Trust in Physician Scale (WTPS), have notable intercorrelation and ceiling effects (the proportion of observations with the highest possible score). Information is lost when high ceiling effects occur as there almost certainly is at least some variation among the patients with the highest score that the measurement tool was unable to measure. Efforts to identify and quantify factors associated with diminished patient experience can benefit from a PREM with more variability and a smaller proportion of highest possible scores (that is, a more limited ceiling effect) than occurs with currently available PREMs.</p><p><strong>Questions/purposes: </strong>In the first stage of a two-stage process, using a cohort of patients seeking musculoskeletal specialty care, we asked: (1) What groupings of items that address a similar aspect of patient experience are present among binary items directed at patient experience and derived from commonly used PREMs? (2) Can a small number of representative items provide a measure with potential for less of a ceiling effect (high item difficulty parameters)? In a second, independent cohort enrolled to assess whether the identified items perform consistently among different cohorts, we asked: (3) Does the new PREM perform differently in terms of item groupings (factor structure), and would different subsets of the included items provide the same measurement results (internal consistency) when items are measured using a 5-point rating scale instead of a binary scale? (4) What are the differences in survey properties (for example, ceiling effects) and correlation between the new PREM and commonly used PREMs?</p><p><strong>Methods: </strong>In two cross-sectional studies among patients seeking musculoskeletal specialty care conducted in 2022 and 2023, all English-speaking and English-reading adults (ages 18 to 89 years) without cognitive deficiency were invited to participate in two consecutive, separate cohorts to help develop (the initial, learning cohort) and internally validate (the second, validation cohort) a provisional new PREM. We identified 218 eligible patients for the initial learning cohort, of whom all completed all measures. 
Participants had a mean ± SD age of 55 ± 16 years, 60% (130) were women, 45% (99) had private insurance, and most sought care for lower extremity (56% [121]) and nontraumatic conditions (63% [137]). We measured 25 items derived from other commonly used PREMs that address aspects of patient experience in which patients reported whether they agreed or disagreed (binary) with certain statements about their clinician. We performed an exploratory factor analysis and confirmatory factor analysis (CFA) to identify groups of items that measure the same underlying construct related to patient experience. We then applied a two-parameter logistic model based on item response theory to identify the most discriminating items with the most variability (item difficulty) with the aim of reducing the ceiling effect. We also conducted a differential item functioning analysis to assess whether specific items are rated discordantly by specific subgroups of patients, which can introduce bias. We then enrolled 154 eligible patients, of whom 99% (153) completed all required measures, into a validation cohort with similar demographic characteristics. We changed the binary items to 5-point Likert scales to increase the potential for variation in an attempt to further reduce ceiling effects and repeated the CFA. We also measured internal consistency (using Cronbach alpha) and the correlation of the new PREM with other commonly used PREMs using bivariate analyses.</p><p><strong>Results: </strong>We identified three groupings of items in the learning cohort representing \\\"trust in clinician\\\" (13 items), \\\"relationship with clinician\\\" (7 items), and \\\"participation in shared decision-making\\\" (4 items). The \\\"trust in clinician\\\" factor performed best of all three factors and therefore was selected for subsequent analyses. We selected the best-performing items in terms of item difficulty to generate a 7-item short form. We found excellent CFA model fit (the 13-item and 7-item versions both had a root mean square error of approximation [RMSEA] of < 0.001), excellent internal consistency (Cronbach α was 0.94 for the 13-item version and 0.91 for the 7-item version), good item response theory parameters (item difficulty ranging between -0.37 and 0.16 for the 7-item version, with higher values indicating lower ceiling effect), no local dependencies, and no differential item functioning among any of the items. The other two factors were excluded from measure development due to low item response theory parameters (item difficulty ranging between -1.3 and -0.69, indicating higher ceiling effect), multiple local dependencies, and exhausting the number of items without being able to address these issues. The validation cohort confirmed adequate item selection and performance of both the 13-item and 7-item version of the Trust and Experience with Clinicians Scale (TRECS), with good to excellent CFA model fit (RMSEA 0.058 [TRECS-13]; RMSEA 0.016 [TRECS-7]), excellent internal consistency (Cronbach α = 0.96 [TRECS-13]; Cronbach α = 0.92 [TRECS-7]), no differential item functioning and limited ceiling effects (11% [TRECS-13]; 14% [TRECS-7]), and notable correlation with other PREMs such as the JSPPPE (ρ = 0.77) and WTPS (ρ = 0.74).</p><p><strong>Conclusion: </strong>A relatively brief 7-item measure of patient experience focused on trust can eliminate most of the ceiling effects common to PREMs with good psychometric properties. 
Future studies may externally validate the TRECS in other populations as well as provide population-based T-score conversion tables based on a larger sample size more representative of the population seeking musculoskeletal care.</p><p><strong>Clinical relevance: </strong>A PREM anchored in trust that reduces loss of information at the higher end of the scale can help individuals and institutions to assess experience more accurately, gauge the impact of interventions, and generate effective ways to learn and improve within a health system.</p>\",\"PeriodicalId\":10404,\"journal\":{\"name\":\"Clinical Orthopaedics and Related Research®\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2024-10-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Clinical Orthopaedics and Related Research®\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1097/CORR.0000000000003262\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ORTHOPEDICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Orthopaedics and Related Research®","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/CORR.0000000000003262","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
Is It Possible to Develop a Patient-reported Experience Measure With Lower Ceiling Effect?
Background: Patient-reported experience measures (PREMs), such as the Jefferson Scale of Patient Perceptions of Physician Empathy (JSPPPE) or the Wake Forest Trust in Physician Scale (WTPS), have notable intercorrelation and ceiling effects (the proportion of observations with the highest possible score). Information is lost when ceiling effects are high because there is almost certainly at least some variation among the patients with the highest score that the measurement tool was unable to capture. Efforts to identify and quantify factors associated with diminished patient experience can benefit from a PREM with more variability and a smaller proportion of highest possible scores (that is, a more limited ceiling effect) than occurs with currently available PREMs.
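To make the ceiling-effect definition above concrete, here is a minimal sketch (in Python, using hypothetical scores rather than study data) of how the proportion of observations at the highest possible score could be computed.

```python
import numpy as np

def ceiling_effect(scores: np.ndarray, max_score: float) -> float:
    """Proportion of observations at the highest possible score."""
    scores = np.asarray(scores, dtype=float)
    return float(np.mean(scores == max_score))

# Hypothetical example: a 0-10 scale on which many respondents give the top rating
example_scores = np.array([10, 10, 9, 10, 8, 10, 10, 7, 10, 9])
print(f"Ceiling effect: {ceiling_effect(example_scores, 10):.0%}")  # 60% of scores at the ceiling
```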
Questions/purposes: In the first stage of a two-stage process, using a cohort of patients seeking musculoskeletal specialty care, we asked: (1) What groupings of items that address a similar aspect of patient experience are present among binary items directed at patient experience and derived from commonly used PREMs? (2) Can a small number of representative items provide a measure with potential for less of a ceiling effect (high item difficulty parameters)? In a second, independent cohort enrolled to assess whether the identified items perform consistently among different cohorts, we asked: (3) Does the new PREM perform differently in terms of item groupings (factor structure), and would different subsets of the included items provide the same measurement results (internal consistency) when items are measured using a 5-point rating scale instead of a binary scale? (4) What are the differences in survey properties (for example, ceiling effects) and correlation between the new PREM and commonly used PREMs?
Methods: In two cross-sectional studies among patients seeking musculoskeletal specialty care conducted in 2022 and 2023, all English-speaking and English-reading adults (ages 18 to 89 years) without cognitive deficiency were invited to participate in two consecutive, separate cohorts to help develop (the initial, learning cohort) and internally validate (the second, validation cohort) a provisional new PREM. We identified 218 eligible patients for the initial learning cohort, all of whom completed all measures. Participants had a mean ± SD age of 55 ± 16 years, 60% (130) were women, 45% (99) had private insurance, and most sought care for lower extremity (56% [121]) and nontraumatic conditions (63% [137]). We measured 25 items derived from other commonly used PREMs that address aspects of patient experience, for which patients reported whether they agreed or disagreed (binary) with certain statements about their clinician. We performed an exploratory factor analysis and confirmatory factor analysis (CFA) to identify groups of items that measure the same underlying construct related to patient experience. We then applied a two-parameter logistic model based on item response theory to identify the most discriminating items with the most variability (item difficulty) with the aim of reducing the ceiling effect. We also conducted a differential item functioning analysis to assess whether specific items are rated discordantly by specific subgroups of patients, which can introduce bias. We then enrolled 154 eligible patients, of whom 99% (153) completed all required measures, into a validation cohort with similar demographic characteristics. We changed the binary items to 5-point Likert scales to increase the potential for variation in an attempt to further reduce ceiling effects and repeated the CFA. We also measured internal consistency (using Cronbach alpha) and the correlation of the new PREM with other commonly used PREMs using bivariate analyses.
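The item response theory step described above uses a two-parameter logistic (2PL) model, in which each item receives a discrimination and a difficulty parameter. The sketch below illustrates the 2PL model form only; it is not the authors' estimation code (fitting the parameters requires dedicated IRT software), and the example parameter values are simply chosen to echo the difficulty range reported in the Results.

```python
import numpy as np

def two_pl_probability(theta: np.ndarray, a: float, b: float) -> np.ndarray:
    """Two-parameter logistic (2PL) IRT model.

    theta : latent patient-experience level(s)
    a     : item discrimination (how sharply the item separates respondents)
    b     : item difficulty (the theta at which endorsement probability is 0.5;
            higher b means only respondents with very positive experience endorse
            the item, which limits ceiling effects)
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative comparison of an "easy" item (b = -1.3) and a "harder" item (b = 0.16)
theta = np.linspace(-3, 3, 7)
easy = two_pl_probability(theta, a=2.0, b=-1.3)    # endorsed by nearly everyone -> ceiling
harder = two_pl_probability(theta, a=2.0, b=0.16)  # still discriminates at the top of the scale
print(np.round(easy, 2))
print(np.round(harder, 2))
```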
Results: We identified three groupings of items in the learning cohort representing "trust in clinician" (13 items), "relationship with clinician" (7 items), and "participation in shared decision-making" (4 items). The "trust in clinician" factor performed best of all three factors and therefore was selected for subsequent analyses. We selected the best-performing items in terms of item difficulty to generate a 7-item short form. We found excellent CFA model fit (the 13-item and 7-item versions both had a root mean square error of approximation [RMSEA] of < 0.001), excellent internal consistency (Cronbach α was 0.94 for the 13-item version and 0.91 for the 7-item version), good item response theory parameters (item difficulty ranging between -0.37 and 0.16 for the 7-item version, with higher values indicating lower ceiling effect), no local dependencies, and no differential item functioning among any of the items. The other two factors were excluded from measure development due to low item response theory parameters (item difficulty ranging between -1.3 and -0.69, indicating higher ceiling effect), multiple local dependencies, and exhausting the number of items without being able to address these issues. The validation cohort confirmed adequate item selection and performance of both the 13-item and 7-item version of the Trust and Experience with Clinicians Scale (TRECS), with good to excellent CFA model fit (RMSEA 0.058 [TRECS-13]; RMSEA 0.016 [TRECS-7]), excellent internal consistency (Cronbach α = 0.96 [TRECS-13]; Cronbach α = 0.92 [TRECS-7]), no differential item functioning and limited ceiling effects (11% [TRECS-13]; 14% [TRECS-7]), and notable correlation with other PREMs such as the JSPPPE (ρ = 0.77) and WTPS (ρ = 0.74).
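For readers who want to reproduce the reliability and correlation statistics reported above on their own data, the following sketch (using hypothetical 5-point Likert responses, not the study data) computes Cronbach alpha for a short scale and the Spearman correlation between two total scores.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses (rows = respondents, columns = 7 items)
new_prem = np.array([
    [5, 5, 4, 5, 4, 5, 5],
    [4, 4, 4, 3, 4, 4, 3],
    [5, 4, 5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3, 2, 2],
    [3, 3, 4, 3, 3, 4, 3],
    [5, 5, 5, 5, 5, 5, 5],
])
comparator_totals = np.array([33, 27, 34, 15, 24, 35])  # hypothetical comparator PREM total scores

print(f"Cronbach alpha: {cronbach_alpha(new_prem):.2f}")
rho, p_value = spearmanr(new_prem.sum(axis=1), comparator_totals)
print(f"Spearman rho vs. comparator PREM: {rho:.2f}")
```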
Conclusion: A relatively brief 7-item measure of patient experience focused on trust can eliminate most of the ceiling effects common to PREMs while retaining good psychometric properties. Future studies may externally validate the TRECS in other populations as well as provide population-based T-score conversion tables based on a larger sample size more representative of the population seeking musculoskeletal care.
Clinical relevance: A PREM anchored in trust that reduces loss of information at the higher end of the scale can help individuals and institutions to assess experience more accurately, gauge the impact of interventions, and generate effective ways to learn and improve within a health system.
Journal introduction:
Clinical Orthopaedics and Related Research® is a leading peer-reviewed journal devoted to the dissemination of new and important orthopaedic knowledge.
CORR® brings readers the latest clinical and basic research, along with columns, commentaries, and interviews with authors.