Hojjat Salmasian, Astrid Van Wilder, Michelle Frits, Christine Iannaccone, Merranda Logan, Jonathan P Zebrowski, David Shahian, Mitchell Rein, David Levine, David W Bates
Patient Safety Metrics Monitoring Across Harvard-Affiliated Hospitals: A Mixed Methods Study
Joint Commission Journal on Quality and Patient Safety, published online May 17, 2025. DOI: 10.1016/j.jcjq.2025.05.001
Citations: 0
Abstract
Background: The past two decades have seen a surge in available patient safety metrics. However, the variability in how health care organizations choose and monitor these metrics remains unknown.
Methods: The authors cataloged the metrics organizations chose and how actively they monitored them. Factors influencing the monitoring of patient safety metrics were investigated using surveys and in-depth interviews with patient safety experts from 11 Harvard-affiliated organizations.
Results: Eighty-four individuals across the 11 sites completed the surveys, and a mean of 2.5 representatives per site were interviewed. Active monitoring of safety metrics varied significantly across sites. Overall, 108 measures were monitored by at least one site. Agreement between sites on the choice of measures was weak (κ = 0.40, 95% confidence interval [CI] 0.37-0.43), ranging from κ = 0.13 (95% CI 0.07-0.20) for maternal safety measures to κ = 0.86 (95% CI 0.69-1.00) for measures of hospital-acquired infections. Although not all 23 mandatory measures were monitored at every site, these had the highest likelihood of active monitoring. Substantial overlap existed among measures targeting the same safety event under slightly different definitions, limiting the comparability of rates across institutions. Key considerations for active monitoring included a measure's perceived usefulness and its measurement burden, although external mandates and internal institutional commitments were stronger motivators overall. Other contributors included access to analytics teams and platforms, registry participation, vendor investments, and strategic or leadership interests.
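The agreement statistics reported above are Cohen's kappa values, which compare observed agreement between two raters to the agreement expected by chance. As a minimal sketch (not the study's code; the site names and the monitored/not-monitored vectors below are hypothetical), kappa for two sites' selections over a shared list of candidate measures can be computed like this:

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length binary label sequences
    (1 = measure actively monitored, 0 = not monitored)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed proportion of measures on which the two sites agree
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each site's marginal monitoring rate
    p_a1, p_b1 = sum(a) / n, sum(b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical example: which of 10 candidate measures each site monitors
site_x = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
site_y = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohen_kappa(site_x, site_y), 2))  # ≈ 0.58, i.e. moderate agreement
```

A kappa near 0 means the sites' choices overlap no more than chance would predict, while 1.0 means identical selections; the study's pooled value of 0.40 falls in the weak-agreement range under common interpretation scales.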
Conclusion: This study offers critical guidance to health policymakers on designing and mandating safety metrics. Despite high variability in metric selection, health care organizations share common themes when deciding what to actively measure.