Elleen Kim, Ghada Issa, Alyssa Berube, Wendy J Smith, Alison B Chambers, Erika D Petrin, Michael D Beland, Grayson L Baird
{"title":"Is there a sonographer effect? Sonographer as a source of variability for Shear Wave Elastography.","authors":"Elleen Kim, Ghada Issa, Alyssa Berube, Wendy J Smith, Alison B Chambers, Erika D Petrin, Michael D Beland, Grayson L Baird","doi":"10.11152/mu-4521","DOIUrl":null,"url":null,"abstract":"<p><strong>Aims: </strong>This study aims to estimate the degree of sonographers as a source of systematic variance for Shear Wave Elastography (SWE) values.</p><p><strong>Materials and methods: </strong>Two studies estimated variance in SWE measurements: 1) within-subjects and between-sonographer differences, and 2) between-sonographer differences alone. Both used a block design with six trained sonographers scanning six healthy liver volunteers using the same machine. Following training, each sonographer obtained ten SWE measurements from the right liver lobe for each volunteer per manufacturer guidelines.</p><p><strong>Results: </strong>When patients were scanned on different days, intraclass correlation coefficient (ICC)=0.23 was achieved, and when scanned on the same day, ICC=0.83, indicating that 17% of the variability was due to differences between sonographers. This 17% inter-sonographer variability translated into statistical and potentially clinically significant differences between sonographers-one sonographer had a SWE value of (4.99) and another (5.43), p<0.01, almost passing a clinical threshold.</p><p><strong>Conclusion: </strong>SWE values are influenced by a sonographer effect, highlighting the need to standardize protocols to minimize systematic variability between sonographers. Multiple scans are justified for patients with SWE values near clinical thresholds. 
Since healthy volunteers exceeded the manufacturer-defined threshold, inherent variability between sonographers could challenge the reliability of clinical thresholds in practice.</p>","PeriodicalId":94138,"journal":{"name":"Medical ultrasonography","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical ultrasonography","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11152/mu-4521","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Aims: This study aims to estimate the degree to which sonographers are a source of systematic variance in Shear Wave Elastography (SWE) values.
Materials and methods: Two studies estimated variance in SWE measurements: 1) within-subject and between-sonographer differences, and 2) between-sonographer differences alone. Both used a block design in which six trained sonographers scanned the livers of six healthy volunteers on the same machine. Following training, each sonographer obtained ten SWE measurements from the right liver lobe of each volunteer per manufacturer guidelines.
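As a rough illustration of this crossed block design (all effect sizes and the kPa scale below are invented for the sketch, not the study's estimates), each measurement can be modeled as a sum of a subject component, a systematic sonographer component, and residual noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sono, n_vol, n_meas = 6, 6, 10           # study's block design: 6 x 6 x 10
subject_eff = rng.normal(0, 0.5, n_vol)    # true liver-stiffness differences
sono_eff = rng.normal(0, 0.2, n_sono)      # systematic sonographer effect

# swe[i, j, k]: sonographer i, volunteer j, repeated measurement k
swe = (5.0                                  # invented grand mean
       + subject_eff[None, :, None]
       + sono_eff[:, None, None]
       + rng.normal(0, 0.3, (n_sono, n_vol, n_meas)))

print(swe.shape)  # (6, 6, 10)
```

Averaging `swe` over its last axis gives the 6 × 6 sonographer-by-volunteer table that the variance analysis operates on.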
Results: When volunteers were scanned on different days, the intraclass correlation coefficient (ICC) was 0.23; when scanned on the same day, the ICC was 0.83, indicating that 17% of the variability was due to differences between sonographers. This 17% inter-sonographer variability translated into statistically significant, and potentially clinically significant, differences between sonographers: one sonographer had a mean SWE value of 4.99 and another 5.43 (p<0.01), nearly crossing a clinical threshold.
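The abstract does not specify the variance-component model behind these ICCs. One standard choice for a subjects-by-raters design is the two-way random-effects, absolute-agreement ICC(2,1) of Shrout and Fleiss, in which systematic rater (sonographer) differences lower the coefficient. A minimal NumPy sketch (the function name and layout are illustrative, not the authors' code):

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rating,
    for an n-subjects x k-raters matrix of scores."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater (sonographer) means
    # Mean squares from a two-way ANOVA without replication
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfect agreement the coefficient is 1; adding a constant offset to one rater's scores (a systematic "sonographer effect") pulls it below 1 even though the rank ordering of subjects is unchanged.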
Conclusion: SWE values are influenced by a sonographer effect, highlighting the need to standardize protocols to minimize systematic variability between sonographers. Multiple scans are justified for patients with SWE values near clinical thresholds. Since SWE values in healthy volunteers exceeded the manufacturer-defined threshold, inherent variability between sonographers could challenge the reliability of clinical thresholds in practice.