Comparison of remote and in-person respiratory function grading of brachycephalic dogs.
Zong H Peng, Kathleen M Ham, Jane Ladlow, Carrie Stefaniak, Nicholas D Jeffery, Kelley M Thieman Mankin
Veterinary Surgery. Published online 2024-10-02. DOI: 10.1111/vsu.14171 (https://doi.org/10.1111/vsu.14171)
Abstract
Objective: To compare the reliability of respiratory function grading (RFG) scores assigned in person and remotely via video and electronic stethoscope recordings, evaluated by novice and expert graders.
Study design: Prospective study.
Sample population: Fifty-seven brachycephalic dogs.
Methods: Dogs were evaluated in person by expert graders and RFG scores were assigned. Audio and video recordings were made during the in-person evaluations. Four expert and four novice graders evaluated the recordings and assigned an RFG score to each dog. Agreement between in-person and remote RFG scores was assessed using Cohen's kappa statistic. Interobserver reliability was assessed using Fleiss' kappa statistic.
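As a rough illustration of the two statistics named above, the sketch below computes Cohen's kappa (chance-corrected agreement between one remote grader and the in-person reference) and Fleiss' kappa (agreement among a panel of graders) from first principles. The grades are invented for demonstration only and are not the study data.

```python
import numpy as np

def cohens_kappa(a, b, k=4):
    """Agreement between two raters on grades 0..k-1, corrected for chance."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    p_o = np.mean(a == b)                        # observed agreement
    pa = np.bincount(a, minlength=k) / n         # rater A's grade frequencies
    pb = np.bincount(b, minlength=k) / n         # rater B's grade frequencies
    p_e = np.sum(pa * pb)                        # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

def fleiss_kappa(ratings, k=4):
    """Agreement among m raters; ratings has shape (n_subjects, m_raters)."""
    ratings = np.asarray(ratings)
    n, m = ratings.shape
    # counts[i, c] = number of raters who assigned grade c to subject i
    counts = np.stack([np.bincount(row, minlength=k) for row in ratings])
    p_i = (np.sum(counts**2, axis=1) - m) / (m * (m - 1))  # per-subject agreement
    p_o = p_i.mean()
    p_c = counts.sum(axis=0) / (n * m)           # overall grade proportions
    p_e = np.sum(p_c**2)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical grades (0-3) for 10 dogs: in-person reference vs. one remote grader.
in_person = [0, 1, 2, 1, 0, 2, 2, 1, 3, 2]
remote    = [0, 1, 1, 1, 0, 2, 2, 2, 2, 2]
print(f"Cohen's kappa: {cohens_kappa(in_person, remote):.2f}")

# Hypothetical remote grades from four graders for the same 10 dogs.
panel = np.array([remote,
                  [0, 1, 2, 2, 0, 2, 1, 1, 3, 2],
                  [1, 1, 2, 1, 0, 3, 2, 1, 2, 2],
                  [0, 2, 2, 1, 1, 2, 2, 1, 3, 2]]).T
print(f"Fleiss' kappa: {fleiss_kappa(panel):.2f}")
```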
Results: The median RFG score from the in-person assessment was 1 (range, 0-3). Distribution of RFG scores included 12 grade 0 scores, 19 grade 1 scores, 25 grade 2 scores, and 1 grade 3 score. The raw percentage agreements between remote and in-person scores were 68.4%, 59.6%, 64.9%, and 61.4% for the four experts, and 52.6%, 64.9%, 50.9%, and 42.1% for the four novices. Reliability between remote and in-person RFG scores was poor to moderate both for the experts (Cohen's kappa: .48, .37, .46, .41) and novices (Cohen's kappa: .28, .47, .28, .21). Interobserver reliability was moderate among the experts (Fleiss' kappa: .59) and poor among the novices (Fleiss' kappa: .39).
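Raw percentage agreement overstates reliability because graders who match by chance still count as agreeing; kappa discounts exactly that. As a quick check using only the figures reported for the first expert (68.4% raw agreement, Cohen's kappa .48), the implied chance-agreement level works out to roughly 39%:

```python
# Back out the chance agreement implied by the first expert's reported
# numbers (raw agreement p_o = .684, Cohen's kappa = .48), using
# kappa = (p_o - p_e) / (1 - p_e) rearranged for p_e.
p_o, kappa = 0.684, 0.48
p_e = (p_o - kappa) / (1 - kappa)
print(f"implied chance agreement: {p_e:.1%}")  # ~39.2%
```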
Conclusion: Remote RFG scores had poor to moderate interassessment and interobserver reliability. Novice evaluators performed worse than experts for remote or in-person RFG evaluations.
Clinical significance: Remote evaluation, as performed in this study, is not reliable for assigning RFG scores. Modifications to the remote evaluation protocol could improve its reliability. Given the performance of the novice evaluators, training of evaluators is justified.
About the journal:
Veterinary Surgery, the official publication of the American College of Veterinary Surgeons and European College of Veterinary Surgeons, is a source of up-to-date coverage of surgical and anesthetic management of animals, addressing significant problems in veterinary surgery with relevant case histories and observations.
It contains original, peer-reviewed articles that cover developments in veterinary surgery, and presents the most current review of the field, with timely articles on surgical techniques, diagnostic aims, care of infections, and advances in knowledge of metabolism as it affects the surgical patient. The journal places new developments in perspective, encompassing new concepts and peer commentary that help readers better understand and evaluate the surgical patient.