Lisa A Dudley, Craig A Smith, Brandon K Olson, Nicole J Chimera, Brian Schmitz, Meghan Warren
DOI: 10.1155/2013/483503 (https://doi.org/10.1155/2013/483503)
Journal volume 2013, article 483503. Published 2013-01-01 (Epub 2013-12-16).
Cited by: 28
Abstract
Interrater and Intrarater Reliability of the Tuck Jump Assessment by Health Professionals of Varied Educational Backgrounds.
Objective. The Tuck Jump Assessment (TJA), a clinical plyometric assessment, identifies 10 jumping and landing technique flaws. The study objective was to investigate TJA interrater and intrarater reliability with raters of different educational and clinical backgrounds.

Methods. Forty participants were video recorded performing the TJA using the published protocol and instructions. Five raters of varied educational and clinical backgrounds scored the TJA. The scores for the 10 technique flaws were summed to give the total TJA score. Approximately one month later, 3 raters scored the videos again. Intraclass correlation coefficients determined interrater (5 and 3 raters for the first and second sessions, respectively) and intrarater (3 raters) reliability.

Results. Interrater reliability with 5 raters was poor (ICC = 0.47; 95% confidence interval (CI) 0.33-0.62). Interrater reliability among the 3 raters who completed 2 scoring sessions improved from 0.52 (95% CI 0.35-0.68) for session one to 0.69 (95% CI 0.55-0.81) for session two. Intrarater reliability was poor to moderate, ranging from 0.44 (95% CI 0.22-0.68) to 0.72 (95% CI 0.55-0.84).

Conclusion. The published protocol and training of raters were insufficient to allow consistent TJA scoring. There may be a learning effect with the TJA, since interrater reliability improved with repetition. TJA instructions and training should be modified and enhanced before clinical implementation.
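The reliability figures above are intraclass correlation coefficients. The abstract does not state which ICC model the authors used, but a common choice for multi-rater agreement studies like this one is the two-way random-effects, single-rater form, ICC(2,1). A minimal sketch of that computation (the `ratings` matrix here is illustrative, not the study's data):

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, single-rater ICC(2,1).

    ratings: (n_subjects, k_raters) array of scores, e.g. total TJA scores
    assigned to each participant by each rater.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-subject means
    col_means = ratings.mean(axis=0)  # per-rater means

    # Sums of squares from the two-way ANOVA decomposition.
    ss_rows = k * np.sum((row_means - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((col_means - grand) ** 2)   # between raters
    ss_total = np.sum((ratings - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols            # residual

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: 4 participants scored by 2 raters.
scores = np.array([[3., 4.],
                   [5., 5.],
                   [7., 6.],
                   [4., 3.]])
print(round(icc_2_1(scores), 2))
```

With real data, each row would hold one participant's total TJA score from every rater; values below roughly 0.5, as reported for the 5-rater session, are conventionally interpreted as poor reliability.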