Frank Guido-Sanz, Mindi Anderson, S. Talbert, Desiree A. Díaz, Gregory Welch, A. Tanaka
{"title":"用仿真测试I-BIDS的有效性和可靠性:一种新的切换工具","authors":"Frank Guido-Sanz, Mindi Anderson, S. Talbert, Desiree A. Díaz, Gregory Welch, A. Tanaka","doi":"10.1177/10468781221098567","DOIUrl":null,"url":null,"abstract":"Background Patient safety and improved outcomes are core priorities in healthcare, and effective handoffs are essential to these priorities. Validating handoff tools using simulation is a novel approach. Methods The construct validity and instrument reliability of the I-BIDS© tool were tested. In Phase I, construct validity was substantiated with a convenience sample of 21 healthcare providers through an electronic survey. Content Validity Ratio (CVR) was tabulated using Lawshe’s CVR. Interrater reliability was tested in a simulated handoff scenario, in Phase II, with graduate nursing students and two raters, and simulation effectiveness was assessed by students. Results Construct validity was evaluated, and 17 of the 25 items were found significant at the critical level (0.42). Items scoring below were removed, and the tool was reduced by one category. Weighted kappa (Kw) with quadratic weights was run from the scenario data to determine if there was an agreement between raters of handoff performance. There was a statistically significant agreement between the two raters, Kw = .627 (95% CI: .549–.705), p < .001) with good strength of the agreement. SET-M Total mean was 55.64 (SD = 2.46). Discussion The tool showed beginning validity and interrater reliability. The SET-M Learning subscale showed the widest range of scores which suggests the most opportunity for improvement. Use of the tool in simulated scenarios may be one way to test the items further. Conclusions Simulation was effective in facilitating the evaluation of the tool.","PeriodicalId":47521,"journal":{"name":"SIMULATION & GAMING","volume":null,"pages":null},"PeriodicalIF":1.5000,"publicationDate":"2022-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Using Simulation to Test Validity and Reliability of I-BIDS: A New Handoff Tool\",\"authors\":\"Frank Guido-Sanz, Mindi Anderson, S. Talbert, Desiree A. Díaz, Gregory Welch, A. Tanaka\",\"doi\":\"10.1177/10468781221098567\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Background Patient safety and improved outcomes are core priorities in healthcare, and effective handoffs are essential to these priorities. Validating handoff tools using simulation is a novel approach. Methods The construct validity and instrument reliability of the I-BIDS© tool were tested. In Phase I, construct validity was substantiated with a convenience sample of 21 healthcare providers through an electronic survey. Content Validity Ratio (CVR) was tabulated using Lawshe’s CVR. Interrater reliability was tested in a simulated handoff scenario, in Phase II, with graduate nursing students and two raters, and simulation effectiveness was assessed by students. Results Construct validity was evaluated, and 17 of the 25 items were found significant at the critical level (0.42). Items scoring below were removed, and the tool was reduced by one category. Weighted kappa (Kw) with quadratic weights was run from the scenario data to determine if there was an agreement between raters of handoff performance. There was a statistically significant agreement between the two raters, Kw = .627 (95% CI: .549–.705), p < .001) with good strength of the agreement. SET-M Total mean was 55.64 (SD = 2.46). 
Discussion The tool showed beginning validity and interrater reliability. The SET-M Learning subscale showed the widest range of scores which suggests the most opportunity for improvement. Use of the tool in simulated scenarios may be one way to test the items further. Conclusions Simulation was effective in facilitating the evaluation of the tool.\",\"PeriodicalId\":47521,\"journal\":{\"name\":\"SIMULATION & GAMING\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2022-05-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIMULATION & GAMING\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/10468781221098567\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIMULATION & GAMING","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/10468781221098567","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Using Simulation to Test Validity and Reliability of I-BIDS: A New Handoff Tool
Background: Patient safety and improved outcomes are core priorities in healthcare, and effective handoffs are essential to these priorities. Validating handoff tools using simulation is a novel approach.

Methods: The construct validity and instrument reliability of the I-BIDS© tool were tested. In Phase I, construct validity was substantiated with a convenience sample of 21 healthcare providers through an electronic survey. The Content Validity Ratio (CVR) was tabulated using Lawshe's CVR. In Phase II, interrater reliability was tested in a simulated handoff scenario with graduate nursing students and two raters, and simulation effectiveness was assessed by the students.

Results: Construct validity was evaluated, and 17 of the 25 items were found significant at the critical level (0.42). Items scoring below this threshold were removed, and the tool was reduced by one category. Weighted kappa (Kw) with quadratic weights was run on the scenario data to determine whether there was agreement between raters of handoff performance. There was statistically significant agreement between the two raters, Kw = .627 (95% CI: .549–.705, p < .001), with good strength of agreement. The SET-M Total mean was 55.64 (SD = 2.46).

Discussion: The tool showed beginning validity and interrater reliability. The SET-M Learning subscale showed the widest range of scores, which suggests the most opportunity for improvement. Use of the tool in simulated scenarios may be one way to test the items further.

Conclusions: Simulation was effective in facilitating the evaluation of the tool.
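The abstract names two statistics: Lawshe's Content Validity Ratio with a critical value of 0.42 for the 21-provider panel, and a quadratic-weighted kappa between two raters. The sketch below is not the authors' code; it only illustrates how these two quantities are conventionally computed. The panel size, item count, and 0.42 cutoff follow the abstract, while the survey tallies, rater scores, and all variable names are hypothetical placeholders.

```python
# Illustrative sketch only: Lawshe's CVR item screening and a
# quadratic-weighted kappa, using made-up data in the shapes the
# abstract describes (25 items, 21 panelists, two raters).
import numpy as np
from sklearn.metrics import cohen_kappa_score


def lawshe_cvr(essential_counts, n_panelists):
    """CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists
    who rated the item 'essential' and N is the panel size."""
    half = n_panelists / 2
    return (np.asarray(essential_counts) - half) / half


# Hypothetical tallies of 'essential' ratings for 25 items from 21 providers.
essential_counts = np.random.default_rng(0).integers(10, 22, size=25)
cvr = lawshe_cvr(essential_counts, n_panelists=21)

# Keep only items at or above the critical CVR reported in the study (0.42).
retained_items = np.where(cvr >= 0.42)[0]
print(f"Items retained: {len(retained_items)} of 25")

# Interrater reliability: quadratic-weighted kappa between two raters'
# ordinal scores of the same simulated handoffs (hypothetical scores).
rater_a = [3, 4, 2, 5, 4, 3, 5, 2, 4, 3]
rater_b = [3, 4, 3, 5, 4, 3, 4, 2, 4, 4]
kw = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa (quadratic): {kw:.3f}")
```

Quadratic weights penalize larger disagreements between the two raters more heavily than adjacent-category disagreements, which is why they are a common choice for ordinal performance ratings such as handoff scores.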