'Rise of the Machines': Human Factors and training for robotic-assisted surgery.
Fiona Kerray, Steven Yule
BMJ Surgery, Interventions & Health Technologies · DOI: 10.1136/bmjsit-2021-000100 · Published 18 October 2021
Abstract
© Author(s) (or their employer(s)) 2021. Reuse permitted under CC BY-NC. No commercial reuse. See rights and permissions. Published by BMJ.

Surgery is perpetually at the cutting edge of innovation. As in other innovative industries, the rate of uptake of new technology often outstrips a comprehensive understanding of the systems changes and safety implications that follow. Robotic-assisted surgery (RAS) offers potential benefits to patients, including shorter hospital stays, reduced postoperative pain and quicker recovery times. However, patient safety incidents may be as much as double those of traditional open surgery, revealing the cost of new technology integration and recalling the rise of laparoscopic surgery in the early 1990s. Along with a supportive culture and effective systems, high-quality training is one of the foundations of successful technology adoption.

In the present issue of BMJ Surgery, Interventions & Health Technologies, Butterworth et al present an in-depth training programme for robotic-assisted surgery, focusing on one specific surgical robot. The authors have developed what appears to be a comprehensive hybrid training programme, combining online education with subsequent face-to-face simulation and cadaver sessions involving real surgical teams. The study provides initial validity evidence, which is important for technology implementation, with the ultimate aim of a training programme that equips surgeons to expertly embed robotic surgery within their practice. The aim of our editorial is to provide a constructive critique regarding validity and to introduce the role of Human Factors in the successful implementation and evaluation of RAS training.

Like many applied studies of this type, this one has conceptual and methodological limitations that constrain the validity of its findings and that apply broadly to surgical education research. Butterworth et al aimed to evaluate the effectiveness of the training programme; however, without defined standards it is unclear whether the training was intended to improve surgeons' technical ability or to train them to proficiency. Implementing a validity framework such as Kirkpatrick's can be invaluable in this respect, as it allows researchers to evaluate both formal and informal training methods against four levels of criteria: reactions (did the training meet surgeons' needs?), learning (has knowledge or skill increased?), behaviour (can surgeons now apply robotic surgical skills in real life?) and results (has training improved outcomes and safety?). Applying Kirkpatrick's lens to the present study, the strongest evidence sits at level 2, learning, as there is some evidence of participant skill improving. However, 2 of the 17 surgeons moved from intermediate to novice level, which means the training was not universally successful and may even have been counterproductive. Participants' heterogeneous prior experience of surgery and robotics, combined with the modest sample size and its subsequent stratification for analysis, may have reduced the accuracy of the results. Furthermore, the prerequisite online training seems important for maximising on-site hands-on time; however, its lengthy duration (10 hours) and the lack of detail on content, objectives and assessment raise questions about how well it met surgeons' needs.

A second validity framework, Kane's, is particularly helpful in the design of surgical education trials, as it forces the researcher to justify very clearly the purpose, target sample and context of intended impact. The framework tests validity evidence against four inferences: scoring, generalisation, extrapolation and implications. For the first inference, scoring, Butterworth et al implemented the Global Evaluative Assessment of Robotic Skills (GEARS) tool. This is a strength of the study, as the tool has validity evidence; however, while a statistically significant improvement in GEARS score is encouraging, it does not equate to competence. In the study, expected GEARS scores for each group were calculated from only three previous studies.