Surgical Skill Evaluation From Robot-Assisted Surgery Recordings
A. Soleymani, A. A. S. Asl, Mojtaba Yeganejou, Scott Dick, M. Tavakoli, Xingyu Li
2021 International Symposium on Medical Robotics (ISMR), published 2021-11-17. DOI: 10.1109/ismr48346.2021.9661527
Quality and safety are critical elements in the performance of surgeries. Therefore, surgical trainees need to obtain the required degree of expertise before operating on patients. Conventionally, a trainee’s performance is evaluated by qualitative methods that are time-consuming and prone to bias. Autonomous, quantitative surgical skill assessment improves the consistency, repeatability, and reliability of the evaluation. To this end, this paper proposes a video-based deep learning framework for surgical skill assessment. By incorporating prior knowledge of the surgeon’s activity into the system design, we decompose the complex task of spatio-temporal representation learning from video recordings into two independent, relatively simple learning processes, which greatly reduces the model size. We evaluate the proposed framework on the publicly available JIGSAWS robotic surgery dataset and demonstrate its capability to effectively learn the underlying features of surgical maneuvers and the dynamic interplay between sequences of actions. A skill-level classification accuracy of 97.27% on this public dataset demonstrates the superiority of the proposed model over prior video-based skill assessment methods. The code for this paper will be made available on GitHub.
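The abstract's two-stage decomposition — learning spatial features from individual frames separately from the temporal dynamics across frames — can be sketched in miniature. Everything below is illustrative and hypothetical, not the authors' implementation: the random-projection "spatial encoder" stands in for a trained per-frame CNN, and mean-pooling plus a linear classifier stands in for the learned temporal model; the three class labels follow the usual JIGSAWS skill levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_encode(frames, proj):
    """Stage 1 (stand-in): map each flattened frame to a compact embedding.

    frames: (T, H*W) array of T flattened frames -> (T, D) embeddings.
    In the paper this stage would be a CNN trained on individual frames.
    """
    return np.tanh(frames @ proj)

def classify_skill(frames, proj, W, b,
                   labels=("novice", "intermediate", "expert")):
    """Stage 2 (stand-in): model the sequence and predict a skill level.

    Here the 'temporal model' is simply mean-pooling over time followed by
    a linear classifier; the paper learns this stage independently of stage 1.
    """
    emb = spatial_encode(frames, proj)   # (T, D) per-frame embeddings
    clip_feature = emb.mean(axis=0)      # temporal pooling -> (D,)
    logits = clip_feature @ W + b        # (3,) class scores
    return labels[int(np.argmax(logits))]

# Toy dimensions: 16 frames of an 8x8 "video", 4-dim embeddings, 3 classes.
T, HW, D = 16, 64, 4
proj = rng.standard_normal((HW, D)) * 0.1   # frozen spatial encoder weights
W = rng.standard_normal((D, 3))             # temporal-stage classifier
b = np.zeros(3)

frames = rng.standard_normal((T, HW))
print(classify_skill(frames, proj, W, b))
```

The point of the sketch is the factorization: because the two stages are trained independently, each one solves a much smaller problem than end-to-end spatio-temporal learning, which is how the paper motivates its reduced model size.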