Open-source platform for automated collection of training data to support video-based feedback in surgical simulators
J. Laframboise, T. Ungi, K. Sunderland, B. Zevin, G. Fichtinger
Medical Imaging: Image-Guided Procedures, 2020-03-16. DOI: 10.1117/12.2549878
Abstract
Purpose: Surgical training could be improved by automatic detection of workflow steps and similar applications of image processing. A platform to collect and organize tracking and video data would enable rapid development of image-processing solutions for surgical training. The purpose of this research is to demonstrate 3D Slicer / PLUS Toolkit as a platform for automatic labelled data collection and model deployment. Methods: We used PLUS and 3D Slicer to collect a labelled dataset of tools interacting with tissues in simulated hernia repair, comprising optical tracking data and video data from a camera. To demonstrate the platform, we trained a neural network on these data to automatically identify tissues, and used the tracking data to identify which tool is in use. The solution was deployed as a custom Slicer module. Results: The platform allowed the collection of 128,548 labelled frames, 98.5% of which were correctly labelled. A CNN trained on these data was applied to new data with an accuracy of 98%. With minimal code, the model was deployed in 3D Slicer on real-time data at 30 fps. Conclusion: We found 3D Slicer with the PLUS Toolkit to be a viable platform for collecting labelled training data and deploying a solution that combines automatic video processing with optical tool tracking. We designed an accurate proof-of-concept system that identifies tissue-tool interactions using a trained CNN and optical tracking.
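The abstract describes labelling frames automatically by combining optical tracking with known scene geometry. The sketch below is a hypothetical illustration of that idea, not the authors' code: it assigns each frame a tissue label based on which (assumed) tissue region the tracked tool tip is nearest to, returning no label when the tool is out of range. All names, coordinates, and the distance threshold are assumptions for illustration only.

```python
import numpy as np

# Hypothetical tissue-region centers in tracker coordinates (mm).
# In a real setup these would come from a calibrated simulator phantom.
TISSUE_CENTERS = {
    "skin": np.array([0.0, 0.0, 0.0]),
    "fascia": np.array([50.0, 0.0, 0.0]),
}

def label_frame(tool_tip_mm, max_dist_mm=20.0):
    """Return the label of the nearest tissue region to the tracked tool
    tip, or None if no region lies within max_dist_mm (assumed threshold)."""
    best_label, best_dist = None, max_dist_mm
    for label, center in TISSUE_CENTERS.items():
        d = float(np.linalg.norm(tool_tip_mm - center))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

Running such a function on every tracked frame would produce the kind of labelled video dataset the paper uses for CNN training, with frames far from any tissue left unlabelled.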