Open-source platform for automated collection of training data to support video-based feedback in surgical simulators

J. Laframboise, T. Ungi, K. Sunderland, B. Zevin, G. Fichtinger
{"title":"用于自动收集训练数据的开源平台,以支持手术模拟器中基于视频的反馈","authors":"J. Laframboise, T. Ungi, K. Sunderland, B. Zevin, G. Fichtinger","doi":"10.1117/12.2549878","DOIUrl":null,"url":null,"abstract":"Purpose: Surgical training could be improved by automatic detection of workflow steps, and similar applications of image processing. A platform to collect and organize tracking and video data would enable rapid development of image processing solutions for surgical training. The purpose of this research is to demonstrate 3D Slicer / PLUS Toolkit as a platform for automatic labelled data collection and model deployment. Methods: We use PLUS and 3D Slicer to collect a labelled dataset of tools interacting with tissues in simulated hernia repair, comprised of optical tracking data and video data from a camera. To demonstrate the platform, we train a neural network on this data to automatically identify tissues, and the tracking data is used to identify what tool is in use. The solution is deployed with a custom Slicer module. Results: This platform allowed the collection of 128,548 labelled frames, with 98.5% correctly labelled. A CNN was trained on this data and applied to new data with an accuracy of 98%. With minimal code, this model was deployed in 3D Slicer on real-time data at 30fps. Conclusion: We found the 3D Slicer and PLUS Toolkit platform to be a viable platform for collecting labelled training data and deploying a solution that combines automatic video processing and optical tool tracking. We designed an accurate proof-of-concept system to identify tissue-tool interactions with a trained CNN and optical tracking.","PeriodicalId":302939,"journal":{"name":"Medical Imaging: Image-Guided Procedures","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Open-source platform for automated collection of training data to support video-based feedback in surgical simulators\",\"authors\":\"J. Laframboise, T. Ungi, K. Sunderland, B. Zevin, G. Fichtinger\",\"doi\":\"10.1117/12.2549878\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Purpose: Surgical training could be improved by automatic detection of workflow steps, and similar applications of image processing. A platform to collect and organize tracking and video data would enable rapid development of image processing solutions for surgical training. The purpose of this research is to demonstrate 3D Slicer / PLUS Toolkit as a platform for automatic labelled data collection and model deployment. Methods: We use PLUS and 3D Slicer to collect a labelled dataset of tools interacting with tissues in simulated hernia repair, comprised of optical tracking data and video data from a camera. To demonstrate the platform, we train a neural network on this data to automatically identify tissues, and the tracking data is used to identify what tool is in use. The solution is deployed with a custom Slicer module. Results: This platform allowed the collection of 128,548 labelled frames, with 98.5% correctly labelled. A CNN was trained on this data and applied to new data with an accuracy of 98%. With minimal code, this model was deployed in 3D Slicer on real-time data at 30fps. Conclusion: We found the 3D Slicer and PLUS Toolkit platform to be a viable platform for collecting labelled training data and deploying a solution that combines automatic video processing and optical tool tracking. 
We designed an accurate proof-of-concept system to identify tissue-tool interactions with a trained CNN and optical tracking.\",\"PeriodicalId\":302939,\"journal\":{\"name\":\"Medical Imaging: Image-Guided Procedures\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-03-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical Imaging: Image-Guided Procedures\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2549878\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Imaging: Image-Guided Procedures","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2549878","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Purpose: Surgical training could be improved by automatic detection of workflow steps and similar applications of image processing. A platform to collect and organize tracking and video data would enable rapid development of image processing solutions for surgical training. The purpose of this research is to demonstrate 3D Slicer / PLUS Toolkit as a platform for automatic labelled data collection and model deployment.

Methods: We use PLUS and 3D Slicer to collect a labelled dataset of tools interacting with tissues in simulated hernia repair, comprising optical tracking data and video data from a camera. To demonstrate the platform, we train a neural network on this data to automatically identify tissues, and the tracking data is used to identify which tool is in use. The solution is deployed with a custom Slicer module.

Results: This platform allowed the collection of 128,548 labelled frames, with 98.5% correctly labelled. A CNN was trained on this data and applied to new data with an accuracy of 98%. With minimal code, this model was deployed in 3D Slicer on real-time data at 30 fps.

Conclusion: We found the 3D Slicer and PLUS Toolkit platform to be a viable platform for collecting labelled training data and deploying a solution that combines automatic video processing and optical tool tracking. We designed an accurate proof-of-concept system to identify tissue-tool interactions with a trained CNN and optical tracking.
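
The abstract does not include source code, but the data-collection step can be pictured concretely. The following is a minimal sketch, not the authors' implementation, of how synchronized video frames and tool poses could be recorded from 3D Slicer's Python environment, assuming PLUS streams the camera image and the optical tracker transform into Slicer over OpenIGTLink. The node names Image_Image and ToolToReference and the output file name are hypothetical placeholders.

    import numpy as np
    import slicer
    import vtk

    # Live nodes created by the OpenIGTLink connection from PLUS.
    # Both node names are assumptions for this sketch.
    imageNode = slicer.util.getNode('Image_Image')      # camera video frames
    toolNode = slicer.util.getNode('ToolToReference')   # optical tracker transform

    frames = []
    poses = []

    def onFrame(caller, event):
        # Record the current video frame and the simultaneous tool pose.
        frames.append(np.copy(slicer.util.arrayFromVolume(imageNode)))
        m = vtk.vtkMatrix4x4()
        toolNode.GetMatrixTransformToParent(m)
        poses.append([[m.GetElement(r, c) for c in range(4)] for r in range(4)])

    # Fires once per incoming frame; remove the observer to stop recording.
    tag = imageNode.AddObserver(vtk.vtkCommand.ModifiedEvent, onFrame)

    # After the simulated procedure:
    #   imageNode.RemoveObserver(tag)
    #   np.savez('labelled_dataset.npz', frames=np.array(frames), poses=np.array(poses))

Because the tool pose is captured in the same callback as the frame, each image is automatically paired with the tracking data that later determines which tool, and hence which label, applies to it.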
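Deployment on real-time data can be sketched the same way: a custom Slicer module can observe the incoming image node and run the trained network on each new frame. The model file name, class list, input preprocessing, and node name below are assumptions for illustration, not the paper's actual module.

    import numpy as np
    import slicer
    import vtk
    from tensorflow.keras.models import load_model

    # Trained tissue classifier; file name and expected input format are assumed.
    model = load_model('tissue_classifier.h5')
    imageNode = slicer.util.getNode('Image_Image')  # live camera frames (assumed name)

    TISSUE_NAMES = ['background', 'tissue_a', 'tissue_b']  # placeholder class list

    def onFrame(caller, event):
        # Classify each new frame as it arrives from the camera stream.
        frame = np.squeeze(slicer.util.arrayFromVolume(imageNode))  # (H, W, channels)
        x = frame.astype(np.float32) / 255.0  # same normalization as training (assumed)
        prediction = model.predict(x[np.newaxis, ...], verbose=0)
        print('Predicted tissue:', TISSUE_NAMES[int(np.argmax(prediction))])

    imageNode.AddObserver(vtk.vtkCommand.ModifiedEvent, onFrame)

This event-driven pattern, reacting to each image-node update rather than polling, is what lets a small amount of module code keep pace with a 30 fps stream.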