A Neurosurgical Instrument Segmentation Approach to Assess Microsurgical Movements.

Gleb Danilov, Oleg Pilipenko, Vasiliy Kostyumov, Sergey Trubetskoy, Narek Maloyan, Bulat Nutfullin, Eugeniy Ilyushin, David Pitskhelauri, Alexandra Zelenova, Andrey Bykanov
Studies in Health Technology and Informatics, vol. 321, pp. 185-189. Published 2024-11-22. DOI: 10.3233/SHTI241089

Abstract

The ability to recognize anatomical landmarks, microsurgical instruments, and complex scenes and events in a surgical wound using computer vision presents new opportunities for studying microsurgery effectiveness. In this study, we aimed to develop an artificial intelligence-based solution for detecting, segmenting, and tracking microinstruments using a neurosurgical microscope. We developed a technique for processing video from the microscope camera that involves creating a segmentation mask for the instrument and subsequently tracking it. We compared two segmentation approaches: (1) semantic segmentation using Vision Transformers (the pre-trained, domain-specific EndoViT model), enhanced with tracking as described by Cheng Y. et al. (our proposed approach), and (2) instance segmentation with tracking based on the YOLOv8l-seg architecture. We conducted experiments using the CholecSeg8k dataset and our proprietary set of neurosurgical videos (PSNV) recorded from a microscope. Our approach with tracking outperformed the YOLOv8l-seg-based solution and the EndoViT model without tracking on both the CholecSeg8k (mean IoU = 0.8158, mean Dice = 0.8657) and PSNV (mean IoU = 0.7196, mean Dice = 0.8202) datasets. Our experiments in identifying neurosurgical instruments in a microscope's field of view showcase the high quality of these technologies and their potential for valuable applications.
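The abstract reports segmentation quality as mean IoU (intersection over union) and mean Dice coefficient between predicted and ground-truth instrument masks. As a minimal illustration of how these two metrics are computed for binary masks (the toy masks below are hypothetical, not from the paper's data):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

# Toy 4x4 masks (1 = instrument pixel): prediction covers one
# extra pixel compared with the ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])

print(round(iou(pred, gt), 4))   # 3 / 4 = 0.75
print(round(dice(pred, gt), 4))  # 6 / 7 ≈ 0.8571
```

Note that Dice is always at least as large as IoU for the same mask pair, which matches the reported numbers (e.g. 0.7196 IoU vs. 0.8202 Dice on PSNV); dataset-level "mean" values average these per-frame scores.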
