A Neurosurgical Instrument Segmentation Approach to Assess Microsurgical Movements
Gleb Danilov, Oleg Pilipenko, Vasiliy Kostyumov, Sergey Trubetskoy, Narek Maloyan, Bulat Nutfullin, Eugeniy Ilyushin, David Pitskhelauri, Alexandra Zelenova, Andrey Bykanov
Studies in Health Technology and Informatics, vol. 321, pp. 185–189, published 2024-11-22. DOI: 10.3233/SHTI241089
Citations: 0
Abstract
The ability to recognize anatomical landmarks, microsurgical instruments, and complex scenes and events in a surgical wound using computer vision presents new opportunities for studying microsurgery effectiveness. In this study, we aimed to develop an artificial intelligence-based solution for detecting, segmenting, and tracking microinstruments using a neurosurgical microscope. We developed a technique for processing videos from the microscope camera that involves creating a segmentation mask for each instrument and subsequently tracking it. We compared two segmentation approaches: (1) semantic segmentation using Vision Transformers (the pre-trained, domain-specific EndoViT model), enhanced with tracking as described by Cheng Y. et al. (our proposed approach), and (2) instance segmentation with tracking based on the YOLOv8l-seg architecture. We conducted experiments using the CholecSeg8k dataset and our proprietary set of neurosurgical videos (PSNV) recorded from a microscope. Our tracking-enhanced approach outperformed the YOLOv8l-seg-based solution and the EndoViT model without tracking on both the CholecSeg8k (mean IoU = 0.8158, mean Dice = 0.8657) and PSNV (mean IoU = 0.7196, mean Dice = 0.8202) datasets. Our experiments with identifying neurosurgical instruments in a microscope's field of view demonstrate the high quality of these technologies and their potential for valuable applications.
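The two reported metrics are closely related: for binary masks, Dice = 2·IoU / (1 + IoU). A minimal sketch of how per-frame IoU and Dice might be computed from predicted and ground-truth instrument masks is shown below; the abstract does not specify the exact evaluation protocol (e.g., per-frame vs. per-video averaging), so the averaging step here is an assumption.

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """IoU and Dice between two binary segmentation masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention assumed here: two empty masks count as a perfect match.
    iou = intersection / union if union > 0 else 1.0
    dice = 2.0 * intersection / denom if denom > 0 else 1.0
    return float(iou), float(dice)

# Hypothetical usage: average metrics over a list of (pred, gt) frame pairs.
def mean_metrics(frames: list[tuple[np.ndarray, np.ndarray]]) -> tuple[float, float]:
    scores = [iou_and_dice(p, g) for p, g in frames]
    ious, dices = zip(*scores)
    return float(np.mean(ious)), float(np.mean(dices))
```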
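For the YOLOv8l-seg baseline, instance segmentation with tracking is available out of the box in the Ultralytics API. The sketch below shows the general pattern, assuming the stock `yolov8l-seg.pt` weights; the authors presumably fine-tuned on surgical data, and `surgery.mp4` is a hypothetical input video, not a file from the study.

```python
from ultralytics import YOLO

# Load the YOLOv8l segmentation model (generic COCO weights; the paper's
# model would have been fine-tuned on surgical instrument data).
model = YOLO("yolov8l-seg.pt")

# persist=True keeps tracker IDs consistent across consecutive frames;
# stream=True yields results frame by frame instead of buffering the video.
for result in model.track(source="surgery.mp4", persist=True, stream=True):
    if result.masks is not None:
        masks = result.masks.data  # (N, H, W) per-instance binary masks
        ids = result.boxes.id      # tracker-assigned IDs (may be None early on)
```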