Artificial Intelligence for context-aware surgical guidance in complex robot-assisted oncological procedures: an exploratory feasibility study

Pub Date: 2022-05-03 | DOI: 10.1101/2022.05.02.22274561
F. Kolbinger, S. Leger, Matthias Carstens, F. Rinner, Stefanie Krell, Alexander D. Chernykh, T. P. Nielen, S. Bodenstedt, T. Welsch, J. Kirchberg, J. Fritzmann, Jürgen Weitz, M. Distler, S. Speidel
{"title":"Artificial Intelligence for context-aware surgical guidance in complex robot-assisted oncological procedures: an exploratory feasibility study","authors":"F. Kolbinger, S. Leger, Matthias Carstens, F. Rinner, Stefanie Krell, Alexander D. Chernykh, T. P. Nielen, S. Bodenstedt, T. Welsch, J. Kirchberg, J. Fritzmann, Jürgen Weitz, M. Distler, S. Speidel","doi":"10.1101/2022.05.02.22274561","DOIUrl":null,"url":null,"abstract":"Background: Complex oncological procedures pose various surgical challenges including dissection in distinct tissue planes and preservation of vulnerable anatomical structures throughout different surgical phases. In rectal surgery, a violation of dissection planes increases the risk of local recurrence and autonomous nerve damage resulting in incontinence and sexual dysfunction. While deep learning-based identification of target structures has been described in basic laparoscopic procedures, feasibility of artificial intelligence-based guidance has not yet been investigated in complex abdominal surgery. Methods: A dataset of 57 robot-assisted rectal resection (RARR) videos was split into a pre-training dataset of 24 temporally non-annotated videos and a training dataset of 33 temporally annotated videos. Based on phase annotations and pixel-wise annotations of randomly selected image frames, convolutional neural networks were trained to distinguish surgical phases and phase-specifically segment anatomical structures and tissue planes. To evaluate model performance, F1 score, Intersection-over-Union (IoU), precision, recall, and specificity were determined. Results: We demonstrate that both temporal (average F1 score for surgical phase recognition: 0.78) and spatial features of complex surgeries can be identified using machine learning-based image analysis. Based on analysis of a total of 8797 images with pixel-wise target structure segmentations, mean IoUs for segmentation of anatomical target structures range from 0.09 to 0.82 and from 0.05 to 0.32 for dissection planes and dissection lines throughout different phases of RARR in our analysis. Conclusions: Image-based recognition is a promising technique for surgical guidance in complex surgical procedures. Future research should investigate clinical applicability, usability, and therapeutic impact of a respective guidance system.","PeriodicalId":0,"journal":{"name":"","volume":" ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2022.05.02.22274561","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Background: Complex oncological procedures pose various surgical challenges, including dissection in distinct tissue planes and preservation of vulnerable anatomical structures throughout different surgical phases. In rectal surgery, violation of dissection planes increases the risk of local recurrence and of autonomic nerve damage resulting in incontinence and sexual dysfunction. While deep learning-based identification of target structures has been described in basic laparoscopic procedures, the feasibility of artificial intelligence-based guidance has not yet been investigated in complex abdominal surgery. Methods: A dataset of 57 robot-assisted rectal resection (RARR) videos was split into a pre-training dataset of 24 temporally non-annotated videos and a training dataset of 33 temporally annotated videos. Based on phase annotations and pixel-wise annotations of randomly selected image frames, convolutional neural networks were trained to distinguish surgical phases and to segment anatomical structures and tissue planes in a phase-specific manner. To evaluate model performance, F1 score, Intersection-over-Union (IoU), precision, recall, and specificity were determined. Results: We demonstrate that both temporal features (average F1 score for surgical phase recognition: 0.78) and spatial features of complex surgeries can be identified using machine learning-based image analysis. Based on analysis of a total of 8797 images with pixel-wise target structure segmentations, mean IoUs ranged from 0.09 to 0.82 for segmentation of anatomical target structures and from 0.05 to 0.32 for dissection planes and dissection lines across the different phases of RARR. Conclusions: Image-based recognition is a promising technique for surgical guidance in complex surgical procedures. Future research should investigate the clinical applicability, usability, and therapeutic impact of such a guidance system.
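To make the evaluation metrics named in the Methods concrete, the sketch below shows how IoU, precision, recall, specificity, and F1 can be computed per frame from a predicted binary mask and its pixel-wise annotation. This is a minimal illustration under stated assumptions (the function name `segmentation_metrics`, the array shapes, and the synthetic example masks are hypothetical), not the authors' actual evaluation code.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> dict:
    """Pixel-wise metrics for one binary segmentation mask.

    `pred` and `target` are arrays of identical shape; True (or 1) marks
    pixels assigned to the target structure.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()      # true positives
    fp = np.logical_and(pred, ~target).sum()     # false positives
    fn = np.logical_and(~pred, target).sum()     # false negatives
    tn = np.logical_and(~pred, ~target).sum()    # true negatives

    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)                # sensitivity
    specificity = tn / (tn + fp + eps)
    iou = tp / (tp + fp + fn + eps)              # Intersection-over-Union
    f1 = 2 * precision * recall / (precision + recall + eps)

    return {"precision": precision, "recall": recall,
            "specificity": specificity, "iou": iou, "f1": f1}


if __name__ == "__main__":
    # Synthetic example: a stand-in annotation and a slightly perturbed prediction.
    rng = np.random.default_rng(0)
    target = rng.random((480, 854)) > 0.7
    pred = target.copy()
    pred[::50] = ~pred[::50]                     # simulate model error on some rows
    print(segmentation_metrics(pred, target))
```

In practice, per-frame scores of this kind would be averaged over all annotated frames of a phase to obtain the phase-specific mean IoUs reported in the Results.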