Development of an artificial intelligence algorithm for automated surgical gestures annotation.

IF 2.2 | CAS Tier 3 (Medicine) | JCR Q2 (SURGERY)
Rikke Groth Olsen, Flemming Bjerrum, Annarita Ghosh Andersen, Lars Konge, Andreas Røder, Morten Bo Søndergaard Svendsen
{"title":"一种用于自动手术手势注释的人工智能算法的开发。","authors":"Rikke Groth Olsen, Flemming Bjerrum, Annarita Ghosh Andersen, Lars Konge, Andreas Røder, Morten Bo Søndergaard Svendsen","doi":"10.1007/s11701-025-02556-2","DOIUrl":null,"url":null,"abstract":"<p><p>Surgical gestures analysis is a promising method to assess surgical procedure quality, but manual annotation is time-consuming. We aimed to develop a recurrent neural network for automated surgical gesture annotation using simulated robot-assisted radical prostatectomies. We have previously manually annotated 161 videos with five different surgical gestures (Regular dissection, Hemostatic control, Clip application, Needle handling, and Suturing). We created a model consisting of two neural networks: a pre-trained feature extractor (VisionTransformer using Imagenet) and a classification head (recurrent neural network with a Long Short-Term Memory (LSTM(128) and fully connected layer)). The data set was split into a training + validation set and a test set. The trained model labeled input sequences with one of the five surgical gestures. The overall performance of the neural networks was assessed by metrics for multi-label classification and defined Total Agreement, an extended version of Intersection over Union (IoU). Our neural network could predict the class of surgical gestures with an Area Under the Curve (AUC) of 0.95 (95% CI 0.93-0.96) and an F1-score of 0.71 (95% CI 0.67-0.75). The network could classify each surgical gesture with high accuracies (0.84-0.97) and high specificities (0.90-0.99), but with lower sensitivities (0.62-0.81). The average Total Agreement for each gesture class was between 0.72 (95% CI ± 0.03) and 0.91 (95% CI ± 0.02). We successfully developed a high-performing neural network to analyze gestures in simulated surgical procedures. Our next step is to use the network to annotate videos and evaluate their efficacy in predicting patient outcomes.</p>","PeriodicalId":47616,"journal":{"name":"Journal of Robotic Surgery","volume":"19 1","pages":"404"},"PeriodicalIF":2.2000,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12274238/pdf/","citationCount":"0","resultStr":"{\"title\":\"Development of an artificial intelligence algorithm for automated surgical gestures annotation.\",\"authors\":\"Rikke Groth Olsen, Flemming Bjerrum, Annarita Ghosh Andersen, Lars Konge, Andreas Røder, Morten Bo Søndergaard Svendsen\",\"doi\":\"10.1007/s11701-025-02556-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Surgical gestures analysis is a promising method to assess surgical procedure quality, but manual annotation is time-consuming. We aimed to develop a recurrent neural network for automated surgical gesture annotation using simulated robot-assisted radical prostatectomies. We have previously manually annotated 161 videos with five different surgical gestures (Regular dissection, Hemostatic control, Clip application, Needle handling, and Suturing). We created a model consisting of two neural networks: a pre-trained feature extractor (VisionTransformer using Imagenet) and a classification head (recurrent neural network with a Long Short-Term Memory (LSTM(128) and fully connected layer)). The data set was split into a training + validation set and a test set. The trained model labeled input sequences with one of the five surgical gestures. 
The overall performance of the neural networks was assessed by metrics for multi-label classification and defined Total Agreement, an extended version of Intersection over Union (IoU). Our neural network could predict the class of surgical gestures with an Area Under the Curve (AUC) of 0.95 (95% CI 0.93-0.96) and an F1-score of 0.71 (95% CI 0.67-0.75). The network could classify each surgical gesture with high accuracies (0.84-0.97) and high specificities (0.90-0.99), but with lower sensitivities (0.62-0.81). The average Total Agreement for each gesture class was between 0.72 (95% CI ± 0.03) and 0.91 (95% CI ± 0.02). We successfully developed a high-performing neural network to analyze gestures in simulated surgical procedures. Our next step is to use the network to annotate videos and evaluate their efficacy in predicting patient outcomes.</p>\",\"PeriodicalId\":47616,\"journal\":{\"name\":\"Journal of Robotic Surgery\",\"volume\":\"19 1\",\"pages\":\"404\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12274238/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Robotic Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s11701-025-02556-2\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"SURGERY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Robotic Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s11701-025-02556-2","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"SURGERY","Score":null,"Total":0}
Citations: 0

Abstract


Surgical gesture analysis is a promising method to assess surgical procedure quality, but manual annotation is time-consuming. We aimed to develop a recurrent neural network for automated surgical gesture annotation using simulated robot-assisted radical prostatectomies. We had previously manually annotated 161 videos with five different surgical gestures (Regular dissection, Hemostatic control, Clip application, Needle handling, and Suturing). We created a model consisting of two neural networks: a pre-trained feature extractor (a Vision Transformer pre-trained on ImageNet) and a classification head (a recurrent neural network with a Long Short-Term Memory layer (LSTM(128)) and a fully connected layer). The data set was split into a training + validation set and a test set. The trained model labeled input sequences with one of the five surgical gestures. The overall performance of the neural networks was assessed with standard multi-label classification metrics and a purpose-defined Total Agreement measure, an extended version of Intersection over Union (IoU). Our neural network predicted the class of surgical gestures with an Area Under the Curve (AUC) of 0.95 (95% CI 0.93-0.96) and an F1-score of 0.71 (95% CI 0.67-0.75). The network classified each surgical gesture with high accuracy (0.84-0.97) and high specificity (0.90-0.99), but with lower sensitivity (0.62-0.81). The average Total Agreement for each gesture class was between 0.72 (95% CI ± 0.03) and 0.91 (95% CI ± 0.02). We successfully developed a high-performing neural network to analyze gestures in simulated surgical procedures. Our next step is to use the network to annotate videos and evaluate the annotations' efficacy in predicting patient outcomes.
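A minimal sketch of the two-stage architecture the abstract describes: a frozen, ImageNet-pre-trained Vision Transformer as a per-frame feature extractor, followed by an LSTM(128) plus a fully connected layer that assigns one of the five gestures to a frame sequence. The specific ViT variant (vit_base_patch16_224 via timm), the frozen backbone, and the clip length are assumptions for illustration; the abstract does not specify them.

```python
import torch
import torch.nn as nn
import timm  # assumption: the authors may have used a different ViT implementation


class GestureAnnotator(nn.Module):
    NUM_GESTURES = 5  # Regular dissection, Hemostatic control, Clip application,
                      # Needle handling, Suturing

    def __init__(self):
        super().__init__()
        # Pre-trained feature extractor; num_classes=0 strips the classifier
        # so the backbone returns a 768-dim embedding per frame.
        self.backbone = timm.create_model(
            "vit_base_patch16_224", pretrained=True, num_classes=0
        )
        for p in self.backbone.parameters():
            p.requires_grad = False  # assumption: backbone kept frozen
        # Classification head: recurrent network over the frame embeddings.
        self.lstm = nn.LSTM(input_size=768, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, self.NUM_GESTURES)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, 224, 224) -> logits: (batch, NUM_GESTURES)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))  # (b*t, 768)
        feats = feats.view(b, t, -1)                 # (b, t, 768)
        _, (h_n, _) = self.lstm(feats)               # h_n: (1, b, 128)
        return self.fc(h_n[-1])                      # one gesture label per sequence


model = GestureAnnotator().eval()
with torch.no_grad():
    logits = model(torch.randn(2, 16, 3, 224, 224))  # 2 clips of 16 frames
print(logits.shape)  # torch.Size([2, 5])
```

The sequence-level output matches the abstract's statement that the trained model labels each input sequence with a single gesture.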

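The abstract defines Total Agreement only as an extension of Intersection over Union (IoU). As an illustration, a plain frame-level IoU per gesture class can be computed from two label sequences (manual vs. predicted annotations); the paper's actual extension may differ from this sketch.

```python
import numpy as np


def per_class_iou(manual: np.ndarray, predicted: np.ndarray, n_classes: int = 5):
    """Frame-level IoU for each gesture class over one annotated video."""
    ious = {}
    for c in range(n_classes):
        m, p = manual == c, predicted == c
        union = np.logical_or(m, p).sum()
        # IoU is undefined for a class absent from both sequences.
        ious[c] = np.logical_and(m, p).sum() / union if union else float("nan")
    return ious


# Toy example: 10 annotated frames, gesture classes 0-4.
manual = np.array([0, 0, 1, 1, 1, 3, 3, 4, 4, 4])
predicted = np.array([0, 0, 1, 1, 3, 3, 3, 4, 4, 2])
print(per_class_iou(manual, predicted))
```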
Source journal: Journal of Robotic Surgery
CiteScore: 4.20
Self-citation rate: 8.70%
Articles published: 145
About the journal: The aim of the Journal of Robotic Surgery is to become the leading worldwide journal for publication of articles related to robotic surgery, encompassing surgical simulation and integrated imaging techniques. The journal provides a centralized, focused resource for physicians wishing to publish their experience or those wishing to avail themselves of the most up-to-date findings. The journal reports on advances in a wide range of surgical specialties, including adult and pediatric urology, general surgery, cardiac surgery, gynecology, ENT, orthopedics, and neurosurgery. The use of robotics in surgery is broad-based and will undoubtedly expand over the next decade as new technical innovations and techniques increase the applicability of its use. The journal intends to capture this trend as it develops.