{"title":"开发机器人辅助根治性前列腺切除术相识别的人工智能模型","authors":"Hideto Ueki, Munenori Uemura, Kiyoyuki Chinzei, Kosuke Takahashi, Naoto Wakita, Yasuyoshi Okamura, Kotaro Suzuki, Yukari Bando, Takuto Hara, Tomoaki Terakawa, Akihisa Yao, Jun Teishima, Koji Chiba, Hideaki Miyake","doi":"10.1111/bju.16862","DOIUrl":null,"url":null,"abstract":"ObjectivesTo develop and evaluate a convolutional neural network (CNN)‐based model for recognising surgical phases in robot‐assisted laparoscopic radical prostatectomy (RARP), with an emphasis on model interpretability and cross‐platform validation.MethodsA CNN using EfficientNet B7 was trained on video data from 75 RARP cases with the hinotori robotic system. Seven phases were annotated: bladder drop, prostate preparation, bladder neck dissection, seminal vesicle dissection, posterior dissection, apical dissection, and vesicourethral anastomosis. A total of 808 774 video frames were extracted at 1 frame/s for training and testing. Validation was performed on 25 RARP cases using the da Vinci robotic system to assess cross‐platform generalisability. Gradient‐weighted class activation mapping was used to enhance interpretability by identifying key regions of interest for phase classification.ResultsThe CNN achieved 0.90 accuracy on the hinotori test set but dropped to 0.64 on the da Vinci dataset, thus indicating cross‐platform limitations. Phase‐specific F1 scores ranged from 0.77 to 0.97, with lower performance in the phase of seminal vesicle dissection, and apical dissection. Gradient‐weighted class activation mapping visualisations revealed the model's focus on central pelvic structures rather than transient instruments, enhancing interpretability and insights into phase classification.ConclusionsThe model demonstrated high accuracy on a single robotic platform but requires further refinement for consistent cross‐platform performance. 
Interpretability techniques will foster clinical trust and integration into workflows, advancing robotic surgery applications.","PeriodicalId":8985,"journal":{"name":"BJU International","volume":"13 1","pages":""},"PeriodicalIF":4.4000,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Developing an artificial intelligence model for phase recognition in robot‐assisted radical prostatectomy\",\"authors\":\"Hideto Ueki, Munenori Uemura, Kiyoyuki Chinzei, Kosuke Takahashi, Naoto Wakita, Yasuyoshi Okamura, Kotaro Suzuki, Yukari Bando, Takuto Hara, Tomoaki Terakawa, Akihisa Yao, Jun Teishima, Koji Chiba, Hideaki Miyake\",\"doi\":\"10.1111/bju.16862\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ObjectivesTo develop and evaluate a convolutional neural network (CNN)‐based model for recognising surgical phases in robot‐assisted laparoscopic radical prostatectomy (RARP), with an emphasis on model interpretability and cross‐platform validation.MethodsA CNN using EfficientNet B7 was trained on video data from 75 RARP cases with the hinotori robotic system. Seven phases were annotated: bladder drop, prostate preparation, bladder neck dissection, seminal vesicle dissection, posterior dissection, apical dissection, and vesicourethral anastomosis. A total of 808 774 video frames were extracted at 1 frame/s for training and testing. Validation was performed on 25 RARP cases using the da Vinci robotic system to assess cross‐platform generalisability. Gradient‐weighted class activation mapping was used to enhance interpretability by identifying key regions of interest for phase classification.ResultsThe CNN achieved 0.90 accuracy on the hinotori test set but dropped to 0.64 on the da Vinci dataset, thus indicating cross‐platform limitations. 
Phase‐specific F1 scores ranged from 0.77 to 0.97, with lower performance in the phase of seminal vesicle dissection, and apical dissection. Gradient‐weighted class activation mapping visualisations revealed the model's focus on central pelvic structures rather than transient instruments, enhancing interpretability and insights into phase classification.ConclusionsThe model demonstrated high accuracy on a single robotic platform but requires further refinement for consistent cross‐platform performance. Interpretability techniques will foster clinical trust and integration into workflows, advancing robotic surgery applications.\",\"PeriodicalId\":8985,\"journal\":{\"name\":\"BJU International\",\"volume\":\"13 1\",\"pages\":\"\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2025-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BJU International\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1111/bju.16862\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"UROLOGY & NEPHROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BJU International","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1111/bju.16862","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"UROLOGY & NEPHROLOGY","Score":null,"Total":0}
Developing an artificial intelligence model for phase recognition in robot‐assisted radical prostatectomy
Objectives: To develop and evaluate a convolutional neural network (CNN)‐based model for recognising surgical phases in robot‐assisted laparoscopic radical prostatectomy (RARP), with an emphasis on model interpretability and cross‐platform validation.

Methods: A CNN using EfficientNet B7 was trained on video data from 75 RARP cases performed with the hinotori robotic system. Seven phases were annotated: bladder drop, prostate preparation, bladder neck dissection, seminal vesicle dissection, posterior dissection, apical dissection, and vesicourethral anastomosis. A total of 808 774 video frames were extracted at 1 frame/s for training and testing. Validation was performed on 25 RARP cases using the da Vinci robotic system to assess cross‐platform generalisability. Gradient‐weighted class activation mapping was used to enhance interpretability by identifying key regions of interest for phase classification.

Results: The CNN achieved 0.90 accuracy on the hinotori test set but dropped to 0.64 on the da Vinci dataset, indicating cross‐platform limitations. Phase‐specific F1 scores ranged from 0.77 to 0.97, with lower performance in the seminal vesicle dissection and apical dissection phases. Gradient‐weighted class activation mapping visualisations revealed that the model focused on central pelvic structures rather than transient instruments, enhancing interpretability and providing insights into phase classification.

Conclusions: The model demonstrated high accuracy on a single robotic platform but requires further refinement for consistent cross‐platform performance. Interpretability techniques will foster clinical trust and integration into workflows, advancing robotic surgery applications.
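The abstract reports phase-specific F1 scores over seven annotated phases. As a minimal sketch of how a one-vs-rest F1 score per phase could be computed from frame-level labels (the function name and evaluation details are illustrative assumptions, not taken from the paper):

```python
# Illustrative sketch: per-phase (one-vs-rest) F1 from frame-level labels.
# Phase names follow the seven phases annotated in the study; the
# evaluation code itself is an assumption, not the authors' implementation.

PHASES = [
    "bladder drop", "prostate preparation", "bladder neck dissection",
    "seminal vesicle dissection", "posterior dissection",
    "apical dissection", "vesicourethral anastomosis",
]

def per_phase_f1(y_true, y_pred):
    """Return {phase: F1} treating each phase as the positive class in turn."""
    scores = {}
    for phase in PHASES:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == phase and p == phase)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != phase and p == phase)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == phase and p != phase)
        denom = 2 * tp + fp + fn  # F1 = 2TP / (2TP + FP + FN)
        scores[phase] = 2 * tp / denom if denom else 0.0
    return scores
```

In practice such frame-level scores are usually computed with a library routine (e.g. scikit-learn's `f1_score` with per-class averaging disabled); the hand-rolled version above just makes the definition explicit.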
Journal introduction:
BJUI is one of the most highly respected medical journals in the world, with a truly international range of published papers and broad appeal. Every issue provides invaluable practical information in the form of original articles, reviews, comments, surgical education articles, and translational science articles in the field of urology. BJUI is organised into topical sections and published in full colour, making it easy to browse or to search for specific content.