A Stereovision-based Approach for Retrieving Variable Force Feedback in Robotic-Assisted Surgery Using Modified Inception ResNet V2 Networks

IF 3.1 | CAS Quartile 4 (Computer Science) | JCR Q2: Computer Science, Artificial Intelligence
P. V. Sabique, Ganesh Pasupathy, S. Kalaimagal, G. Shanmugasundar, V. K. Muneer
{"title":"基于立体视觉的机器人辅助手术可变力反馈检索方法(使用修改后的 Inception ResNet V2 网络","authors":"P. V. Sabique, Ganesh Pasupathy, S. Kalaimagal, G. Shanmugasundar, V. K. Muneer","doi":"10.1007/s10846-024-02100-8","DOIUrl":null,"url":null,"abstract":"<p>The surge of haptic technology has greatly impacted Robotic-assisted surgery in recent years due to its inspirational advancement in the field. Delivering tactile feedback to the surgeon has a significant role in improving the user experience in RAMIS. This work proposes a Modified inception ResNet network along with dimensionality reduction to regenerate the variable force produced during the surgical intervention. This work collects the relevant dataset from two ex vivo porcine skins and one ex vivo artificial skin for the validation of the results. The proposed framework is used to model both spatial and temporal data collected from the sensors, tissue, manipulators, and surgical tools. The evaluations are based on three distinct datasets with modest variations in tissue properties. The results of the proposed framework show an improvement of force prediction accuracy by 10.81% over RNN, 6.02% over RNN + LSTM, and 3.81% over the CNN + LSTM framework, and torque prediction accuracy by 12.41% over RNN, 5.75% over RNN + LSTM, and 3.75% over CNN + LSTM. The sensitivity study demonstrates that features such as torque (96.93%), deformation (94.02%), position (93.98%), vision (92.12%), stiffness (87.95%), tool diameter (89.24%), rotation (65.10%), and orientation (62.51%) have respective influences on the anticipated force. It was observed that the quality of the predicted force improved by 2.18% when performing feature selection and dimensionality reduction on features collected from tool, manipulator, tissue, and vision data and processing them simultaneously in all four architectures. The method has potential applications for online surgical tasks and surgeon training.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"257 1","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Stereovision-based Approach for Retrieving Variable Force Feedback in Robotic-Assisted Surgery Using Modified Inception ResNet V2 Networks\",\"authors\":\"P. V. Sabique, Ganesh Pasupathy, S. Kalaimagal, G. Shanmugasundar, V. K. Muneer\",\"doi\":\"10.1007/s10846-024-02100-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The surge of haptic technology has greatly impacted Robotic-assisted surgery in recent years due to its inspirational advancement in the field. Delivering tactile feedback to the surgeon has a significant role in improving the user experience in RAMIS. This work proposes a Modified inception ResNet network along with dimensionality reduction to regenerate the variable force produced during the surgical intervention. This work collects the relevant dataset from two ex vivo porcine skins and one ex vivo artificial skin for the validation of the results. The proposed framework is used to model both spatial and temporal data collected from the sensors, tissue, manipulators, and surgical tools. The evaluations are based on three distinct datasets with modest variations in tissue properties. 
The results of the proposed framework show an improvement of force prediction accuracy by 10.81% over RNN, 6.02% over RNN + LSTM, and 3.81% over the CNN + LSTM framework, and torque prediction accuracy by 12.41% over RNN, 5.75% over RNN + LSTM, and 3.75% over CNN + LSTM. The sensitivity study demonstrates that features such as torque (96.93%), deformation (94.02%), position (93.98%), vision (92.12%), stiffness (87.95%), tool diameter (89.24%), rotation (65.10%), and orientation (62.51%) have respective influences on the anticipated force. It was observed that the quality of the predicted force improved by 2.18% when performing feature selection and dimensionality reduction on features collected from tool, manipulator, tissue, and vision data and processing them simultaneously in all four architectures. The method has potential applications for online surgical tasks and surgeon training.</p>\",\"PeriodicalId\":54794,\"journal\":{\"name\":\"Journal of Intelligent & Robotic Systems\",\"volume\":\"257 1\",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-05-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Intelligent & Robotic Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s10846-024-02100-8\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Intelligent & Robotic Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10846-024-02100-8","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The surge of haptic technology has greatly impacted robotic-assisted surgery in recent years owing to its rapid advancement in the field. Delivering tactile feedback to the surgeon plays a significant role in improving the user experience in robot-assisted minimally invasive surgery (RAMIS). This work proposes a modified Inception ResNet V2 network, combined with dimensionality reduction, to regenerate the variable force produced during surgical intervention. The relevant datasets are collected from two ex vivo porcine skins and one artificial skin to validate the results. The proposed framework models both spatial and temporal data collected from the sensors, tissue, manipulators, and surgical tools. The evaluations are based on three distinct datasets with modest variations in tissue properties. The proposed framework improves force prediction accuracy by 10.81% over RNN, 6.02% over RNN + LSTM, and 3.81% over the CNN + LSTM framework, and torque prediction accuracy by 12.41% over RNN, 5.75% over RNN + LSTM, and 3.75% over CNN + LSTM. The sensitivity study shows that torque (96.93%), deformation (94.02%), position (93.98%), vision (92.12%), stiffness (87.95%), tool diameter (89.24%), rotation (65.10%), and orientation (62.51%) each influence the predicted force to the indicated degree. The quality of the predicted force improved by 2.18% when feature selection and dimensionality reduction were applied to the features collected from tool, manipulator, tissue, and vision data and the reduced features were processed simultaneously in all four architectures. The method has potential applications in online surgical tasks and surgeon training.
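To make the modelling pipeline concrete, the following is a minimal sketch, not the authors' implementation, of how a pretrained Inception ResNet V2 backbone could be adapted for stereovision-based force and torque regression. The input resolution, layer widths, the kinematic feature vector size, and the late-fusion scheme are assumptions introduced purely for illustration.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_force_regressor(num_kinematic_features=16):
    # Shared Inception ResNet V2 backbone applied to both stereo views.
    backbone = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet",
        input_shape=(299, 299, 3), pooling="avg")

    left = layers.Input((299, 299, 3), name="left_view")
    right = layers.Input((299, 299, 3), name="right_view")
    kin = layers.Input((num_kinematic_features,), name="kinematic_features")

    # Concatenate appearance features extracted from the two views.
    vision = layers.Concatenate()([backbone(left), backbone(right)])
    vision = layers.Dense(256, activation="relu")(vision)

    # Late fusion with low-dimensional tool/manipulator/tissue features.
    fused = layers.Concatenate()([vision, layers.Dense(32, activation="relu")(kin)])
    fused = layers.Dense(128, activation="relu")(fused)

    # Regress 3-axis force and 3-axis torque.
    force = layers.Dense(3, name="force")(fused)
    torque = layers.Dense(3, name="torque")(fused)
    return Model(inputs=[left, right, kin], outputs=[force, torque])

model = build_force_regressor()
model.compile(optimizer="adam", loss="mse")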

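Similarly, the reported 2.18% gain from feature selection and dimensionality reduction could be approximated in spirit with a standard scikit-learn pipeline; the mutual-information ranking, PCA, and the feature counts below are stand-ins, since the paper's exact reduction method is not specified in this abstract.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))   # placeholder tool/manipulator/tissue/vision feature matrix
y = rng.normal(size=1000)         # placeholder force-magnitude target

reduce_features = Pipeline([
    ("select", SelectKBest(mutual_info_regression, k=32)),  # keep features most informative about force
    ("pca", PCA(n_components=16)),                          # compress the surviving features
])
X_reduced = reduce_features.fit_transform(X, y)
print(X_reduced.shape)  # (1000, 16)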
Source Journal
Journal of Intelligent & Robotic Systems (Engineering & Technology: Robotics)
CiteScore: 7.00
Self-citation rate: 9.10%
Articles published: 219
Review time: 6 months
Journal description: The Journal of Intelligent and Robotic Systems bridges the gap between theory and practice in all areas of intelligent systems and robotics. It publishes original, peer-reviewed contributions from initial concept and theory to prototyping to final product development and commercialization. On the theoretical side, the journal features papers focusing on intelligent systems engineering, distributed intelligence systems, multi-level systems, intelligent control, multi-robot systems, cooperation and coordination of unmanned vehicle systems, etc. On the application side, the journal emphasizes autonomous systems, industrial robotic systems, multi-robot systems, aerial vehicles, mobile robot platforms, underwater robots, sensors, sensor fusion, and sensor-based control. Readers will also find papers on real applications of intelligent and robotic systems (e.g., mechatronics, manufacturing, biomedical, underwater, humanoid, mobile/legged robot and space applications, etc.).