Authors: Jiaqi Chen; Guochen Ning; Longfei Ma; Hongen Liao
Journal: IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 595-606 (Q2, Engineering, Biomedical; IF 3.8)
DOI: 10.1109/TMRB.2025.3560399
Published: 2025-04-14
URL: https://ieeexplore.ieee.org/document/10964391/
Autonomous Deformable Tissue Retraction System Based on 2-D Visual Representation and Asymmetric Reinforcement Learning for Robotic Surgery
Deformable tissue retraction is a common but time-consuming task in robotic surgery. An autonomous robotic deformable tissue retraction system has the potential to reduce surgeons' cognitive burden and let them focus on critical aspects of the surgery. However, the uncertain deformation and complex constraints of deformable tissues pose significant challenges. We propose an autonomous deformable tissue retraction framework that incorporates visual representation and learning models, along with a 7-degree-of-freedom robotic system. To extract deformation representations and learn to manipulate deformable tissues from 2-D images, we introduce a Sequential-information-based Contrastive State Representation Learning (SC-SRL) algorithm and a reinforcement learning model with asymmetric inputs and auxiliary losses. Experimental results show that the proposed framework achieved a 93.0% success rate on the tissue retraction task in a simulated environment. Furthermore, our method attains a safe retraction trajectory proportion of 92.5%, measured with a novel evaluation method based on the histogram of feature angles of the tissue particles. The proposed framework can also be deployed on a real robotic system through a sim-to-real transfer pipeline, acquire policies for related tasks, and remain robust to dynamic visual disturbances. This study paves a new path for the application of vision-based intelligent systems in surgical robotics.
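The abstract does not describe how SC-SRL is implemented. As a rough, purely illustrative sketch (not the authors' method), a contrastive state-representation objective of this family is commonly instantiated with an InfoNCE loss, where embeddings of temporally adjacent frames are treated as positive pairs and the rest of the batch as negatives; the function below assumes that setup and pre-computed embedding arrays:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss over a batch of embedding pairs (sketch).

    anchors, positives: (N, D) arrays; row i of `positives` is the
    positive match for row i of `anchors` (e.g., the embedding of a
    temporally adjacent frame), and all other rows act as negatives.
    """
    # L2-normalize so similarities are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Log-softmax over each row; the diagonal holds the matched pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls matched (sequential) embeddings together and pushes the rest apart, which is the core mechanism a sequential contrastive representation learner relies on; the actual SC-SRL loss, architecture, and positive-pair selection may differ.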
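The safety metric based on a "histogram of feature angles of the tissue particles" is likewise not specified in the abstract. One plausible, hypothetical reading, shown only to make the idea concrete, is to bin the angles of per-particle displacement vectors: a concentrated histogram indicates coherent particle motion, while a dispersed one suggests erratic stretching of the tissue. The function and its inputs below are assumptions, not the paper's definition:

```python
import numpy as np

def feature_angle_histogram(rest_pts, deformed_pts, bins=18):
    """Normalized histogram of per-particle displacement angles (sketch).

    rest_pts, deformed_pts: (N, 2) particle positions before and after a
    retraction step. Each displacement vector's angle is binned over
    [0, 2*pi); the result sums to 1.
    """
    disp = deformed_pts - rest_pts
    angles = np.arctan2(disp[:, 1], disp[:, 0]) % (2 * np.pi)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 2 * np.pi))
    return hist / hist.sum()
```

A downstream safety check could then compare the histogram's concentration (e.g., its maximum bin or entropy) against a threshold per trajectory step; how the paper actually defines "feature angles" and the safe-trajectory criterion would need to be taken from the full text.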