Murali Karnam, Marek Zelechowski, Philippe C Cattin, Georg Rauter, Nicolas Gerig
{"title":"在虚拟现实中教授用户指定的逆运动学,减少了手动引导冗余手术机器人的时间和精力。","authors":"Murali Karnam, Marek Zelechowski, Philippe C Cattin, Georg Rauter, Nicolas Gerig","doi":"10.1038/s44172-025-00357-x","DOIUrl":null,"url":null,"abstract":"<p><p>Medical robots should not collide with close by obstacles during medical procedures, such as lamps, screens, or medical personnel. Redundant robots have more degrees of freedom than needed for moving endoscopic tools during surgery and can be reshaped to avoid obstacles by moving purely in the space of these additional degrees of freedom (null space). Although state-of-the-art robots allow surgeons to hand-guide endoscopic tools, reshaping the robot in null space is not intuitive for surgeons. Here we propose a learned task space control that allows surgeons to intuitively teach preferred robot configurations (shapes) that avoid obstacles using a VR-based planner in simulation. Later during surgery, surgeons control both the endoscopic tool and robot configuration (shape) with one hand. In a user study, we found that learned task space control outperformed state-of-the-art naive task space control in all the measured performance metrics (time, effort, and user-perceived effort). 
Our solution allowed users to intuitively interact with robots in VR and reshape robots while moving tools in medical and industrial applications.</p>","PeriodicalId":72644,"journal":{"name":"Communications engineering","volume":"4 1","pages":"20"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825656/pdf/","citationCount":"0","resultStr":"{\"title\":\"User-specified inverse kinematics taught in virtual reality reduce time and effort to hand-guide redundant surgical robots.\",\"authors\":\"Murali Karnam, Marek Zelechowski, Philippe C Cattin, Georg Rauter, Nicolas Gerig\",\"doi\":\"10.1038/s44172-025-00357-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Medical robots should not collide with close by obstacles during medical procedures, such as lamps, screens, or medical personnel. Redundant robots have more degrees of freedom than needed for moving endoscopic tools during surgery and can be reshaped to avoid obstacles by moving purely in the space of these additional degrees of freedom (null space). Although state-of-the-art robots allow surgeons to hand-guide endoscopic tools, reshaping the robot in null space is not intuitive for surgeons. Here we propose a learned task space control that allows surgeons to intuitively teach preferred robot configurations (shapes) that avoid obstacles using a VR-based planner in simulation. Later during surgery, surgeons control both the endoscopic tool and robot configuration (shape) with one hand. In a user study, we found that learned task space control outperformed state-of-the-art naive task space control in all the measured performance metrics (time, effort, and user-perceived effort). 
Our solution allowed users to intuitively interact with robots in VR and reshape robots while moving tools in medical and industrial applications.</p>\",\"PeriodicalId\":72644,\"journal\":{\"name\":\"Communications engineering\",\"volume\":\"4 1\",\"pages\":\"20\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-02-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825656/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Communications engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1038/s44172-025-00357-x\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Communications engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1038/s44172-025-00357-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
User-specified inverse kinematics taught in virtual reality reduce time and effort to hand-guide redundant surgical robots.
Medical robots should not collide with nearby obstacles, such as lamps, screens, or medical personnel, during medical procedures. Redundant robots have more degrees of freedom than needed for moving endoscopic tools during surgery and can be reshaped to avoid obstacles by moving purely in the space of these additional degrees of freedom (the null space). Although state-of-the-art robots allow surgeons to hand-guide endoscopic tools, reshaping the robot in null space is not intuitive for surgeons. Here we propose a learned task space control that allows surgeons to intuitively teach preferred robot configurations (shapes) that avoid obstacles, using a VR-based planner in simulation. Later, during surgery, surgeons control both the endoscopic tool and the robot configuration (shape) with one hand. In a user study, we found that learned task space control outperformed state-of-the-art naive task space control in all measured performance metrics (time, effort, and user-perceived effort). Our solution allowed users to intuitively interact with robots in VR and to reshape robots while moving tools in medical and industrial applications.
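The null-space idea the abstract relies on can be illustrated with the classic redundancy-resolution formula, where joint velocities combine a task-space term with a motion projected into the Jacobian's null space so the tool tip is undisturbed. The sketch below is illustrative only, not the paper's method: it uses a hypothetical planar 3-link arm with made-up link lengths and configuration, and standard NumPy pseudoinverse machinery.

```python
import numpy as np

# Hypothetical planar 3R arm with a 2-D end-effector position task
# (link lengths and configuration are illustrative, not from the paper).
L = np.array([0.4, 0.3, 0.2])
q = np.array([0.5, -0.8, 1.2])

def jacobian(q):
    """2x3 position Jacobian of a planar 3R arm (analytic form)."""
    s = np.cumsum(q)  # absolute angle of each link
    # d x / d q_j = -sum_{i>=j} L_i sin(s_i);  d y / d q_j = sum_{i>=j} L_i cos(s_i)
    Jx = -np.cumsum((L * np.sin(s))[::-1])[::-1]
    Jy = np.cumsum((L * np.cos(s))[::-1])[::-1]
    return np.vstack([Jx, Jy])

J = jacobian(q)
J_pinv = np.linalg.pinv(J)      # Moore-Penrose pseudoinverse (2x3 -> 3x2)
N = np.eye(3) - J_pinv @ J      # projector onto the Jacobian's null space

x_dot = np.array([0.05, 0.0])   # desired tool-tip velocity
q0_dot = np.array([0.0, 0.0, 1.0])  # preferred "reshaping" joint motion

# Redundancy resolution: task motion plus null-space-projected shape motion.
q_dot = J_pinv @ x_dot + N @ q0_dot

# The projected term leaves the tool tip stationary: J @ (N @ q0_dot) == 0.
print(np.allclose(J @ N @ q0_dot, np.zeros(2), atol=1e-10))  # True
```

Because `N @ q0_dot` lies in the null space of `J`, the robot's shape changes (e.g. to dodge a lamp or screen) without moving the endoscopic tool, which is exactly the degree of freedom the paper's VR teaching interface exposes to the surgeon.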