{"title":"A Pneumatic Optical Soft Sensor for Fingertip Force Sensing","authors":"Le Chen, Boshen Qi, Jun Sheng","doi":"10.1109/ismr48346.2021.9661559","DOIUrl":"https://doi.org/10.1109/ismr48346.2021.9661559","url":null,"abstract":"This paper presents the design and development of a pneumatic optical soft sensor with a potential application to fingertip force sensing. To enable safe and successful interaction with delicate objects, it is important to measure contact force. In particular, prosthetic hands require force sensing at fingertips to enable prosthetic hands to apply appropriate force on objects when performing tasks in daily life. Emerging artificial skins usually feature delicate electronics requiring special packaging to survive in an unstructured environment. In this project, we present a robust soft force sensor with a low profile and high compliance. It consists of a soft silicone base, an inflatable chamber, a hyperelastic membrane, and a photo interrupter. External force applied on the inflated membrane will cause the change of light reflection inside the chamber and thus change the signal output of the photo interrupter. The working principle of the developed sensor is modeled, and experimental studies are performed to evaluate the working performance of the sensor and calibrate the measurements.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126580043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prototyping a sensorized tool wristband for objective skill assessment and feedback during training in minimally invasive surgery","authors":"A. Mariani, Matteo Conti, S. Gandah, C. G. D. Paratesi, A. Menciassi","doi":"10.1109/ismr48346.2021.9661567","DOIUrl":"https://doi.org/10.1109/ismr48346.2021.9661567","url":null,"abstract":"Skill assessment is a key component of surgical practical training. Towards an objective, automatic and cost-effective skill evaluation, this work introduces a preliminary sensorized wristband as a training add-on for standard minimally invasive surgical tools. The prototype herein presented allows to classify the presence of the tool in the camera field of view, as well as to provide feedback accordingly. A usability study on 14 non-medical participants was carried out using the da Vinci Research Kit. Results demonstrated the classification accuracy of the method and the usefulness of the feedback to minimize the time spent with the tool out of the field of view. Embedding additional sensors and testing usability on surgical residents will pave the way towards the evolution of this proof of concept to an advanced prototype to use in a real training setting.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121947820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot Force Estimation with Learned Intraoperative Correction","authors":"J. Wu, Nural Yilmaz, U. Tumerdem, P. Kazanzides","doi":"10.1109/ismr48346.2021.9661568","DOIUrl":"https://doi.org/10.1109/ismr48346.2021.9661568","url":null,"abstract":"Measurement of environment interaction forces during robotic minimally-invasive surgery would enable haptic feedback to the surgeon, thereby solving one long-standing limitation. Estimating this force from existing sensor data avoids the challenge of retrofitting systems with force sensors, but is difficult due to mechanical effects such as friction and compliance in the robot mechanism. We have previously shown that neural networks can be trained to estimate the internal robot joint torques, thereby enabling estimation of external forces on the da Vinci Research Kit (dVRK). In this work, we extend the method to estimate external Cartesian forces and torques, and also present a two-step approach to adapt to the specific surgical setup by compensating for forces due to the interactions between the instrument shaft and cannula seal and between the trocar and patient body. Experiments show that this approach provides estimates of external forces and torques within a mean root-mean-square error (RMSE) of 1.8N and 0.1Nm, respectively. Furthermore, the two-step approach can add as little as 5 minutes to the surgery setup time, with about 4 minutes to collect intraoperative training data and 1 minute to train the second-step network.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122944924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Homography-based Visual Servoing with Remote Center of Motion for Semi-autonomous Robotic Endoscope Manipulation","authors":"M. Huber, John Bason Mitchell, Ross Henry, S. Ourselin, Tom Kamiel Magda Vercauteren, C. Bergeles","doi":"10.1109/ISMR48346.2021.9661563","DOIUrl":"https://doi.org/10.1109/ISMR48346.2021.9661563","url":null,"abstract":"The dominant visual servoing approaches in Minimally Invasive Surgery (MIS) follow single points or adapt the endoscope’s field of view based on the surgical tools’ distance. These methods rely on point positions with respect to the camera frame to infer a control policy. Deviating from the dominant methods, we formulate a robotic controller that allows for image-based visual servoing that requires neither explicit tool and camera positions nor any explicit image depth information. The proposed method relies on homography-based image registration, which changes the automation paradigm from point-centric towards surgical-scene-centric approach. It simultaneously respects a programmable Remote Center of Motion (RCM). Our approach allows a surgeon to build a graph of desired views, from which, once built, views can be manually selected and automatically servoed to irrespective of robot-patient frame transformation changes. We evaluate our method on an abdominal phantom and provide an open source ROS Moveit integration for use with any serial manipulator 3. A video is provided 4.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114545883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Learning Context-Dependent Tasks from Demonstration for Tendon-Driven Surgical Robots","authors":"Yixuan Huang, Michael Bentley, Tucker Hermans, A. Kuntz","doi":"10.1109/ismr48346.2021.9661534","DOIUrl":"https://doi.org/10.1109/ismr48346.2021.9661534","url":null,"abstract":"Tendon-driven robots, a type of continuum robot, have the potential to reduce the invasiveness of surgery by enabling access to difficult-to-reach anatomical targets. In the future, the automation of surgical tasks for these robots may help reduce surgeon strain in the face of a rapidly growing population. However, directly encoding surgical tasks and their associated context for these robots is infeasible. In this work we take steps toward a system that is able to learn to successfully perform context-dependent surgical tasks by learning directly from a set of expert demonstrations. We present three models trained on the demonstrations conditioned on a vector encoding the context of the demonstration. We then use these models to plan and execute motions for the tendon-driven robot similar to the demonstrations for novel context not seen in the training set. We demonstrate the efficacy of our method on three surgery-inspired tasks.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126607853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Planning Sensing Sequences for Subsurface 3D Tumor Mapping","authors":"Brian Y. Cho, Tucker Hermans, A. Kuntz","doi":"10.1109/ismr48346.2021.9661488","DOIUrl":"https://doi.org/10.1109/ismr48346.2021.9661488","url":null,"abstract":"Surgical automation has the potential to enable increased precision and reduce the per-patient workload of overburdened human surgeons. An effective automation system must be able to sense and map subsurface anatomy, such as tumors, efficiently and accurately. In this work, we present a method that plans a sequence of sensing actions to map the 3D geometry of subsurface tumors. We leverage a sequential Bayesian Hilbert map to create a 3D probabilistic occupancy model that represents the likelihood that any given point in the anatomy is occupied by a tumor, conditioned on sensor readings. We iteratively update the map, utilizing Bayesian optimization to determine sensing poses that explore unsensed regions of anatomy and exploit the knowledge gained by previous sensing actions. We demonstrate our method’s efficiency and accuracy in three anatomical scenarios including a liver tumor scenario generated from a real patient’s CT scan. The results show that our proposed method significantly outperforms comparison methods in terms of efficiency while detecting subsurface tumors with high accuracy.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114151927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning from Demonstrations for Autonomous Soft-tissue Retraction *","authors":"Ameya Pore, E. Tagliabue, M. Piccinelli, D. Dall’Alba, A. Casals, P. Fiorini","doi":"10.1109/ismr48346.2021.9661514","DOIUrl":"https://doi.org/10.1109/ismr48346.2021.9661514","url":null,"abstract":"The current research focus in Robot-Assisted Minimally Invasive Surgery (RAMIS) is directed towards increasing the level of robot autonomy, to place surgeons in a supervisory position. Although Learning from Demonstrations (LfD) approaches are among the preferred ways for an autonomous surgical system to learn expert gestures, they require a high number of demonstrations and show poor generalization to the variable conditions of the surgical environment. In this work, we propose an LfD methodology based on Generative Adversarial Imitation Learning (GAIL) that is built on a Deep Reinforcement Learning (DRL) setting. GAIL combines generative adversarial networks to learn the distribution of expert trajectories with a DRL setting to ensure generalisation of trajectories providing human-like behaviour. We consider automation of tissue retraction, a common RAMIS task that involves soft tissues manipulation to expose a region of interest. In our proposed methodology, a small set of expert trajectories can be acquired through the da Vinci Research Kit (dVRK) and used to train the proposed LfD method inside a simulated environment. Results indicate that our methodology can accomplish the tissue retraction task with human-like behaviour while being more sample-efficient than the baseline DRL method. Towards the end, we show that the learnt policies can be successfully transferred to the real robotic platform and deployed for soft tissue retraction on a synthetic phantom.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124776690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Bench to Bedside: The First Live Robotic Surgery on the dVRK to Enable Remote Telesurgery with Motion Scaling","authors":"Florian Richter, E. Funk, Won Seo Park, R. Orosco, Michael C. Yip","doi":"10.1109/ismr48346.2021.9661536","DOIUrl":"https://doi.org/10.1109/ismr48346.2021.9661536","url":null,"abstract":"Innovations from surgical robotic research rarely translates to live surgery due to the significant difference between the lab and a live environment. Live environments require considerations that are often overlooked during early stages of research such as surgical staff, surgical procedure, and the challenges of working with live tissue. One such example is the da Vinci Research Kit (dVRK) which is used by over 40 robotics research groups and represents an open-sourced version of the da Vinci ® Surgical System. Despite dVRK being available for nearly a decade and the ideal candidate for translating research to practice on over 5,000 da Vinci ® Systems used in hospitals around the world, not one live surgery has been conducted with it. In this paper, we address the challenges, considerations, and solutions for translating surgical robotic research from bench-to-bedside. This is explained from the perspective of a remote telesurgery scenario where motion scaling solutions previously experimented in a lab setting are translated to a live pig surgery. This study presents results from the first ever use of a dVRK in a live animal and discusses how the surgical robotics community can approach translating their research to practice.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124382778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous tissue retraction with a biomechanically informed logic based framework","authors":"D. Meli, E. Tagliabue, D. Dall’Alba, P. Fiorini","doi":"10.1109/ismr48346.2021.9661573","DOIUrl":"https://doi.org/10.1109/ismr48346.2021.9661573","url":null,"abstract":"Autonomy in robot-assisted surgery is essential to reduce surgeons’ cognitive load and eventually improve the overall surgical outcome. A key requirement for autonomy in a safety-critical scenario as surgery lies in the generation of interpretable plans that rely on expert knowledge. Moreover, the Autonomous Robotic Surgical System (ARSS) must be able to reason on the dynamic and unpredictable anatomical environment, and quickly adapt the surgical plan in case of unexpected situations. In this paper, we present a modular Framework for Robot-Assisted Surgery (FRAS) in deformable anatomical environments. Our framework integrates a logic module for task-level interpretable reasoning, a biomechanical simulation that complements data from real sensors, and a situation awareness module for context interpretation. The framework performance is evaluated on simulated soft tissue retraction, a common surgical task to remove the tissue hiding a region of interest. Results show that the framework has the adaptability required to successfully accomplish the task, handling dynamic environmental conditions and possible failures, while guaranteeing the computational efficiency required in a real surgical scenario. The framework is made publicly available.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125201845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Robotically Automated Femoral Vascular Access","authors":"N. Zevallos, Evan Harber, Abhimanyu, K. Patel, Yizhu Gu, Kenny Sladick, F. Guyette, L. Weiss, M. Pinsky, H. Gómez, J. Galeotti, H. Choset","doi":"10.1109/ismr48346.2021.9661560","DOIUrl":"https://doi.org/10.1109/ismr48346.2021.9661560","url":null,"abstract":"Advanced resuscitative technologies, such as Extra Corporeal Membrane Oxygenation (ECMO) cannulation or Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA), are technically difficult even for skilled medical personnel. This paper describes the core technologies that comprise a teleoperated system capable of granting femoral vascular access, an essential step in these procedures, and a significant roadblock in their broader use in the field. These technologies include a kinematic manipulator, various sensing modalities, and a user interface. In addition, we evaluate our system on a surgical phantom as well as in-vivo porcine experiments. To the best of our knowledge, these resulted in the first robot-assisted arterial catheterizations, a significant step towards our eventual goal of automatic catheter insertion through the Seldinger technique.","PeriodicalId":405817,"journal":{"name":"2021 International Symposium on Medical Robotics (ISMR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124981285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}