{"title":"N-mirror Robot System for Laser Surgery: A Simulation Study","authors":"Guangshen Ma, Weston A. Ross, P. Codd","doi":"10.1109/ISMR57123.2023.10130180","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130180","url":null,"abstract":"Automated laser surgery with sensor fusion is an important problem in medical robotics since it requires precise control of the mirrors used to steer the laser system. The propagation of the laser beam should satisfy the geometric constraints of the surgical site, but the relation between the number of mirrors and the design of the optical path remains an unsolved problem. Furthermore, different types of surgery (e.g. endoscopic vs. open surgery) can require different optical designs with varying numbers of mirrors to successfully steer the laser beam to the tissue. A generalized method for controlling the laser beam in such systems remains an open research question. This paper proposes an analytical model for a laser-based surgical system with an arbitrary number of mirrors, referred to as an '$N$-mirror' robotic system. The system consists of three laser inputs that transmit laser beams to the tissue surface through $N$ mirrors, which can achieve surface scanning, tissue resection and tissue classification separately. For sensor information alignment, the forward and inverse kinematics of the $N$-mirror robot system are derived and used to calculate the mirror angles for laser steering at the target surface. We propose a system calibration method to determine the laser input configuration required by the kinematic model. We conduct simulation experiments for a simulated 3-mirror system of an actual robotic laser platform and a 6-mirror simulated robot, both with three laser inputs. 
The simulation experiments for system calibration show a maximum position offset smaller than 0.127 mm and a maximum angle offset smaller than 0.05° for the optimal laser input predictions.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131070391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vision-Based Shared Control for Telemanipulated Nasopharyngeal Swab Sampling","authors":"Stephan Andreas Schwarz, Ulrike Thomas","doi":"10.1109/ISMR57123.2023.10130223","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130223","url":null,"abstract":"Telemanipulation enables people to perform tasks in dangerous environments without exposing them to any risk. This also applies to medical applications. Many infections, such as the SARS-CoV-2 virus, spread through the air and can infect staff while, e.g., taking samples. This paper proposes a shared control algorithm for a telemanipulation system that enables medical staff to easily perform nasopharyngeal swab sampling from a safe distance while maintaining the safety of the patient. We propose a vision-based virtual fixture approach to guide the operator during the approach towards the nostril. Force feedback and velocity scaling are used to improve dexterity and safety during sampling. We further prove the stability of the system by introducing an energy tank that ensures passivity at all times. Finally, we test the approach on a real telemanipulation setup and demonstrate the improved usability resulting from the guidance of the shared control.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128142878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Closed-form Kinematic Model and Workspace Characterization for Magnetic Ball Chain Robots","authors":"G. Pittiglio, M. Mencattelli, P. Dupont","doi":"10.1109/ISMR57123.2023.10130219","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130219","url":null,"abstract":"Magnetic ball chains are well suited to serve as the steerable tips of endoluminal robots. While it has been demonstrated that these robots produce a larger reachable workspace than magnetic soft continuum robots designed using either distributed or lumped magnetic material, here we investigate the orientational capabilities of these robots. To increase the range of orientations that can be produced at each point in the workspace, we introduce a comparatively-stiff outer sheath from which the steerable ball chain is extended. We present an energy-based kinematic model and also derive an approximate expression for the range of achievable orientations at each point in the workspace. Experiments are used to validate these results.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125247116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experimental Trials with a Shared Autonomy Controller Framework and the da Vinci Research Kit: Pattern Cutting Tasks using Thin Elastic Materials","authors":"Paramjit Singh Baweja, R. Gondokaryono, L. Kahrs","doi":"10.1109/ISMR57123.2023.10130201","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130201","url":null,"abstract":"A technical challenge in robotic soft material cutting is to avoid large local deformations that result in inaccuracies or failure of the task. Additionally, reducing procedure time and reducing the human errors that arise from fatigue and monotony are two of the most anticipated advantages of using robots to execute repetitive subtasks in minimally invasive surgery. In this paper, we evaluate pattern cutting tasks on 2D elastic materials with shared control using the da Vinci Research Kit (dVRK). For this purpose, we developed a shared autonomy motion generator framework for pattern cutting. The framework registers user-defined Cartesian positions, creates smooth splines, interpolates the Cartesian positions, and generates a trajectory with Cartesian and joint constraints. While a pre-planned trajectory is being executed, the user may provide Cartesian offsets to modify the trajectory. We repeatedly cut shapes in three materials of different elasticity. Our shared control method achieved a 100% success rate on a circular cutting task in a sheet of gauze, with the user input compensating for deformations due to tearing. Task completion time for these experiments was 86 seconds (median) / 88 seconds (mean). Median and mean errors were 3.1 mm and 3 mm, respectively. 
Our work improves on the success rate and completion time of previously published pattern cutting tasks.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117006690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Koopman Operator-based Extended Kalman Filter for Cosserat Rod Wrench Estimation","authors":"Lingyun Zeng, S. Sadati, C. Bergeles","doi":"10.1109/ISMR57123.2023.10130210","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130210","url":null,"abstract":"This paper proposes an observer-based approach for estimating the wrench (force and moment) acting on a 3D elastic rod, using pose states (i.e. robot shape) along the rod as input/feedback. First, the static rod is treated as a dynamical system whose states evolve along the spatial dimension, and Koopman operator theory is adopted to derive an explicit discrete-arclength model for the rod. Then, an Extended Kalman Filter is applied to the derived model to estimate wrench states along the rod. Static balance constraints between pose and wrench are enforced to improve force estimation performance. The developed model and wrench estimation approach are evaluated through representative numerical simulations using a single rod. In these examples, results show average tip force and moment estimation errors of 0.19 N (10.47%) and 4.52 mNm (2.25%), with maxima of 0.73 N (31.90%) and 8.28 mNm (7.36%), respectively. 
Compared to the state of the art in comparable test cases, the proposed algorithm obtains a slightly lower average tip moment estimation error (2.6% vs. 2.7%) and a higher force estimation error (4.2% vs. 2.2%).","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115671909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic Optical Coherence Tomography of Human Subjects with Posture-Invariant Head and Eye Alignment in Six Degrees of Freedom","authors":"M. Draelos, Pablo Ortiz, A. Narawane, R. McNabb, A. Kuo, J. Izatt","doi":"10.1109/ISMR57123.2023.10130250","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130250","url":null,"abstract":"Ophthalmic optical coherence tomography (OCT) has achieved remarkable clinical success but remains sequestered in ophthalmology specialty offices. Recently introduced robotic OCT systems seek to expand patient access but fall short of their full potential due to significant imaging workspace and motion planning restrictions. Here, we present a next-generation robotic OCT system capable of imaging in any head orientation or posture that is mechanically reachable. This system overcomes prior restrictions by eliminating fixed-base tracking components, extending robot reach, and planning alignment in six degrees of freedom. With this robotic system, we show repeatable subject imaging independent of posture (standing, seated, reclined, and supine) under widely varying head orientations for multiple human subjects. For each subject, we obtained a consistent view of the retina, including the fovea, retinal vasculature, and edge of the optic nerve head. 
We believe this robotic approach can extend OCT as an eye disease screening, diagnosis, and monitoring tool to previously unreached patient populations.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121792915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An abdominal phantom with instrument tracking for laparoscopic training","authors":"Haochen Wei, C. C. Chen, P. Kazanzides","doi":"10.1109/ISMR57123.2023.10130194","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130194","url":null,"abstract":"We developed an abdominal phantom with an embedded stereo camera for tracking multiple hand-held instruments inserted through entry ports. This system can be used for training laparoscopic surgeons, as well as for training bedside assistants in robotic surgery. We present the computer vision methods used to track multiple instruments in real time, with a system evaluation that shows frame rates of 26.6 fps for a 672×376 image and 11.6 fps for a 1280×780 image and corresponding latencies of 38 ms and 87 ms, respectively, when tested on a portable PC platform with an 11th Gen Intel CPU running at 2.8 GHz. The mean Euclidean distance error of the instrument tracking is 2.0 mm in the 672p case and 2.8 mm in the 1280p case. Additionally, the tracking information drives virtual instruments in a simulated environment, which generates improved visualizations of the surgical scene, such as a top-down view and/or a “laser” virtual extension of the instrument. We perform a user study with 10 novice subjects to compare these improved visualizations to the baseline case (only endoscope view) and the results indicate that the combined top-down view and laser extension enhancements provide a statistically significant performance improvement. 
In the future, the simulator could also improve the (visual) realism of the training platform and could be part of a larger system that enables simultaneous training (and skill assessment) of multiple members of a surgical team, such as the surgeon and first assistant in da Vinci robotic surgery.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127055332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In Situ Flexible Needle Adjustment Towards MRI-Guided Spinal Injections Based on Finite Element Simulation","authors":"Yanzhou Wang, Yangsheng Xu, Ka-Wai Kwok, I. Iordachita","doi":"10.1109/ISMR57123.2023.10130218","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130218","url":null,"abstract":"This paper investigates the possibility of robotically performing in situ needle manipulations to correct the needle tip position in the setting of robot-assisted, MRI-guided spinal injections, where real-time MRI images cannot be effectively used to guide the needle. Open-loop control of the needle tip is derived from finite element simulation, and the proposed method is tested with ex vivo animal muscle tissues and validated by cone beam computed tomography. Preliminary results have shown promise of performing needle tip correction in situ to improve needle insertion accuracy when real-time feedback is not readily available.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"67 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131490782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smoothness Constrained Curiosity Driven Multicamera Trajectory Optimization for Robot-Assisted Minimally Invasive Surgery","authors":"Divas Subedi, Wenfan Jiang, Ramisa Tahsin Rahman, Heidi Zhang, Kevin Huang, Yun-Hsuan Su","doi":"10.1109/ISMR57123.2023.10130237","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130237","url":null,"abstract":"This paper presents a novel, curiosity driven camera positioning algorithm for multicamera systems in robot-assisted minimally invasive laparoscopic procedures. The work here extends the authors' prior studies in curiosity driven movement algorithms by introducing a new jerk-minimization term in the extrinsic curiosity reward function. Three basic and three curiosity driven movement baseline algorithms are comparatively evaluated against the novel motion-smoothing approach in both visual and motion metrics; the latter are analyzed with both time- and frequency-domain scores. All tests were performed on an identical laparoscopic simulation, with dynamic tissue surface, simulated breathing motion, and virtual tool-tissue interactions incorporated. Multicamera systems can be inserted through a single trocar and be magnetically anchored or maneuvered. Such systems can enable enhanced visual feedback and sensing of the surgical cavity, with multiple simultaneous views affording 3D reconstruction. Results of the study presented here are promising and show that the modified curiosity driven algorithm does indeed reduce camera jerk at a meager cost to visual and reconstructability metrics. 
It is hypothesized that reduction in overall camera motion jerk, while still maintaining 3D reconstructability quality, is a desirable characteristic in teleoperated laparoscopic procedures.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134311421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Concentric Tube Robot Optimization and Path Planning for Epilepsy Surgeries","authors":"Zhiling Zou, J. Burgner-Kahrs, T. Looi, James M. Drake","doi":"10.1109/ISMR57123.2023.10130244","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130244","url":null,"abstract":"Although robot-assisted minimally invasive surgery (MIS) has been widely investigated in recent years, limited access to the surgical sites in children's small anatomic cavities with current surgical tools is still a major concern in pediatric neurosurgery. Concentric tube robots (CTR), a class of continuum robot with flexible backbones and small diameters, are a potential solution to this problem. To demonstrate CTRs' use in epilepsy surgeries, this work proposes an evolutionary-based optimization framework, which can be used to find optimal patient-specific and procedure-specific CTR design parameters and surgical paths subject to anatomical constraints. Three sets of real patient data are used to evaluate this framework, and the validity of the generated surgical plans is verified by neurosurgeons, the platform's primary users.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125681331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}