{"title":"Preliminary theoretical considerations on the stiffness characteristics of a tensegrity joint for the use in dynamic orthoses","authors":"Leon Schaeffer, David Herrmann, Thomas Schratzenstaller, Sebastian Dendorfer, Valter Bohm","doi":"10.1142/s2424905x23400081","DOIUrl":"https://doi.org/10.1142/s2424905x23400081","url":null,"abstract":"","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"223 21","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138996928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical Fiber-Based Needle Shape Sensing in Real Tissue: Single Core vs. Multicore Approaches","authors":"Dimitri A Lezcano, Yernar Zhetpissov, Alexandra Cheng, Jin Seob Kim, Iulian I Iordachita","doi":"10.1142/s2424905x23500046","DOIUrl":"https://doi.org/10.1142/s2424905x23500046","url":null,"abstract":"Flexible needle insertion procedures are common in minimally-invasive surgeries for diagnosing and treating prostate cancer. Bevel-tip needles give physicians the capability to steer the needle during long insertions to avoid vital anatomical structures in the patient and reduce post-operative patient discomfort. To provide needle placement feedback to the physician, sensors are embedded into needles to determine the real-time 3D shape of the needle during operation without needing to visualize the needle intra-operatively. Through extensive research in fiber optics, a plethora of bio-compatible, MRI-compatible optical shape sensors have been developed to provide real-time shape feedback, such as single-core and multicore fiber Bragg gratings. In this paper, we directly compare single-core fiber-based and multicore fiber-based needle shape-sensing through identically constructed, four-active-area sensorized bevel-tip needles inserted into phantom and ex-vivo tissue on the same experimental platform. We found that for shape-sensing in phantom tissue, the two needles performed comparably, with no statistically significant difference (p = 0.164 > 0.05), but in ex-vivo real tissue, the single-core fiber sensorized needle significantly outperformed the multicore fiber configuration (p = 0.0005 < 0.05). This paper also presents the experimental platform and method for directly comparing these optical shape sensors for the needle shape-sensing task, and provides direction, insight, and required considerations for future work on constructively optimizing sensorized needles.","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"184 1‐6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135775734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot-Assisted Vascular Shunt Insertion with the dVRK Surgical Robot","authors":"Karthik Dharmarajan, Will Panitch, Baiyu Shi, Huang Huang, Lawrence Yunliang Chen, Masoud Moghani, Qinxi Yu, Kush Hari, Thomas Low, Danyal Fer, Animesh Garg, Ken Goldberg","doi":"10.1142/s2424905x23400068","DOIUrl":"https://doi.org/10.1142/s2424905x23400068","url":null,"abstract":"","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"89 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135868971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot Learning Incorporating Human Interventions in the Real World for Autonomous Surgical Endoscopic Camera Control","authors":"Yafei Ou, Sadra Zargarzadeh, Mahdi Tavakoli","doi":"10.1142/s2424905x23400044","DOIUrl":"https://doi.org/10.1142/s2424905x23400044","url":null,"abstract":"Recent studies in surgical robotics have focused on automating common surgical subtasks such as grasping and manipulation using deep reinforcement learning (DRL). In this work, we consider surgical endoscopic camera control for object tracking – e.g., using the endoscopic camera manipulator (ECM) from the da Vinci Research Kit (dVRK) (Intuitive Inc., Sunnyvale, CA, USA) – as a typical surgical robot learning task. A DRL policy for controlling the robot joint space movements is first trained in a simulation environment and then continues learning in the real world. To speed up training and avoid significant failures (in this case, losing view of the object), human interventions are incorporated into the training process, and regular DRL is combined with generative adversarial imitation learning (GAIL) to encourage imitation of human behaviors. Experiments show that an average reward of 159.8 can be achieved within 1,000 steps, compared to only 121.8 without human interventions, and the view of the moving object is lost only twice across 3 training trials. These results show that human interventions can improve learning speed and significantly reduce failures during the training process.","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135918007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Detection of Out-of-body Frames in Surgical Videos for Privacy Protection Using Self-supervised Learning and Minimal Labels","authors":"Ziheng Wang, Xi Liu, Conor Perreault, Anthony Jarc","doi":"10.1142/s2424905x23500022","DOIUrl":"https://doi.org/10.1142/s2424905x23500022","url":null,"abstract":"Endoscopic video recordings are widely used in minimally invasive robot-assisted surgery, but when the endoscope is outside the patient’s body, it can capture irrelevant segments that may contain sensitive information. To address this, we propose a framework that accurately detects out-of-body frames in surgical videos by leveraging self-supervision with minimal data labels. We use a massive amount of unlabeled endoscopic images to learn meaningful representations in a self-supervised manner. Our approach, which involves pre-training on an auxiliary task and fine-tuning with limited supervision, outperforms previous methods for detecting out-of-body frames in surgical videos captured from da Vinci X and Xi surgical systems. The average F1 scores range from [Formula: see text] to [Formula: see text]. Remarkably, using only [Formula: see text] of the training labels, our approach still maintains an average F1 score above 97, outperforming fully-supervised methods with [Formula: see text] fewer labels. These results demonstrate the potential of our framework to facilitate the safe handling of surgical video recordings and enhance data privacy protection in minimally invasive surgery.","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"330 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136265518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teleoperated and Automated Control of a Robotic Tool for Targeted Prostate Biopsy.","authors":"Blayton Padasdao, Samuel Lafreniere, Mahsa Rabiei, Zolboo Batsaikhan, Bardia Konh","doi":"10.1142/s2424905x23400020","DOIUrl":"10.1142/s2424905x23400020","url":null,"abstract":"<p>This work presents a robotic tool with bidirectional manipulation and control capabilities for targeted prostate biopsy interventions. Targeted prostate biopsy is an effective image-guided technique that results in detection of significant cancer with fewer cores and fewer unnecessary biopsies compared to systematic biopsy. The robotic tool comprises a compliant flexure section fabricated on a nitinol tube, which enables bidirectional bending via actuation of two internal tendons, and a biopsy mechanism for extraction of tissue samples. The kinematic and static models of the compliant flexure section, as well as teleoperated and automated control of the robotic tool, are presented and validated with experiments. It was shown that the controller can drive the tip of the robotic tool to follow sinusoidal set-point positions with reasonable accuracy in air and inside a phantom tissue. Finally, the capability of the robotic tool to bend, reach targeted positions inside a phantom tissue, and extract a biopsy sample is evaluated.</p>","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"8 1 & 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10513146/pdf/nihms-1878856.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41164457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Author Index Volume 7 (2022)","authors":"","doi":"10.1142/s2424905x2299001x","DOIUrl":"https://doi.org/10.1142/s2424905x2299001x","url":null,"abstract":"","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48445321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Determining the Significant Kinematic Features for Characterizing Stress during Surgical Tasks Using Spatial Attention.","authors":"Yi Zheng, Grey Leonard, Herbert Zeh, Ann Majewicz Fey","doi":"10.1142/s2424905x22410069","DOIUrl":"10.1142/s2424905x22410069","url":null,"abstract":"<p>It has been shown that intraoperative stress can have a negative effect on surgeons' surgical skills during laparoscopic procedures. For novice surgeons, stressful conditions can lead to significantly higher velocity, acceleration, and jerk of the surgical instrument tips, resulting in faster but less smooth movements. However, it is still not clear which of these kinematic features (velocity, acceleration, or jerk) is the best marker for distinguishing normal and stressed conditions. Therefore, to find the kinematic feature most affected by intraoperative stress, we implemented a spatial attention-based Long Short-Term Memory (LSTM) classifier. In a prior IRB-approved experiment, we collected data from medical students performing an extended peg transfer task who were randomized into a control group and a group performing the task under external psychological stresses. In our prior work, we obtained "representative" normal or stressed movements from this dataset using kinematic data as the input. In this study, a spatial attention mechanism is used to describe the contribution of each kinematic feature to the classification of normal/stressed movements. We tested our classifier under Leave-One-User-Out (LOUO) cross-validation, and the classifier reached an overall accuracy of 77.11% for classifying "representative" normal and stressed movements using kinematic features as the input. More importantly, we also studied the spatial attention extracted from the proposed classifier. Velocity and acceleration on both sides had significantly higher attention for classifying a normal movement (<i>p</i> <= 0.0001); velocity (<i>p</i> <= 0.015) and jerk (<i>p</i> <= 0.001) on the non-dominant hand had significantly higher attention for classifying a stressed movement. It is worth noting that the attention on jerk for the non-dominant hand showed the largest increase when moving from describing normal movements to stressed movements (<i>p</i> < 0.0001). In general, we found that jerk on the non-dominant hand side can be used to characterize the stressed movements of novice surgeons most effectively.</p>","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"7 2-3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10289589/pdf/nihms-1903565.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9706151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a 6-DoF Parallel Robotic Platform for MRI Applications.","authors":"Mishek Musa, Saikat Sengupta, Yue Chen","doi":"10.1142/s2424905x22410057","DOIUrl":"10.1142/s2424905x22410057","url":null,"abstract":"<p>In this work, the design, analysis, and characterization of a parallel robotic motion-generation platform with 6 degrees of freedom (DoF) for magnetic resonance imaging (MRI) applications are presented. The motivation for the development of this robot is the need for a robotic platform able to produce accurate 6-DoF motion inside the MRI bore to serve as the ground truth for motion modeling; other applications include manipulation of interventional tools such as biopsy and ablation needles and ultrasound probes for therapy and neuromodulation under MRI guidance. The robot comprises six pneumatic cylinder actuators controlled via a robust sliding mode controller. Tracking experiments with the pneumatic actuator indicate that the system is able to achieve an average error of 0.69 ± 0.14 mm and 0.67 ± 0.40 mm for step signal tracking and sinusoidal signal tracking, respectively. To demonstrate the feasibility and potential of using the proposed robot for minimally invasive procedures, a phantom experiment was performed in the benchtop environment, which showed a mean positional error of 1.20 ± 0.43 mm and a mean orientational error of 1.09 ± 0.57°. Experiments conducted in a 3T whole-body human MRI scanner indicate that the robot is MRI-compatible and capable of achieving a positional error of 1.68 ± 0.31 mm and an orientational error of 1.51 ± 0.32° inside the scanner. This study demonstrates the potential of this device to enable accurate 6-DoF motions in the MRI environment.</p>","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"7 2-3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10445425/pdf/nihms-1918436.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10104453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Author Index Volume 6 (2021)","authors":"","doi":"10.1142/s2424905x21990014","DOIUrl":"https://doi.org/10.1142/s2424905x21990014","url":null,"abstract":"","PeriodicalId":73821,"journal":{"name":"Journal of medical robotics research","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42268753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}