{"title":"Optimized Impedance Control of a Lightweight Gait Rehabilitation Exoskeleton Based on Accurate Knee Joint Torque Estimation","authors":"Wei Meng;Zunmei Tian;Chang Zhu;Qingsong Ai;Quan Liu","doi":"10.1109/TMRB.2024.3464671","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464671","url":null,"abstract":"In recent years, with the increasing problem of an aging population, there has been a significant increase in the number of stroke patients presenting with motor dysfunction of the lower limbs. In this study, a knee exoskeleton rehabilitation robot driven by a quasi-direct driver actuator is designed. The torque generation model is constructed based on the TCN-LSTM hybrid neural network, and the knee joint torque is generated by sEMG and angle signal. A joint attention mechanism is introduced to enhance the accuracy of torque generation model. The impedance control parameters are adaptively adjusted in accordance with the joint torque. The experimental results demonstrate that the TCN-LSTM hybrid neural network is capable of effectively estimating torque, the mean MAE and CC of the proposed model are 1.141Nm and 93.7%, respectively. The optimized impedance control can optimize the initial value of the impedance parameter, which reduced the torque error by 5.54% and 50.64% at uphill tasks and walking task, respectively, and adaptively adjust the impedance parameter to ensure the coordination of the gait rehabilitation and the friendly human-robot interaction.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Lightweight Powered Elbow Exoskeleton for Manual Handling Tasks","authors":"Daniel Colley;Collin D. Bowersock;Zachary F. Lerner","doi":"10.1109/TMRB.2024.3464690","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464690","url":null,"abstract":"This study introduces a novel lightweight elbow joint exoskeleton designed to enhance the safety and efficiency of industrial workers engaged in manual handling tasks. Our design leveraged a Bowden cable transmission system and a practical control strategy utilizing instrumented gloves to deliver reactive bi-directional support for dynamic box lifting and pressing activities. The primary focus of this work was to (1) to present an engineering validation analysis and (2) assess the exoskeleton’s impact on reducing muscle activity, increasing endurance, and maintaining overall user comfort during upper-extremity lifting or carrying tasks. We observed significant and consistent reductions in muscle activity and an increase in endurance (e.g., 2.4x more repetitions) during box lifting tasks, without compromising user comfort. These findings provide promising evidence of the exoskeleton’s effectiveness and represent a crucial first step working towards demonstrating efficacy in real-world workplace environments.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coaching a Robotic Sonographer: Learning Robotic Ultrasound With Sparse Expert’s Feedback","authors":"Deepak Raina;Mythra V. Balakuntala;Byung Wook Kim;Juan Wachs;Richard Voyles","doi":"10.1109/TMRB.2024.3464698","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464698","url":null,"abstract":"Ultrasound is widely employed for clinical intervention and diagnosis, due to its advantages of offering non-invasive, radiation-free, and real-time imaging. However, the accessibility of this dexterous procedure is limited due to the substantial training and expertise required of operators. The robotic ultrasound (RUS) offers a viable solution to address this limitation; nonetheless, achieving human-level proficiency remains challenging. Learning from demonstrations (LfD) methods have been explored in RUS, which learns the policy prior from a dataset of offline demonstrations to encode the mental model of the expert sonographer. However, active engagement of experts, i.e., Coaching, during the training of RUS has not been explored thus far. Coaching is known for enhancing efficiency and performance in human training. This paper proposes a coaching framework for RUS to amplify its performance. The framework combines DRL (self-supervised practice) with sparse expert’s feedback through coaching. The DRL employs an off-policy Soft Actor-Critic (SAC) network, with a reward based on image quality rating. The coaching by experts is modeled as a Partially Observable Markov Decision Process (POMDP), which updates the policy parameters based on the correction by the expert. The validation study on phantoms showed that coaching increases the learning rate by 25% and the number of high-quality image acquisition by 74.5%.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SlicerROS2: A Research and Development Module for Image-Guided Robotic Interventions","authors":"Laura Connolly;Aravind S. Kumar;Kapi Ketan Mehta;Lidia Al-Zogbi;Peter Kazanzides;Parvin Mousavi;Gabor Fichtinger;Axel Krieger;Junichi Tokuda;Russell H. Taylor;Simon Leonard;Anton Deguet","doi":"10.1109/TMRB.2024.3464683","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464683","url":null,"abstract":"Image-guided robotic interventions involve the use of medical imaging in tandem with robotics. SlicerROS2 is a software module that combines 3D Slicer and robot operating system (ROS) in pursuit of a standard integration approach for medical robotics research. The first release of SlicerROS2 demonstrated the feasibility of using the C++ API from 3D Slicer and ROS to load and visualize robots in real time. Since this initial release, we’ve rewritten and redesigned the module to offer greater modularity, access to low-level features, access to 3D Slicer’s Python API, and better data transfer protocols. In this paper, we introduce this new design as well as four applications that leverage the core functionalities of SlicerROS2 in realistic image-guided robotics scenarios.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast OCT-Based Needle Tracking for Retinal Microsurgery Using Dynamic Spiral Scanning","authors":"Pengwei Xu;Mouloud Ourak;Gianni Borghesan;Emmanuel Vander Poorten","doi":"10.1109/TMRB.2024.3464693","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464693","url":null,"abstract":"Retinal microsurgery is crucial for treating various ocular diseases, but challenging due to the structure size, physiological tremor and limited depth perception. This study aims to develop an innovative real-time needle tracking system that utilizes only a small amount of Optical Coherence Tomography (OCT) A-scans. We introduce a spiral scanning pattern, that is dynamically updated to efficiently capture the needle tip and the retina area with 2000 A-scans. An imaging pipeline is proposed that initiates with an initial Region of Interest (ROI) identification, followed by image segmentation, 3D reconstruction, and needle pose estimation. The ROI is dynamically adjusted to keep the needle tip centrally within the spiral scan, facilitating tracking at clinically relevant speeds. Preliminary testing on phantom eye models demonstrated that our system can maintain an average tracking error of 0.04 mm in spatial coordinates and an error of 0.06 mm in estimating the distance between the needle tip and the retina. These results suggest the system’s potential to enhance surgical outcomes by providing surgeons with improved depth perception and precise, real-time feedback. By efficiently utilizing spirally sampled OCT data, this system sets the groundwork for future integrations of real-time 4D imaging and physiological motion detection capabilities.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Scorpion-Inspired 5-DOF Miniature Remote Actuation Robotic Endoscope for Minimally Invasive Surgery","authors":"Jixiu Li;Truman Cheng;Wai Shing Chan;Zixiao Chen;Yehui Li;Calvin Sze Hang Ng;Philip Wai Yan Chiu;Zheng Li","doi":"10.1109/TMRB.2024.3464114","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464114","url":null,"abstract":"Remote Actuation Mechanisms (RAMs) play a vital role in minimally invasive surgery (MIS) by providing motion capabilities within limited spaces. This paper first focused on analyzing commonly employed RAMs to understand their strengths and limitations. Then, drawing inspiration from bionics and the biological structure of scorpions, we proposed a novel approach by integrating three RAMs-a magnet pair, a torque coil, and a soft bellow-to create a 5-degree-of-freedom (5-DOF) miniature remote actuation robot. In the design phase, we established the robot’s parameters using the magnetic dipole model and related constraints. A functional prototype of the robot, along with an external controller and user interface, was fabricated and assembled. Experimental investigations demonstrated motion performance across the 5 DOF, validating the robot’s feasibility. To assess the practicality of the system, the interaction interface was evaluated under controlled laboratory conditions and through a cadaver test. In conclusion, our innovative approach combines multiple RAMs into a 5-DOF remote actuation robot. Comprehensive tests validated its motion capabilities and highlighted its potential to advance MIS procedures.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advancements and Challenges in the Development of Robotic Lower Limb Prostheses: A Systematic Review","authors":"Ilaria Fagioli;Alessandro Mazzarini;Chiara Livolsi;Emanuele Gruppioni;Nicola Vitiello;Simona Crea;Emilio Trigili","doi":"10.1109/TMRB.2024.3464126","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464126","url":null,"abstract":"Lower limb prosthetics, essential for restoring mobility in individuals with limb loss, have witnessed significant advancements in recent years. This systematic review reports the recent research advancements in the field of semi-active and active lower limb prostheses. The review focuses on the mechatronic features of the devices, the sensing and control strategies, and the performance verification with end-users. A total of 53 prosthetic prototypes were identified and analyzed, including 16 knee-ankle prostheses, 18 knee prostheses, and 19 ankle prostheses. The review highlights some of the open challenges in the field of prosthetic research.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10684266","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hands Collaboration Evaluation for Surgical Skills Assessment: An Information Theoretical Approach","authors":"Abed Soleymani;Mahdi Tavakoli;Farzad Aghazadeh;Yafei Ou;Hossein Rouhani;Bin Zheng;Xingyu Li","doi":"10.1109/TMRB.2024.3464110","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464110","url":null,"abstract":"Bimanual tasks, where the brain must simultaneously control and plan the movements of both hands, such as needle passing and tissue cutting, commonly exist in surgeries, e.g., robot-assisted minimally invasive surgery. In this study, we present a novel approach for quantifying the quality of hands coordination and correspondence in bimanual tasks by utilizing information theory concepts to build a mathematical framework for measuring the collaboration strength between the two hands. The introduced method makes no assumption about the dynamics and couplings within the robotic platform, executive task, or human motor control. We implemented the proposed approach on MEELS and JIGSAWS datasets, corresponding to conventional minimally invasive surgery (MIS) and robot-assisted MIS, respectively. We analyzed the advantages of hands collaboration features in the skills assessment and style recognition of robotic surgery tasks. Furthermore, we demonstrated that incorporating intuitive domain knowledge of bimanual tasks potentially paves the way for other complex applications, including, but not limited to, autonomous surgery with a high level of model explainability and interpretability. Finally, we presented preliminary results to argue that incorporating hands collaboration features in deep learning-based classifiers reduces uncertainty, improves accuracy, and enhances the out-of-distribution robustness of the final model.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Reinforcement Learning Approach for Real-Time Articulated Surgical Instrument 3-D Pose Reconstruction","authors":"Ke Fan;Ziyang Chen;Qiaoling Liu;Giancarlo Ferrigno;Elena De Momi","doi":"10.1109/TMRB.2024.3464089","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464089","url":null,"abstract":"3D pose reconstruction of surgical instruments from images stands as a critical component in environment perception within robotic minimally invasive surgery (RMIS). The current deep learning methods rely on complex networks to enhance accuracy, making real-time implementation difficult. Moreover, diverging from a singular rigid body, surgical instruments exhibit an articulation structure, making the annotation of 3D poses more challenging. In this paper, we present a novel approach to formulate the 3D pose reconstruction of articulated surgical instruments as a Markov Decision Process (MDP). A Reinforcement Learning (RL) agent employs 2D image labels to control a virtual articulated skeleton to reproduce the 3D pose of the real surgical instrument. Firstly, a convolutional neural network is used to estimate the 2D pixel positions of joint nodes of the surgical instrument skeleton. Subsequently, the agent controls the 3D virtual articulated skeleton to align its joint nodes’ projections on the image plane with those in the real image. Validation of our proposed method is conducted using a semi-synthetic dataset with precise 3D pose labels and two real datasets, demonstrating the accuracy and efficacy of our approach. The results indicate the potential of our method in achieving real-time 3D pose reconstruction for articulated surgical instruments in the context of RMIS, addressing the challenges posed by low-texture surfaces and articulated structures.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot-Assisted Reduction of the Ankle Joint via Multi-Body 3D–2D Image Registration","authors":"R. C. Vijayan;N. M. Sheth;J. Wei;K. Venkataraman;D. Ghanem;B. Shafiq;J. H. Siewerdsen;W. Zbijewski;G. Li;K. Cleary;A. Uneri","doi":"10.1109/TMRB.2024.3464095","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464095","url":null,"abstract":"Robot-assisted orthopaedic joint reduction offers enhanced precision and control across multiple axes of motion, enabling precise realignment according to predefined plans. However, the high levels of forces encountered may induce unintended anatomical motion and flex mechanical components. To address this, this work presents an approach that uses 2D fluoroscopic imaging to verify and readjust the 3D reduction path by tracking deviations from the planned trajectory. The proposed method involves a 3D-2D registration algorithm using a pair of fluoroscopic images, along with prior models of each body in the radiographic scene. This objective is formulated to couple and constrain multiple object poses (fibula, tibia, talus, and robot end effector), and incorporate novel methods for automatic view and hyperparameter selection to improve robustness. The algorithms were refined through cadaver studies and evaluated in a preclinical trial, employing a robotic system to manipulate a dislocated fibula. Studies with cadaveric specimens highlighted the joint-specific formulation’s high registration accuracy (\u0000<inline-formula> <tex-math>$Delta _{x} {=} 0.3~pm ~1$ </tex-math></inline-formula>\u0000.5 mm), further improved with the use of automatic view and hyperparameter selection (\u0000<inline-formula> <tex-math>$Delta _{x} {=} 0.2~pm ~0$ </tex-math></inline-formula>\u0000.8 mm). Preclinical studies demonstrated a high deviation between the intended and the actual path of the robotic system, which was accurately captured (\u0000<inline-formula> <tex-math>$Delta _{x}$ </tex-math></inline-formula>\u0000 1 mm) using the proposed techniques. The solution offers to close the loop on image-based guidance of robot-assisted joint reduction by tracking the robot and bones to dynamically correct the course. The approach uses standard clinical images and is expected to lower radiation exposure by providing 3D information and allowing the staff to stay clear of the x-ray beam.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":null,"pages":null},"PeriodicalIF":3.4,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}