Healthcare Technology Letters — Latest Articles

Virtual reality-based preoperative planning for optimized trocar placement in thoracic surgery: A preliminary study
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-11 DOI: 10.1049/htl2.12114
Arash Harirpoush, George Rakovich, Marta Kersten-Oertel, Yiming Xiao
Video-assisted thoracic surgery (VATS) is a minimally invasive approach for treating early-stage non-small-cell lung cancer. Optimal trocar placement during VATS ensures comprehensive access to the thoracic cavity, provides a panoramic endoscopic view, and prevents instrument crowding. While established principles such as the Baseball Diamond Principle (BDP) and the Triangle Target Principle (TTP) exist, surgeons rely mainly on experience and patient-specific anatomy for trocar placement, potentially leading to sub-optimal surgical plans that increase operative time and fatigue. To address this, the authors present the first virtual reality (VR)-based pre-operative planning tool with tailored data visualization and interaction designs for efficient and optimal VATS trocar placement, following the established surgical principles and consultation with an experienced surgeon. The preliminary study demonstrates the system's application in right upper lung lobectomy, a common thoracic procedure that typically uses three trocars. A preliminary user study indicates that the system is efficient, robust, and user-friendly for planning optimal trocar placement, showing great promise for clinical application while offering valuable insights for the development of other surgical VR systems.

Volume 11, Issue 6, pp. 418-426. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665775/pdf/
Citations: 0
AR-assisted surgery: Precision placement of patient specific hip implants based on 3D printed PSIs
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-10 DOI: 10.1049/htl2.12112
Amaia Iribar-Zabala, Tanya Fernández-Fernández, Javier Orozco-Martínez, José Calvo-Haro, Rubén Pérez-Mañanes, Elena Aguilera-Jiménez, Carla de Gregorio-Bermejo, Rafael Benito, Alicia Pose-Díez-de-la-Lastra, Andoni Beristain-Iraola, Javier Pascau, Mónica García-Sevilla
Patient-specific implant placement in pelvic tumour resection is usually a complex procedure, where the planned optimal position of the prosthesis may differ from its final location. This discrepancy arises from incorrectly or differently executed bone resection and improper final positioning of the prosthesis. To overcome this mismatch, a navigation solution is presented based on an augmented reality application for HoloLens 2 that assists the entire procedure: placing patient-specific instruments for tumour-resection guidance in the supraacetabular, ischial, and symphysial regions, performing the osteotomy, and assisting with the adequate positioning of the implant. The supraacetabular patient-specific instrument and the prosthesis carry attached optical markers used as references for surgical guidance. The proposed application and workflow were validated by two clinicians on six phantoms designed and fabricated from different cadaver specimens. The accuracy of the solution was evaluated by comparing the final position after navigation with the position defined in the surgical plan. Preliminary assessment shows promising results for the guidance system, with positive clinician feedback.

Volume 11, Issue 6, pp. 402-410. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665800/pdf/
Citations: 0
BREAST+: An augmented reality interface that speeds up perforator marking for DIEAP flap reconstruction surgery
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-10 DOI: 10.1049/htl2.12095
Rafaela Timóteo, David Pinto, Pedro Matono, Carlos Mavioso, Maria-João Cardoso, Pedro Gouveia, Tiago Marques, Daniel Simões Lopes
Deep inferior epigastric artery perforator (DIEAP) flap reconstruction is a common technique for breast reconstruction surgery in cancer patients. Preoperative planning typically depends on radiological reports and 2D images to help surgeons locate abdominal perforator vessels before surgery. Here, BREAST+ is proposed: an augmented reality interface for the HoloLens 2 designed to facilitate accurate marking of perforator locations on the patient's skin and to provide seamless access to relevant clinical data in the operating room. The system is evaluated in a controlled setting through a user study with 27 medical students and 2 breast surgeons. Quantitative (marking error, task completion time, and number of task repetitions) and qualitative (perceived usability, perceived workload, user preference, and user satisfaction) data are collected to assess BREAST+ performance during perforator marking. The average time taken to mark each perforator is 7.7 ± 6.5 s, with an average absolute error of 6.8 ± 2.6 mm and an estimated average deviation of 3.6 ± 1.4 mm. The results revealed non-negligible biases in user estimates, likely attributable to depth-perception inaccuracies. Still, the study concluded that BREAST+ is both accurate and considerably more efficient (about 6 times faster) than the conventional perforator-marking approach.

Volume 11, Issue 6, pp. 301-306. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665790/pdf/
Citations: 0
Intraoperative patient-specific volumetric reconstruction and 3D visualization for laparoscopic liver surgery
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-09 DOI: 10.1049/htl2.12106
Luca Boretto, Egidijus Pelanis, Alois Regensburger, Kaloian Petkov, Rafael Palomar, Åsmund Avdem Fretland, Bjørn Edwin, Ole Jakob Elle
Despite the benefits of minimally invasive surgery, interventions such as laparoscopic liver surgery present unique challenges, like the significant anatomical differences between preoperative images and intraoperative scenes caused by pneumoperitoneum, patient pose, and organ manipulation by surgical instruments. To address these challenges, a method is proposed for intraoperative three-dimensional reconstruction of the surgical scene, including vessels and tumors, without altering the surgical workflow. The technique combines neural radiance field reconstructions from tracked laparoscopic videos with three-dimensional ultrasound compounding. Reconstruction accuracy is evaluated on a clinical laparoscopic liver ablation dataset consisting of laparoscope and patient reference poses from optical tracking, laparoscopic and ultrasound videos, and preoperative and intraoperative computed tomographies. The authors propose a solution to compensate for liver deformations caused by the pressure applied during ultrasound acquisition, improving the overall accuracy of the three-dimensional reconstructions compared to the ground-truth intraoperative computed tomography with pneumoperitoneum. A unified neural radiance field is trained from the ultrasound and laparoscope data, allowing real-time view synthesis that provides surgeons with comprehensive intraoperative visual information for laparoscopic liver surgery.

Volume 11, Issue 6, pp. 374-383. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665787/pdf/
Citations: 0
FBG-driven simulation for virtual augmentation of fluoroscopic images during endovascular interventions
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-07 DOI: 10.1049/htl2.12108
Valentina Scarponi, Juan Verde, Nazim Haouchine, Michel Duprez, Florent Nageotte, Stéphane Cotin
Endovascular interventions are procedures designed to diagnose and treat vascular diseases, using catheters to navigate inside arteries and veins. Thanks to their minimal invasiveness, they offer many benefits, such as reduced pain and shorter hospital stays, but they also present many challenges for clinicians, as they require specialized training and heavy use of X-rays. This is particularly relevant when accessing (i.e. cannulating) small arteries with steep angles, such as most aortic branches. To address this difficulty, a novel solution is proposed that enhances 2D fluoroscopic images in real time by displaying virtual configurations of the catheter and guidewire. In contrast to existing works, which propose either simulators or simple augmented reality frameworks, this approach involves a predictive simulation showing the resulting shape of the catheter after guidewire withdrawal, without requiring the clinician to perform this task. The system demonstrated accurate prediction, with a mean 3D error of 2.4 ± 1.3 mm and a mean error of 1.1 ± 0.7 mm on the fluoroscopic image plane between the real catheter shape after guidewire withdrawal and the predicted shape. A user study reported an average intervention-time reduction of 56% when adopting the system, resulting in lower X-ray exposure.

Volume 11, Issue 6, pp. 392-401. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665791/pdf/
Citations: 0
StraightTrack: Towards mixed reality navigation system for percutaneous K-wire insertion
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-07 DOI: 10.1049/htl2.12103
Han Zhang, Benjamin D. Killeen, Yu-Chun Ku, Lalithkumar Seenivasan, Yuxuan Zhao, Mingxu Liu, Yue Yang, Suxi Gu, Alejandro Martin-Gomez,  Taylor, Greg Osgood, Mathias Unberath
In percutaneous pelvic trauma surgery, accurate placement of Kirschner wires (K-wires) is crucial to ensure effective fracture fixation and to avoid complications from breaching the cortical bone along an unsuitable trajectory. Surgical navigation via mixed reality (MR) can help achieve precise wire placement in a low-profile form factor. Current approaches in this domain are as yet unsuitable for real-world deployment because they fall short of guaranteeing accurate visual feedback due to uncontrolled bending of the wire. To ensure accurate feedback, StraightTrack is introduced: an MR navigation system designed for percutaneous wire placement in complex anatomy. StraightTrack features a marker body equipped with a rigid access cannula that mitigates wire bending caused by interactions with soft tissue and a covered bony surface. Integrated with an optical see-through head-mounted display capable of tracking the cannula body, StraightTrack offers real-time 3D visualization and guidance without external trackers, which are prone to losing line of sight. In phantom experiments with two experienced orthopedic surgeons, StraightTrack improves wire-placement accuracy, achieving the ideal trajectory within 5.26 ± 2.29 mm and 2.88 ± 1.49°, compared to over 12.08 mm and 4.07° for comparable methods. As MR navigation systems continue to mature, StraightTrack realizes their potential for internal fracture fixation and other percutaneous orthopedic procedures.

Volume 11, Issue 6, pp. 355-364. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665788/pdf/
Citations: 0
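The accuracy figures above pair an entry-point distance with an angular deviation between the planned and achieved wire trajectories. Purely as an illustration (this is not the authors' evaluation code, and the function name and inputs are hypothetical), such metrics might be computed along these lines, assuming each trajectory is given as an entry point plus a direction vector:

```python
import numpy as np

def trajectory_errors(entry_plan, dir_plan, entry_real, dir_real):
    """Distance between entry points (mm) and angle between wire
    directions (degrees) for a planned vs. achieved trajectory."""
    d_plan = np.asarray(dir_plan, float)
    d_real = np.asarray(dir_real, float)
    d_plan = d_plan / np.linalg.norm(d_plan)
    d_real = d_real / np.linalg.norm(d_real)
    dist = float(np.linalg.norm(np.asarray(entry_real, float)
                                - np.asarray(entry_plan, float)))
    # Clip to avoid NaN from floating-point overshoot in arccos
    cos_a = np.clip(abs(np.dot(d_plan, d_real)), -1.0, 1.0)
    angle = float(np.degrees(np.arccos(cos_a)))
    return dist, angle

# Toy case: entry off by a 3-4-5 triangle, direction tilted by atan(0.1)
dist, angle = trajectory_errors([0, 0, 0], [0, 0, 1], [3, 4, 0], [0, 0.1, 1.0])
```

The absolute value on the dot product makes the angle insensitive to the sign convention of the direction vectors.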
Automated surgical skill assessment in endoscopic pituitary surgery using real-time instrument tracking on a high-fidelity bench-top phantom
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-02 DOI: 10.1049/htl2.12101
Adrito Das, Bilal Sidiqi, Laurent Mennillo, Zhehua Mao, Mikael Brudfors, Miguel Xochicale, Danyal Z. Khan, Nicola Newall, John G. Hanrahan, Matthew J. Clarkson, Danail Stoyanov, Hani J. Marcus, Sophia Bano
Improved surgical skill is generally associated with improved patient outcomes, although assessment is subjective, labour-intensive, and requires domain-specific expertise. Automated data-driven metrics can alleviate these difficulties, as demonstrated by existing machine-learning instrument-tracking models. However, these models have been tested on limited datasets of laparoscopic surgery, with a focus on isolated tasks and robotic surgery. Here, a new public dataset is introduced: the nasal phase of simulated endoscopic pituitary surgery. Simulated surgery allows for a realistic yet repeatable environment, meaning the insights gained from automated assessment can be used by novice surgeons to hone their skills on the simulator before moving to real surgery. The Pituitary Real-time INstrument Tracking Network (PRINTNet) is created as a baseline model for this automated assessment. Consisting of DeepLabV3 for classification and segmentation, StrongSORT for tracking, and NVIDIA Holoscan for real-time performance, PRINTNet achieved 71.9% multiple-object-tracking precision running at 22 frames per second. Using this tracking output, a multilayer perceptron achieved 87% accuracy in predicting surgical skill level (novice or expert), with the 'ratio of total procedure time to instrument visible time' correlated with higher surgical skill. The new publicly available dataset can be found at https://doi.org/10.5522/04/26511049.

Volume 11, Issue 6, pp. 336-344. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665785/pdf/
Citations: 0
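The skill-correlated feature quoted above, the ratio of total procedure time to instrument-visible time, can be derived directly from per-frame tracking output. A minimal sketch under that reading (variable names and the 22 fps figure from the abstract; this is not the PRINTNet code):

```python
import numpy as np

def visibility_ratio(instrument_visible, fps=22):
    """Ratio of total procedure time to instrument-visible time,
    computed from a boolean per-frame visibility array
    (e.g. whether the tracker detected the instrument)."""
    visible = np.asarray(instrument_visible, bool)
    total_s = visible.size / fps
    visible_s = visible.sum() / fps
    if visible_s == 0:
        return float("inf")  # instrument never seen
    return total_s / visible_s

# Toy example: 2200 frames (100 s at 22 fps), instrument visible in 1760
frames = np.zeros(2200, bool)
frames[:1760] = True
ratio = visibility_ratio(frames)  # 2200 / 1760 = 1.25
```

A ratio near 1.0 means the instrument was almost always in view; larger values indicate more off-camera time.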
Calibration-Jitter: Augmentation of hyperspectral data for improved surgical scene segmentation
IF 2.8
Healthcare Technology Letters Pub Date : 2024-11-29 DOI: 10.1049/htl2.12102
Alfie Roddan, Tobias Czempiel, Daniel S. Elson, Stamatia Giannarou
Semantic surgical scene segmentation is crucial for accurately identifying and delineating different tissue types during surgery, enhancing outcomes and reducing complications. Hyperspectral imaging provides detailed information beyond visible color filters, offering an enhanced view of tissue characteristics. Combined with machine learning, it supports critical tumor-resection decisions. Traditional augmentations fail to effectively train machine-learning models on illumination and sensor-sensitivity variations. Learning to handle these variations is crucial for models to generalize better, ultimately enhancing their reliability in deployment. In this article, Calibration-Jitter is introduced: a spectral augmentation technique that leverages hyperspectral calibration variations to improve predictive performance. Evaluated on scene segmentation on a neurosurgical dataset, Calibration-Jitter achieved an F1-score of 74.35% with SegFormer, surpassing the previous best of 70.2%. This advancement addresses the limitations of traditional augmentations, improving hyperspectral imaging segmentation performance.

Volume 11, Issue 6, pp. 345-354. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665780/pdf/
Citations: 0
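The abstract does not give implementation details, but a calibration-style spectral augmentation can be pictured as perturbing each wavelength band with a random gain, mimicking illumination and sensor-sensitivity variation. A hypothetical sketch under that assumption (not the published Calibration-Jitter method):

```python
import numpy as np

def spectral_jitter(cube, strength=0.05, rng=None):
    """Multiply each spectral band of an (H, W, C) hyperspectral cube
    by a random per-band gain drawn around 1.0, simulating
    calibration/illumination variation across acquisitions."""
    rng = np.random.default_rng(rng)
    gains = rng.uniform(1.0 - strength, 1.0 + strength, size=cube.shape[-1])
    return cube * gains  # broadcasts the gain over the band axis

cube = np.ones((4, 4, 8))               # toy 4x4 image with 8 bands
aug = spectral_jitter(cube, 0.05, rng=0)
```

Because the gain is per band rather than per pixel, the spatial structure of the scene is preserved while its spectral signature shifts slightly, which is the variation the model is meant to learn to tolerate.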
Occlusion-robust markerless surgical instrument pose estimation
IF 2.8
Healthcare Technology Letters Pub Date : 2024-11-27 DOI: 10.1049/htl2.12100
Haozheng Xu, Stamatia Giannarou
Estimating the pose of surgical instruments is important in robot-assisted minimally invasive surgery (RMIS) to assist surgical navigation and enable autonomous robotic task execution. The performance of current instrument pose estimation methods deteriorates significantly in the presence of partial tool visibility, occlusions, and changes in the surgical scene. In this work, a vision-based framework is proposed for markerless estimation of the 6DoF pose of surgical instruments. To deal with partial instrument visibility, a keypoint object representation is used, and stable, accurate instrument poses are computed using a PnP solver. To boost the learning process of the model under occlusion, a new mask-based data augmentation approach is proposed. To validate the model, a dataset for instrument pose estimation with highly accurate ground-truth data was generated using different surgical robotic instruments. The proposed network achieves submillimeter accuracy, and the experimental results verify its generalisability to different shapes of occlusion.

Volume 11, Issue 6, pp. 327-335. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665797/pdf/
Citations: 0
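The mask-based augmentation mentioned above is not specified in the abstract; one common way to simulate occlusion during training is to zero out a random region of the input image. A generic sketch under that assumption (the function and its parameters are illustrative, not the paper's method):

```python
import numpy as np

def random_occlusion(img, max_frac=0.3, rng=None):
    """Zero out a random rectangle covering up to max_frac of each
    side length, simulating a tool partially hidden by tissue."""
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    oh = int(rng.integers(1, max(2, int(h * max_frac))))
    ow = int(rng.integers(1, max(2, int(w * max_frac))))
    y = int(rng.integers(0, h - oh + 1))
    x = int(rng.integers(0, w - ow + 1))
    out = img.copy()            # leave the original image untouched
    out[y:y + oh, x:x + ow] = 0
    return out

img = np.ones((64, 64, 3))
aug = random_occlusion(img, rng=1)
```

Training a keypoint detector on such partially masked inputs encourages it to localize the visible keypoints even when part of the instrument is hidden, which is the scenario the PnP solver must then cope with.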
PitRSDNet: Predicting intra-operative remaining surgery duration in endoscopic pituitary surgery
IF 2.8
Healthcare Technology Letters Pub Date : 2024-11-25 DOI: 10.1049/htl2.12099
Anjana Wijekoon, Adrito Das, Roxana R. Herrera, Danyal Z. Khan, John Hanrahan, Eleanor Carter, Valpuri Luoma, Danail Stoyanov, Hani J. Marcus, Sophia Bano
Accurate intra-operative remaining surgery duration (RSD) predictions allow anaesthetists to decide more accurately when to administer anaesthetic agents and drugs, and to notify hospital staff to send in the next patient. RSD therefore plays an important role in improving patient care and minimising surgical theatre costs through efficient scheduling. Endoscopic pituitary surgery is uniquely challenging in this respect due to variable workflow sequences, with a selection of optional steps contributing to high variability in surgery duration. This article presents PitRSDNet, a spatio-temporal neural network model for predicting RSD during pituitary surgery that learns from historical data with a focus on workflow sequences. PitRSDNet integrates workflow knowledge into RSD prediction in two forms: (1) multi-task learning for concurrently predicting step and RSD; and (2) incorporating prior steps as context in temporal learning and inference. PitRSDNet is trained and evaluated on a new endoscopic pituitary surgery dataset with 88 videos, showing competitive performance improvements over previous statistical and machine-learning methods. The findings also highlight how PitRSDNet improves RSD precision on outlier cases by utilising knowledge of prior steps.

Volume 11, Issue 6, pp. 318-326. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665798/pdf/
Citations: 0
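For context, an RSD regression target is conventionally defined per frame as the surgery's total duration minus the elapsed time. A small illustration of how such labels could be built from video timestamps (a hypothetical helper for exposition, not the PitRSDNet training code):

```python
def rsd_labels(total_duration_s, frame_times_s):
    """Remaining-surgery-duration target for each frame timestamp:
    RSD(t) = total_duration - t, clamped at zero."""
    return [max(total_duration_s - t, 0.0) for t in frame_times_s]

# 30-minute surgery sampled at a few elapsed-time points
labels = rsd_labels(1800.0, [0.0, 60.0, 900.0, 1800.0])
```

The model then regresses these per-frame targets from the video, with step predictions learned jointly as described in the abstract.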