{"title":"PlutoNet: An efficient polyp segmentation network with modified partial decoder and decoder consistency training","authors":"Tugberk Erol, Duygu Sarikaya","doi":"10.1049/htl2.12105","DOIUrl":"10.1049/htl2.12105","url":null,"abstract":"<p>Deep learning models are used to minimize the number of polyps that goes unnoticed by the experts and to accurately segment the detected polyps during interventions. Although state-of-the-art models are proposed, it remains a challenge to define representations that are able to generalize well and that mediate between capturing low-level features and higher-level semantic details without being redundant. Another challenge with these models is that they are computation and memory intensive, which can pose a problem with real-time applications. To address these problems, PlutoNet is proposed for polyp segmentation which requires only 9 FLOPs and 2,626,537 parameters, less than 10% of the parameters required by its counterparts. With PlutoNet, a novel <i>decoder consistency training</i> approach is proposed that consists of a shared encoder, the <i>modified partial decoder</i>, which is a combination of the partial decoder and full-scale connections that capture salient features at different scales without redundancy, and the auxiliary decoder which focuses on higher-level semantic features. The <i>modified partial decoder</i> and the auxiliary decoder are trained with a combined loss to enforce consistency, which helps strengthen learned representations. Ablation studies and experiments are performed which show that PlutoNet performs significantly better than the state-of-the-art models, particularly on unseen datasets.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"365-373"},"PeriodicalIF":2.8,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665777/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural fields for 3D tracking of anatomy and surgical instruments in monocular laparoscopic video clips","authors":"Beerend G. A. Gerats, Jelmer M. Wolterink, Seb P. Mol, Ivo A. M. J. Broeders","doi":"10.1049/htl2.12113","DOIUrl":"10.1049/htl2.12113","url":null,"abstract":"<p>Laparoscopic video tracking primarily focuses on two target types: surgical instruments and anatomy. The former could be used for skill assessment, while the latter is necessary for the projection of virtual overlays. Where instrument and anatomy tracking have often been considered two separate problems, in this article, a method is proposed for joint tracking of all structures simultaneously. Based on a single 2D monocular video clip, a neural field is trained to represent a continuous spatiotemporal scene, used to create 3D tracks of all surfaces visible in at least one frame. Due to the small size of instruments, they generally cover a small part of the image only, resulting in decreased tracking accuracy. Therefore, enhanced class weighting is proposed to improve the instrument tracks. The authors evaluate tracking on video clips from laparoscopic cholecystectomies, where they find mean tracking accuracies of 92.4% for anatomical structures and 87.4% for instruments. Additionally, the quality of depth maps obtained from the method's scene reconstructions is assessed. It is shown that these pseudo-depths have comparable quality to a state-of-the-art pre-trained depth estimator. On laparoscopic videos in the SCARED dataset, the method predicts depth with an MAE of 2.9 mm and a relative error of 9.2%. These results show the feasibility of using neural fields for monocular 3D reconstruction of laparoscopic scenes. Code is available via GitHub: https://github.com/Beerend/Surgical-OmniMotion.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"411-417"},"PeriodicalIF":2.8,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665779/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design, development and evaluation of registry software for upper limb disabilities","authors":"Khadijeh Moulaei, Abbas Sheikhtaheri, AliAkbar Haghdoost, Mansour Shahabi Nezhad, Kambiz Bahaadinbeigy","doi":"10.1049/htl2.12115","DOIUrl":"10.1049/htl2.12115","url":null,"abstract":"<p>Upper limb disabilities, if not managed, controlled and treated, significantly affect the physical and mental condition, daily activities and quality of life. Registries can help control and manage and even treat these disabilities by collecting clinical-management data of upper limb disabilities. Therefore, the aim of this study is to design, develop and evaluate a registry system for upper limb disabilities in terms of usability. By having identified data elements in the exploratory phase, we developed our registry software using hypertext preprocessor (PHP) programming language in XAMPP software, version 8.1.10. The content and interface validity of the pre-final version were assessed by 13 experts in the field of medical informatics and health information management. The registry has capabilities to create user profiles, record patient history, clinical records, independence in daily activities, mental health, and treatment processes. It can also generate statistical reports. Participants evaluated the registry's usability as “good” across different dimensions. The registry can help understand upper limb disabilities, improve care, reduce costs and errors, determine incidence and prevalence, evaluate prevention and treatment, and support research and policymaking. The registry can serve as a model for designing registries for other body disabilities.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"496-503"},"PeriodicalIF":2.8,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665789/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142885767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual reality-based preoperative planning for optimized trocar placement in thoracic surgery: A preliminary study","authors":"Arash Harirpoush, George Rakovich, Marta Kersten-Oertel, Yiming Xiao","doi":"10.1049/htl2.12114","DOIUrl":"10.1049/htl2.12114","url":null,"abstract":"<p>Video-assisted thoracic surgery (VATS) is a minimally invasive approach for treating early-stage non-small-cell lung cancer. Optimal trocar placement during VATS ensures comprehensive access to the thoracic cavity, provides a panoramic endoscopic view, and prevents instrument crowding. While established principles such as the Baseball Diamond Principle (BDP) and Triangle Target Principle (TTP) exist, surgeons mainly rely on experience and patient-specific anatomy for trocar placement, potentially leading to sub-optimal surgical plans that increase operative time and fatigue. To address this, the authors present the first virtual reality (VR)-based pre-operative planning tool with tailored data visualization and interaction designs for efficient and optimal VATS trocar placement, following the established surgical principles and consultation with an experienced surgeon. In the preliminary study, the system's application in right upper lung lobectomy is demonstrated, a common thoracic procedure typically using three trocars. A preliminary user study of the system indicates it is efficient, robust, and user-friendly for planning optimal trocar placement, with a great promise for clinical application while offering potentially valuable insights for the development of other surgical VR systems.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"418-426"},"PeriodicalIF":2.8,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665775/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AR-assisted surgery: Precision placement of patient specific hip implants based on 3D printed PSIs","authors":"Amaia Iribar-Zabala, Tanya Fernández-Fernández, Javier Orozco-Martínez, José Calvo-Haro, Rubén Pérez-Mañanes, Elena Aguilera-Jiménez, Carla de Gregorio-Bermejo, Rafael Benito, Alicia Pose-Díez-de-la-Lastra, Andoni Beristain-Iraola, Javier Pascau, Mónica García-Sevilla","doi":"10.1049/htl2.12112","DOIUrl":"10.1049/htl2.12112","url":null,"abstract":"<p>Patient-specific implant placement in the case of pelvic tumour resection is usually a complex procedure, where the planned optimal position of the prosthesis may differ from the final location. This discrepancy arises from incorrect or differently executed bone resection and improper final positioning of the prosthesis. In order to overcome such mismatch, a navigation solution is presented based on an augmented reality application for HoloLens 2 to assist the entire procedure. This involves placing patient-specific instruments for tumour resection guidance in the supraacetabular, ischial and symphysial regions, performing the osteotomy and assisting within the adequate positioning of the implant. The supraacetabular patient-specific instrument and the prosthesis included optical markers attached to them to be used as reference for surgical guidance. The proposed application and workflow were validated by two clinicians on six phantoms, designed and fabricated from different cadaver specimens. The accuracy of the solution was evaluated by comparing the final position after navigation with the position defined in the surgical plan. Preliminary assessment shows promising results for the guidance system, with positive clinician feedback.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"402-410"},"PeriodicalIF":2.8,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665800/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BREAST+: An augmented reality interface that speeds up perforator marking for DIEAP flap reconstruction surgery","authors":"Rafaela Timóteo, David Pinto, Pedro Matono, Carlos Mavioso, Maria-João Cardoso, Pedro Gouveia, Tiago Marques, Daniel Simões Lopes","doi":"10.1049/htl2.12095","DOIUrl":"10.1049/htl2.12095","url":null,"abstract":"<p>Deep inferior epigastric artery perforator flap reconstruction is a common technique for breast reconstruction surgery in cancer patients. Preoperative planning typically depends on radiological reports and 2D images to help surgeons locate abdominal perforator vessels before surgery. Here, BREAST+, an augmented reality interface for the HoloLens 2, designed to facilitate accurate marking of perforator locations on the patients' skin and to seamlessly access relevant clinical data in the operating room is proposed. The system is evaluated in a controlled setting by conducting a user study with 27 medical students and 2 breast surgeons. Quantitative (marking error, task completion time, and number of task repetitions) and qualitative (perceived usability, perceived workload, user preference and user satisfaction) data are collected to assess BREAST+ performance during perforator marking. The average time taken to mark each perforator is 7.7 ± 6.5 s, with an average absolute error of 6.8 ± 2.6 mm and an estimated average deviation of 3.6 ± 1.4 mm. The results revealed non-negligeable biases in user estimates likely attributed to depth perception inaccuracies. Still, the study concluded that BREAST+ is both accurate and considerably more efficient (∼6 times faster) when compared to the conventional perforator marking approach.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"301-306"},"PeriodicalIF":2.8,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665790/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142885466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intraoperative patient-specific volumetric reconstruction and 3D visualization for laparoscopic liver surgery","authors":"Luca Boretto, Egidijus Pelanis, Alois Regensburger, Kaloian Petkov, Rafael Palomar, Åsmund Avdem Fretland, Bjørn Edwin, Ole Jakob Elle","doi":"10.1049/htl2.12106","DOIUrl":"10.1049/htl2.12106","url":null,"abstract":"<p>Despite the benefits of minimally invasive surgery, interventions such as laparoscopic liver surgery present unique challenges, like the significant anatomical differences between preoperative images and intraoperative scenes due to pneumoperitoneum, patient pose, and organ manipulation by surgical instruments. To address these challenges, a method for intraoperative three-dimensional reconstruction of the surgical scene, including vessels and tumors, without altering the surgical workflow, is proposed. The technique combines neural radiance field reconstructions from tracked laparoscopic videos with ultrasound three-dimensional compounding. The accuracy of our reconstructions on a clinical laparoscopic liver ablation dataset, consisting of laparoscope and patient reference posed from optical tracking, laparoscopic and ultrasound videos, as well as preoperative and intraoperative computed tomographies, is evaluated. The authors propose a solution to compensate for liver deformations due to pressure applied during ultrasound acquisitions, improving the overall accuracy of the three-dimensional reconstructions compared to the ground truth intraoperative computed tomography with pneumoperitoneum. A unified neural radiance field from the ultrasound and laparoscope data, which allows real-time view synthesis providing surgeons with comprehensive intraoperative visual information for laparoscopic liver surgery, is trained.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"374-383"},"PeriodicalIF":2.8,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665787/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FBG-driven simulation for virtual augmentation of fluoroscopic images during endovascular interventions","authors":"Valentina Scarponi, Juan Verde, Nazim Haouchine, Michel Duprez, Florent Nageotte, Stéphane Cotin","doi":"10.1049/htl2.12108","DOIUrl":"10.1049/htl2.12108","url":null,"abstract":"<p>Endovascular interventions are procedures designed to diagnose and treat vascular diseases, using catheters to navigate inside arteries and veins. Thanks to their minimal invasiveness, they offer many benefits, such as reduced pain and hospital stays, but also present many challenges for clinicians, as they require specialized training and heavy use of X-rays. This is particularly relevant when accessing (i.e. cannulating) small arteries with steep angles, such as most aortic branches. To address this difficulty, a novel solution that enhances fluoroscopic 2D images in real-time by displaying virtual configurations of the catheter and guidewire is proposed. In contrast to existing works, proposing either simulators or simple augmented reality frameworks, this approach involves a predictive simulation showing the resulting shape of the catheter after guidewire withdrawal without requiring the clinician to perform this task. This system demonstrated accurate prediction with a mean 3D error of 2.4 <span></span><math>\u0000 <semantics>\u0000 <mo>±</mo>\u0000 <annotation>$pm$</annotation>\u0000 </semantics></math> 1.3 mm and a mean error of 1.1 <span></span><math>\u0000 <semantics>\u0000 <mo>±</mo>\u0000 <annotation>$pm$</annotation>\u0000 </semantics></math> 0.7 mm on the fluoroscopic image plane between the real catheter shape after guidewire withdrawal and the predicted shape. A user study reported an average intervention time reduction of 56<span></span><math>\u0000 <semantics>\u0000 <mo>%</mo>\u0000 <annotation>$%$</annotation>\u0000 </semantics></math> when adopting this system, resulting in a lower X-ray exposure.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"392-401"},"PeriodicalIF":2.8,"publicationDate":"2024-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665791/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142885955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"StraightTrack: Towards mixed reality navigation system for percutaneous K-wire insertion","authors":"Han Zhang, Benjamin D. Killeen, Yu-Chun Ku, Lalithkumar Seenivasan, Yuxuan Zhao, Mingxu Liu, Yue Yang, Suxi Gu, Alejandro Martin-Gomez, Taylor, Greg Osgood, Mathias Unberath","doi":"10.1049/htl2.12103","DOIUrl":"10.1049/htl2.12103","url":null,"abstract":"<p>In percutaneous pelvic trauma surgery, accurate placement of Kirschner wires (K-wires) is crucial to ensure effective fracture fixation and avoid complications due to breaching the cortical bone along an unsuitable trajectory. Surgical navigation via mixed reality (MR) can help achieve precise wire placement in a low-profile form factor. Current approaches in this domain are as yet unsuitable for real-world deployment because they fall short of guaranteeing accurate visual feedback due to uncontrolled bending of the wire. To ensure accurate feedback, StraightTrack, an MR navigation system designed for percutaneous wire placement in complex anatomy, is introduced. StraightTrack features a marker body equipped with a rigid access cannula that mitigates wire bending due to interactions with soft tissue and a covered bony surface. Integrated with an optical see-through head-mounted display capable of tracking the cannula body, StraightTrack offers real-time 3D visualization and guidance without external trackers, which are prone to losing line-of-sight. In phantom experiments with two experienced orthopedic surgeons, StraightTrack improves wire placement accuracy, achieving the ideal trajectory within <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>5.26</mn>\u0000 <mo>±</mo>\u0000 <mn>2.29</mn>\u0000 </mrow>\u0000 <annotation>$5.26 pm 2.29$</annotation>\u0000 </semantics></math> mm and <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>2.88</mn>\u0000 <mo>±</mo>\u0000 <mn>1.49</mn>\u0000 </mrow>\u0000 <annotation>$2.88 pm 1.49$</annotation>\u0000 </semantics></math><span></span><math>\u0000 <semantics>\u0000 <msup>\u0000 <mrow></mrow>\u0000 <mo>∘</mo>\u0000 </msup>\u0000 <annotation>$^circ$</annotation>\u0000 </semantics></math>, compared to over 12.08 mm and 4.07<span></span><math>\u0000 <semantics>\u0000 <msup>\u0000 <mrow></mrow>\u0000 <mo>∘</mo>\u0000 </msup>\u0000 <annotation>$^circ$</annotation>\u0000 </semantics></math> for comparable methods. As MR navigation systems continue to mature, StraightTrack realizes their potential for internal fracture fixation and other percutaneous orthopedic procedures.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"355-364"},"PeriodicalIF":2.8,"publicationDate":"2024-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665788/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated surgical skill assessment in endoscopic pituitary surgery using real-time instrument tracking on a high-fidelity bench-top phantom","authors":"Adrito Das, Bilal Sidiqi, Laurent Mennillo, Zhehua Mao, Mikael Brudfors, Miguel Xochicale, Danyal Z. Khan, Nicola Newall, John G. Hanrahan, Matthew J. Clarkson, Danail Stoyanov, Hani J. Marcus, Sophia Bano","doi":"10.1049/htl2.12101","DOIUrl":"10.1049/htl2.12101","url":null,"abstract":"<p>Improved surgical skill is generally associated with improved patient outcomes, although assessment is subjective, labour intensive, and requires domain-specific expertise. Automated data-driven metrics can alleviate these difficulties, as demonstrated by existing machine learning instrument tracking models. However, these models are tested on limited datasets of laparoscopic surgery, with a focus on isolated tasks and robotic surgery. Here, a new public dataset is introduced: the nasal phase of simulated endoscopic pituitary surgery. Simulated surgery allows for a realistic yet repeatable environment, meaning the insights gained from automated assessment can be used by novice surgeons to hone their skills on the simulator before moving to real surgery. Pituitary Real-time INstrument Tracking Network (PRINTNet) has been created as a baseline model for this automated assessment. Consisting of DeepLabV3 for classification and segmentation, StrongSORT for tracking, and the NVIDIA Holoscan for real-time performance, PRINTNet achieved 71.9% multiple object tracking precision running at 22 frames per second. Using this tracking output, a multilayer perceptron achieved 87% accuracy in predicting surgical skill level (novice or expert), with the ‘ratio of total procedure time to instrument visible time’ correlated with higher surgical skill. The new publicly available dataset can be found at https://doi.org/10.5522/04/26511049.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"336-344"},"PeriodicalIF":2.8,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665785/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142884851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}