StraightTrack: Towards mixed reality navigation system for percutaneous K-wire insertion
Han Zhang, Benjamin D. Killeen, Yu-Chun Ku, Lalithkumar Seenivasan, Yuxuan Zhao, Mingxu Liu, Yue Yang, Suxi Gu, Alejandro Martin-Gomez, Taylor, Greg Osgood, Mathias Unberath
Healthcare Technology Letters, 11(6), 355-364, published 2024-12-07. DOI: 10.1049/htl2.12103. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665788/pdf/

Abstract: In percutaneous pelvic trauma surgery, accurate placement of Kirschner wires (K-wires) is crucial to ensure effective fracture fixation and avoid complications due to breaching the cortical bone along an unsuitable trajectory. Surgical navigation via mixed reality (MR) can help achieve precise wire placement in a low-profile form factor. Current approaches in this domain are as yet unsuitable for real-world deployment because they fall short of guaranteeing accurate visual feedback due to uncontrolled bending of the wire. To ensure accurate feedback, StraightTrack, an MR navigation system designed for percutaneous wire placement in complex anatomy, is introduced. StraightTrack features a marker body equipped with a rigid access cannula that mitigates wire bending due to interactions with soft tissue and a covered bony surface. Integrated with an optical see-through head-mounted display capable of tracking the cannula body, StraightTrack offers real-time 3D visualization and guidance without external trackers, which are prone to losing line-of-sight. In phantom experiments with two experienced orthopedic surgeons, StraightTrack improves wire placement accuracy, achieving the ideal trajectory within 5.26 ± 2.29 mm and 2.88 ± 1.49°, compared to over 12.08 mm and 4.07° for comparable methods. As MR navigation systems continue to mature, StraightTrack realizes their potential for internal fracture fixation and other percutaneous orthopedic procedures.
Automated surgical skill assessment in endoscopic pituitary surgery using real-time instrument tracking on a high-fidelity bench-top phantom
Adrito Das, Bilal Sidiqi, Laurent Mennillo, Zhehua Mao, Mikael Brudfors, Miguel Xochicale, Danyal Z. Khan, Nicola Newall, John G. Hanrahan, Matthew J. Clarkson, Danail Stoyanov, Hani J. Marcus, Sophia Bano
Healthcare Technology Letters, 11(6), 336-344, published 2024-12-02. DOI: 10.1049/htl2.12101. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665785/pdf/

Abstract: Improved surgical skill is generally associated with improved patient outcomes, although assessment is subjective, labour intensive, and requires domain-specific expertise. Automated data-driven metrics can alleviate these difficulties, as demonstrated by existing machine learning instrument tracking models. However, these models are tested on limited datasets of laparoscopic surgery, with a focus on isolated tasks and robotic surgery. Here, a new public dataset is introduced: the nasal phase of simulated endoscopic pituitary surgery. Simulated surgery allows for a realistic yet repeatable environment, meaning the insights gained from automated assessment can be used by novice surgeons to hone their skills on the simulator before moving to real surgery. Pituitary Real-time INstrument Tracking Network (PRINTNet) has been created as a baseline model for this automated assessment. Consisting of DeepLabV3 for classification and segmentation, StrongSORT for tracking, and NVIDIA Holoscan for real-time performance, PRINTNet achieved 71.9% multiple object tracking precision running at 22 frames per second. Using this tracking output, a multilayer perceptron achieved 87% accuracy in predicting surgical skill level (novice or expert), with the ‘ratio of total procedure time to instrument visible time’ correlated with higher surgical skill. The new publicly available dataset can be found at https://doi.org/10.5522/04/26511049.
Calibration-Jitter: Augmentation of hyperspectral data for improved surgical scene segmentation
Alfie Roddan, Tobias Czempiel, Daniel S. Elson, Stamatia Giannarou
Healthcare Technology Letters, 11(6), 345-354, published 2024-11-29. DOI: 10.1049/htl2.12102. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665780/pdf/

Abstract: Semantic surgical scene segmentation is crucial for accurately identifying and delineating different tissue types during surgery, enhancing outcomes and reducing complications. Hyperspectral imaging provides detailed information beyond visible color filters, offering an enhanced view of tissue characteristics. Combined with machine learning, it supports critical tumor resection decisions. Traditional augmentations fail to effectively train machine learning models on illumination and sensor sensitivity variations. Learning to handle these variations is crucial to enable models to better generalize, ultimately enhancing their reliability in deployment. In this article, Calibration-Jitter is introduced, a spectral augmentation technique that leverages hyperspectral calibration variations to improve predictive performance. Evaluated on scene segmentation on a neurosurgical dataset, Calibration-Jitter achieved an F1-score of 74.35% with SegFormer, surpassing the previous best of 70.2%. This advancement addresses limitations of traditional augmentations, improving hyperspectral imaging segmentation performance.
{"title":"Occlusion-robust markerless surgical instrument pose estimation","authors":"Haozheng Xu, Stamatia Giannarou","doi":"10.1049/htl2.12100","DOIUrl":"10.1049/htl2.12100","url":null,"abstract":"<p>The estimation of the pose of surgical instruments is important in Robot-assisted Minimally Invasive Surgery (RMIS) to assist surgical navigation and enable autonomous robotic task execution. The performance of current instrument pose estimation methods deteriorates significantly in the presence of partial tool visibility, occlusions, and changes in the surgical scene. In this work, a vision-based framework is proposed for markerless estimation of the 6DoF pose of surgical instruments. To deal with partial instrument visibility, a keypoint object representation is used and stable and accurate instrument poses are computed using a PnP solver. To boost the learning process of the model under occlusion, a new mask-based data augmentation approach has been proposed. To validate the model, a dataset for instrument pose estimation with highly accurate ground truth data has been generated using different surgical robotic instruments. The proposed network can achieve submillimeter accuracy and the experimental results verify its generalisability to different shapes of occlusion.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"327-335"},"PeriodicalIF":3.3,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665797/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PitRSDNet: Predicting intra-operative remaining surgery duration in endoscopic pituitary surgery
Anjana Wijekoon, Adrito Das, Roxana R. Herrera, Danyal Z. Khan, John Hanrahan, Eleanor Carter, Valpuri Luoma, Danail Stoyanov, Hani J. Marcus, Sophia Bano
Healthcare Technology Letters, 11(6), 318-326, published 2024-11-25. DOI: 10.1049/htl2.12099. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665798/pdf/

Abstract: Accurate intra-operative Remaining Surgery Duration (RSD) predictions allow anaesthetists to more accurately decide when to administer anaesthetic agents and drugs, as well as to notify hospital staff to send in the next patient. Therefore, RSD plays an important role in improved patient care and minimising surgical theatre costs via efficient scheduling. In endoscopic pituitary surgery, RSD prediction is uniquely challenging due to variable workflow sequences with a selection of optional steps contributing to high variability in surgery duration. This article presents PitRSDNet for predicting RSD during pituitary surgery, a spatio-temporal neural network model that learns from historical data focusing on workflow sequences. PitRSDNet integrates workflow knowledge into RSD prediction in two forms: (1) multi-task learning for concurrently predicting step and RSD; and (2) incorporating prior steps as context in temporal learning and inference. PitRSDNet is trained and evaluated on a new endoscopic pituitary surgery dataset with 88 videos to show competitive performance improvements over previous statistical and machine learning methods. The findings also highlight how PitRSDNet improves RSD precision on outlier cases utilising the knowledge of prior steps.
RGB to hyperspectral: Spectral reconstruction for enhanced surgical imaging
Tobias Czempiel, Alfie Roddan, Maria Leiloglou, Zepeng Hu, Kevin O'Neill, Giulio Anichini, Danail Stoyanov, Daniel Elson
Healthcare Technology Letters, 11(6), 307-317, published 2024-11-25. DOI: 10.1049/htl2.12098. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665794/pdf/

Abstract: This study investigates the reconstruction of hyperspectral signatures from RGB data to enhance surgical imaging, utilizing the publicly available HeiPorSPECTRAL dataset from porcine surgery and an in-house neurosurgery dataset. Various architectures based on convolutional neural networks (CNNs) and transformer models are evaluated using comprehensive metrics. Transformer models exhibit superior performance in terms of RMSE, SAM, PSNR and SSIM by effectively integrating spatial information to predict accurate spectral profiles, encompassing both visible and extended spectral ranges. Qualitative assessments demonstrate the capability to predict spectral profiles critical for informed surgical decision-making during procedures. Challenges associated with capturing both the visible and extended hyperspectral ranges are highlighted using the MAE, emphasizing the complexities involved. The findings open up the new research direction of hyperspectral reconstruction for surgical applications and clinical use cases in real-time surgical environments.
A related convolutional neural network for cancer diagnosis using microRNA data classification
Najmeh Sadat Jaddi, Salwani Abdullah, Say Leng Goh, Mohammad Kamrul Hasan
Healthcare Technology Letters, 11(6), 485-495, published 2024-11-22. DOI: 10.1049/htl2.12097. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665793/pdf/

Abstract: This paper develops a method for cancer classification from microRNA data using a convolutional neural network (CNN)-based model optimized by a genetic algorithm. The convolutional neural network has performed well in various recognition and perception tasks. This paper contributes to cancer classification using a union of two CNNs. The method's performance is boosted by the relationship between the CNNs and the exchange of knowledge between them. Moreover, communication between small CNNs reduces the need for large CNNs and, consequently, the computational time and memory usage, while preserving high accuracy. The proposed method is tested on a microRNA dataset containing the genomic information of 8129 patients for 29 different types of cancer with 1046 gene expressions. The classification accuracy of the selected genes obtained by the proposed approach is compared with the accuracy of 22 well-known classifiers on a real-world dataset. The classification accuracy for each cancer type is also ranked against the results of 77 classifiers reported in previous works. The proposed approach achieves 100% accuracy in 24 of the 29 classes, and in seven of the 29 classes it reaches 100% accuracy that no classifier in other studies has attained. Performance is further analysed using standard performance metrics.
Knowledge maps as a complementary tool to learn and teach surgical anatomy in virtual reality: A case study in dental implantology
Inês M. Lúcio, Bernardo G. de Faria, Renata G. Raidou, Luís Proença, Carlos Zagalo, José João Mendes, Pedro Rodrigues, Daniel Simões Lopes
Healthcare Technology Letters, 11(6), 289-300, published 2024-11-08. DOI: 10.1049/htl2.12094. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665781/pdf/

Abstract: A thorough understanding of surgical anatomy is essential for preparing and training medical students to become competent and skilled surgeons. While Virtual Reality (VR) has been shown to be a suitable interaction paradigm for surgical training, traditional anatomical VR models often rely on simple labels and arrows pointing to relevant landmarks. Yet, studies have indicated that such visual settings could benefit from knowledge maps, as such representations explicitly illustrate the conceptual connections between anatomical landmarks. In this article, a VR educational tool is presented, designed to explore the potential of knowledge maps as a complementary visual encoding for labeled 3D anatomy models. Focusing on surgical anatomy for implantology, it was investigated whether integrating knowledge maps within a VR environment could improve students' understanding and retention of complex anatomical relationships. The study involved 30 master's students in dentistry and 3 anatomy teachers, who used the tool and were subsequently assessed through surgical anatomy quizzes (measuring both completion times and scores) and subjective feedback (assessing user satisfaction, preferences, system usability, and task workload). The results showed that using knowledge maps in an immersive environment facilitates learning and teaching surgical anatomy applied to implantology, serving as a complementary tool to conventional VR educational methods.
A deep fusion-based vision transformer for breast cancer classification
Ahsan Fiaz, Basit Raza, Muhammad Faheem, Aadil Raza
Healthcare Technology Letters, 11(6), 471-484, published 2024-10-23. DOI: 10.1049/htl2.12093. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665795/pdf/

Abstract: Breast cancer is one of the most common causes of death in women in the modern world. Cancerous tissue detection in histopathological images relies on complex features related to tissue structure and staining properties. Convolutional neural network (CNN) models like ResNet50, Inception-V1, and VGG-16, while useful in many applications, cannot capture the patterns of cell layers and staining properties. Most previous approaches, such as stain normalization and instance-based vision transformers, either miss important features or do not process the whole image effectively. Therefore, a deep fusion-based vision Transformer model (DFViT) that combines CNNs and transformers for better feature extraction is proposed. DFViT captures local and global patterns more effectively by fusing RGB and stain-normalized images. Trained and tested on several datasets, such as BreakHis, breast cancer histology (BACH), and UCSC cancer genomics (UC), the results demonstrate outstanding accuracy, F1 score, precision, and recall, setting a new milestone in histopathological image analysis for diagnosing breast cancer.
A secure blockchain framework for healthcare records management systems
Mahmoud Ahmad Al-Khasawneh, Muhammad Faheem, Ala Abdulsalam Alarood, Safa Habibullah, Abdulrahman Alzahrani
Healthcare Technology Letters, 11(6), 461-470, published 2024-10-09. DOI: 10.1049/htl2.12092. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665786/pdf/

Abstract: Electronic health records are one of the essential components of health organizations. In recent years, there have been increased concerns about privacy and reputation regarding the storage and use of patient information. In this regard, the information provided as part of medical and health insurance, for instance, can be viewed as proof of social insurance and governance. Several problems in the past few decades regarding medical information management have threatened patient information privacy. In intelligent healthcare applications, the privacy of patients' data is one of the main concerns. As a result, blockchain is a pressing necessity, as it can enhance transparency and security in medical applications. Accordingly, this paper uses the design science method to propose a secure blockchain framework for healthcare records management systems. The proposed framework comprises five components: a blockchain network, smart contracts, privacy key management, data encryption, and integration with healthcare information technology. With the proposed framework, healthcare organizations can manage healthcare information securely and privately. Additionally, a secure storage system for electronic records is proposed to meet these organizations' needs.