Title: Deep regression 2D-3D ultrasound registration for liver motion correction in focal tumour thermal ablation
Authors: Shuwei Xing, Derek W. Cool, David Tessier, Elvis C. S. Chen, Terry M. Peters, Aaron Fenster
Healthcare Technology Letters 12(1), published 2025-02-17. DOI: 10.1049/htl2.12117. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12117

Abstract: Liver tumour ablation procedures require accurate placement of the needle applicator at the tumour centroid. The lower cost and real-time nature of ultrasound (US) give it advantages over computed tomography for applicator guidance; however, in some patients liver tumours may be occult on US, and tumour mimics can make lesion identification challenging. Image registration techniques can aid in interpreting anatomical details and identifying tumours, but their clinical application has been hindered by the trade-off between alignment accuracy and runtime performance, particularly when compensating for liver motion due to patient breathing or movement. We therefore propose a 2D-3D US registration approach to enable intra-procedural alignment that mitigates errors caused by liver motion. Specifically, our approach can correlate imbalanced 2D and 3D US image features and uses a continuous 6D rotation representation to enhance the model's training stability. The dataset was divided into 2388, 196, and 193 image pairs for training, validation, and testing, respectively. Our approach achieved a mean Euclidean distance error of 2.28 mm ± 1.81 mm and a mean geodesic angular error of 2.99° ± 1.95°, with a runtime of […].
Title: Writing the Signs: An Explainable Machine Learning Approach for Alzheimer's Disease Classification from Handwriting
Authors: Ngoc Truc Ngan Ho, Paulina Gonzalez, Gideon K. Gogovi
Healthcare Technology Letters 12(1), published 2025-02-13. DOI: 10.1049/htl2.70006. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.70006

Abstract: Alzheimer's disease is a global health challenge, emphasizing the need for early detection to enable timely intervention and improve outcomes. This study analyzes handwriting data from individuals with and without Alzheimer's to identify predictive features across copying, graphic, and memory-based tasks. Machine learning models, including Random Forest, Bootstrap Aggregating (Bagging), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Adaptive Boosting (AdaBoost), and Gradient Boosting, were applied to classify patients, with SHapley Additive exPlanations (SHAP) enhancing model interpretability. Time-related features were crucial in copying and graphic tasks, reflecting cognitive processing speed, while pressure-related features were significant in memory tasks, indicating recall confidence. Simpler graphic tasks showed strong discriminatory power, aiding early detection. Performance metrics demonstrated model effectiveness: for memory tasks, Random Forest achieved the highest accuracy (0.840 ± 0.038), while Bagged SVC was the lowest (0.617 ± 0.046). Copying tasks recorded a peak accuracy of 0.804 ± 0.075 with Gradient Boost and a low of 0.566 ± 0.032 for Bagged SVC. Graphic tasks reached 0.799 ± 0.041 with Gradient Boost and 0.643 ± 0.071 with AdaBoost. For all tasks combined, Random Forest excelled (0.854 ± 0.033) […].
{"title":"Identifying factors shaping the behavioural intention of Nepalese youths to adopt digital health tools","authors":"Sujal Mani Timsina, Ujjwal Bhattarai","doi":"10.1049/htl2.70005","DOIUrl":"https://doi.org/10.1049/htl2.70005","url":null,"abstract":"<p>The digitalization of healthcare has gained global importance, especially post-COVID-19, yet remains a challenge in developing countries due to the slow adoption of digital health tools. This study aims to identify major predictors impacting the behavioural intention of Nepalese youths to adopt digital health tools by utilizing the framework based on the extended unified theory of acceptance and use of technology (UTAUT-2). The cross-sectional data from 280 respondents was collected from youths (i.e., aged 16-40) in the Kathmandu Valley and were analyzed through PLS-SEM. Most of the respondents were using smartwatches followed by blood pressure monitors and pulse oximeters. The findings revealed hedonic motivation as the strongest predictor of behavioural intention to use digital health tools followed by facilitating conditions, social influence, habit, and performance expectancy. The behavioural intention significantly influenced actual usage behaviour. Additionally, behavioural intention mediated the relationship between the above-mentioned five constructs and usage behaviour, except for effort expectancy and price value. The study emphasizes the role of major predictors such as facilitating conditions in shaping the intention of youths to adopt digital health tools providing insights for government, hospitals, and developers to understand consumer perceptions and motivations.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.70005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143362488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image synthesis with class-aware semantic diffusion models for surgical scene segmentation","authors":"Yihang Zhou, Rebecca Towning, Zaid Awad, Stamatia Giannarou","doi":"10.1049/htl2.70003","DOIUrl":"10.1049/htl2.70003","url":null,"abstract":"<p>Surgical scene segmentation is essential for enhancing surgical precision, yet it is frequently compromised by the scarcity and imbalance of available data. To address these challenges, semantic image synthesis methods based on generative adversarial networks and diffusion models have been developed. However, these models often yield non-diverse images and fail to capture small, critical tissue classes, limiting their effectiveness. In response, a class-aware semantic diffusion model (CASDM), a novel approach which utilizes segmentation maps as conditions for image synthesis to tackle data scarcity and imbalance is proposed. Novel class-aware mean squared error and class-aware self-perceptual loss functions have been defined to prioritize critical, less visible classes, thereby enhancing image quality and relevance. Furthermore, to the authors' knowledge, they are the first to generate multi-class segmentation maps using text prompts in a novel fashion to specify their contents. These maps are then used by CASDM to generate surgical scene images, enhancing datasets for training and validating segmentation models. This evaluation assesses both image quality and downstream segmentation performance, demonstrates the strong effectiveness and generalisability of CASDM in producing realistic image-map pairs, significantly advancing surgical scene segmentation across diverse and challenging datasets.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11783686/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143081473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Reliability and validity of computer vision-based markerless human pose estimation for measuring hip and knee range of motion
Authors: Thomas Hellstén, Jari Arokoski, Jonny Karlsson, Leena Ristolainen, Jyrki Kettunen
Healthcare Technology Letters 12(1), published 2025-01-31. DOI: 10.1049/htl2.70002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11783685/pdf/

Abstract: Telerehabilitation requires accurate joint range of motion (ROM) measurement methods. The aim of this study was to evaluate the reliability and validity of a computer vision (CV)-based markerless human pose estimation (HPE) application for measuring active hip and knee ROM. The joint ROM of 30 healthy young adults (10 females, 20 males) aged 20–33 years (mean: 22.9 years) was measured, and test–retests were assessed for reliability. For validity evaluation, the CV-based markerless HPE application used in this study was compared with an identical reference picture frame. The intraclass correlation coefficient (ICC) for the CV-based markerless HPE application was 0.93 for active hip inner rotation, 0.83 for outer rotation, 0.82 for flexion, 0.82 for extension, and 0.74 for knee flexion. Correlations (r) between the two measurement methods were 0.99 for active hip inner rotation, 0.98 for outer rotation, 0.87 for flexion, 0.85 for extension, and 0.90 for knee flexion. This study highlights the potential of a CV-based markerless HPE application as a reliable and valid tool for measuring hip and knee joint ROM. It could offer an accessible solution for telerehabilitation, enabling ROM monitoring.
{"title":"Incremental shape integration with inter-frame shape consistency using neural SDF for a 3D endoscopic system","authors":"Ryo Furukawa, Hiroshi Kawasaki, Ryusuke Sagawa","doi":"10.1049/htl2.70001","DOIUrl":"10.1049/htl2.70001","url":null,"abstract":"<p>3D measurement for endoscopic systems has been largely demanded. One promising approach is to utilize active-stereo systems using a micro-sized pattern-projector attached to the head of an endoscope. Furthermore, a multi-frame integration is also desired to enlarge the reconstructed area. This paper proposes an incremental optimization technique of both the shape-field parameters and the positional parameters of the cameras and projectors. The method assumes that the input data is temporarily sequential images, that is, endoscopic videos, and the relative positions between the camera and the projector may vary continuously. As solution, a differential volume rendering algorithm in conjunction with neural signed distance field (NeuralSDF) representation is proposed to simultaneously optimize the 3D scene and the camera/projector poses. Also, an incremental optimization strategy where the optimized frames are gradually increased is proposed. In the experiment, the proposed method is evaluated by performing 3D reconstruction using both synthetic and real images, proving the effectiveness of our method.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11780497/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143068611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: System for assistance in ultrasound-guided percutaneous hepatic interventions using augmented reality: First steps
Authors: Lucía Salazar Carrasco, Ignacio Sánchez-Varo, Daniel Caballero Jorna, Amaia Iribar-Zabala, Álvaro Bertelsen-Simonetti, Juan A. Sánchez-Margallo, Francisco M. Sánchez-Margallo
Healthcare Technology Letters 12(1), published 2025-01-29. DOI: 10.1049/htl2.12110. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12110

Abstract: This study aims to develop a system based on mixed reality imaging to assist percutaneous ultrasound-guided liver interventions. A liver phantom, an ultrasound scanner with an abdominal probe, an electromagnetic tracking system for localization of the ultrasound probe, and the percutaneous needle were used to develop the system. A gelatin liver phantom was fabricated, including a set of lesions (with agarose, glycerol, and Sephadex) and artificial blood vessels 3D printed with thermoplastic polyurethane (TPU) elastic fibres. Intraoperative ultrasound images from a BK5000 ultrasound scanner were acquired using a video capture system and transmitted to 3D Slicer. The NDI Aurora electromagnetic tracking system, coupled to the ultrasound probe and the intracorporeal needle, was used for real-time trajectory tracking, providing spatial localization information. These images are then sent to the HoloLens augmented reality device as the primary visualization system. This work lays the groundwork for a more comprehensive system to assist ultrasound-guided percutaneous liver interventions and to improve the accuracy and safety of these procedures. The use of mixed reality imaging technology allows better integration of image-guided surgery systems, such as the one presented in this work, into real clinical environments, closer to the patient.

Title: ELW-CNN: An extremely lightweight convolutional neural network for enhancing interoperability in colon and lung cancer identification using explainable AI
Authors: Shaiful Ajam Opee, Arifa Akter Eva, Ahmed Taj Noor, Sayem Mustak Hasan, M. F. Mridha
Healthcare Technology Letters 12(1), published 2025-01-22. DOI: 10.1049/htl2.12122. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11751720/pdf/

Abstract: Cancer is a condition in which cells in the body grow uncontrollably, often forming tumours and potentially spreading to other areas of the body. It remains one of the most hazardous conditions in medicine, and every year many people die because cancer is not identified at an early stage. Accurate and early identification is therefore necessary to treat cancer effectively and save lives. Various machine learning and deep learning models are effective for cancer identification; however, their effectiveness is limited by small dataset sizes, poor data quality, interclass variation between lung squamous cell carcinoma and adenocarcinoma, difficulties with mobile-device deployment, and the lack of image-level and individual-level accuracy tests. To overcome these difficulties, this study proposes an extremely lightweight convolutional neural network that achieved 98.16% accuracy on a large lung and colon dataset, and individually achieved 99.02% for lung cancer and 99.40% for colon cancer. The proposed lightweight model uses only 70 thousand parameters, which is highly effective for real-time solutions. Explainability methods such as Grad-CAM and symmetric explanation highlight the specific regions of the input that affect the model's decisions, helping to identify potential challenges. The proposed models will aid medical professionals in developing an automated and accurate approach for detecting various types of colon and lung cancer.

Title: Guest editorial: Papers from the 18th joint workshop on Augmented Environments for Computer Assisted Interventions (AE-CAI) at MICCAI 2024: Guest editors' foreword
Authors: Cristian A. Linte, Ziv Yaniv, Elvis Chen, Simon Drouin, Marta Kersten-Oertel, Jonathan McLeod, Duygu Sarikaya, Jiangliu Wang
Healthcare Technology Letters 12(1), published 2025-01-20. DOI: 10.1049/htl2.70000. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11744466/pdf/

Abstract: Welcome to this special issue of Wiley's IET Healthcare Technology Letters (HTL) dedicated to the 2024 edition of the augmented environments for computer-assisted interventions (AE-CAI), computer assisted and robotic endoscopy (CARE), and context-aware operating theatres (OR 2.0) joint workshop. We are pleased to present the proceedings of this exciting scientific gathering held in conjunction with the medical image computing and computer-assisted interventions (MICCAI) conference on October 6th, 2024 in Marrakech, Morocco.

Computer-assisted interventions (CAI) is a field of research and practice where medical interventions are supported by computer-based tools and methodologies. CAI systems enable more precise, safer, and less invasive interventional treatments by providing enhanced planning, real-time visualization, instrument guidance and navigation, as well as situation awareness and cognition. These research domains have been motivated by the development of medical imaging and its evolution from being primarily a diagnostic modality towards its use as a therapeutic and interventional aid, driven by the need to streamline the diagnostic and therapeutic processes via minimally invasive visualization and therapy. To promote this field of research, our workshop seeks to showcase papers that disseminate novel theoretical algorithms, technical implementations, and the development and validation of integrated hardware and software systems in the context of their dedicated clinical applications. The workshop attracts researchers in computer science, biomedical engineering, computer vision, robotics, and medical imaging.

The 2024 edition of AE-CAI | CARE | OR 2.0 was a joint event between the series of MICCAI-affiliated AE-CAI workshops founded in 2006 and now on its 18th edition, the CARE workshop series, now on its 11th edition, and the OR 2.0 workshop, now on its 6th edition. This year's edition of the workshop featured 24 accepted submissions and reached more than 70 registrants, not including the members of the organizing and program committees, making AE-CAI | CARE | OR 2.0 one of the best received and best attended workshops with more than a decade-long standing tradition at MICCAI.

On the above note of "more than a decade-long standing tradition at MICCAI", it turns out that AE-CAI, albeit several variations in name, has been running for a while now in some shape or form and is, in fact, MICCAI's longest-standing workshop! Let us start with a historical note for those less familiar with our journey!

It all started in 2006 in Copenhagen under the name of AMI-ARCS, which pointed to something along the lines of augmented medical imaging and augmented reality for computer-assisted surgery, and it ran under that name for three more years, in Brisbane (2007), New York (2008), and London (2009). The 2010 edition (Beijing) was co-hosted with the MIAR (medical imaging and augmented reality) conference. The workshop was then rebr[…]

Title: iSurgARy: A mobile augmented reality solution for ventriculostomy in resource-limited settings
Authors: Zahra Asadi, Joshua Pardillo Castillo, Mehrdad Asadi, David S. Sinclair, Marta Kersten-Oertel
Healthcare Technology Letters 12(1), published 2025-01-15. DOI: 10.1049/htl2.12118. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11733309/pdf/

Abstract: Global disparities in neurosurgical care necessitate innovations addressing affordability and accuracy, particularly for critical procedures like ventriculostomy. This intervention, vital for managing life-threatening increases in intracranial pressure, is associated with catheter misplacement rates exceeding 30% when a freehand technique is used. Such misplacements have severe consequences, including haemorrhage, infection, prolonged hospital stays, and even morbidity and mortality. To address this issue, a novel, stand-alone mobile-based augmented reality system (iSurgARy) is presented, aimed at significantly improving ventriculostomy accuracy, particularly in resource-limited settings such as those in low- and middle-income countries. iSurgARy uses landmark-based registration, taking advantage of light detection and ranging (LiDAR), to provide accurate surgical guidance. To evaluate iSurgARy, a two-phase user study was conducted. Initially, usability and learnability were assessed with novice participants using the system usability scale (SUS), and their feedback was incorporated to refine the application. In the second phase, human-computer interaction and clinical domain experts evaluated the application, measuring root mean square error, SUS, and NASA task load index metrics to assess accuracy, usability, and cognitive workload, respectively.