Healthcare Technology Letters: Latest Articles

Respiratory Rate Measurement Using Mobile Applications in Healthcare Settings: A Scoping Review
IF 3.3
Healthcare Technology Letters Pub Date : 2026-01-28 DOI: 10.1049/htl2.70035
Lachlan Sallabank, James Oswald, Sian Willett, James Kelleher, Brian Haskins
{"title":"Respiratory Rate Measurement Using Mobile Applications in Healthcare Settings: A Scoping Review","authors":"Lachlan Sallabank,&nbsp;James Oswald,&nbsp;Sian Willett,&nbsp;James Kelleher,&nbsp;Brian Haskins","doi":"10.1049/htl2.70035","DOIUrl":"10.1049/htl2.70035","url":null,"abstract":"<p>Respiratory rate (RR) is a strong indicator of clinical trajectory and forms the basis of patient care and assessment. However, clinicians often face barriers to easily obtaining a RR without inefficient methods or costly technology. To remedy this, several phone applications have emerged where clinicians can tap out each breath to calculate a RR. We aimed to map the available evidence for tap-per-breath applications used in healthcare settings. We searched for articles using multiple databases, including primary research articles that evaluated tap-per-breath apps in healthcare settings. 14 articles were selected for this review, mostly cross-sectional and hospital based. Most applications reported high usability and efficiency, although results of accuracy were mixed across the included literature. Median-based apps were more often an accurate measure of RR, however more research is required. Articles were commonly limited in generalisability due to poorly defined reference standards, small sample sizes, or using retrospective video recordings for patient assessment. Studies showed favourable usability and efficiency across the literature, with median-based apps demonstrating greater consistency and accuracy of RR measurements. Though the scope of this review and limited evidence restrict any far-reaching clinical implications until further evidence emerges.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"13 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12850432/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Wavelet-Based Denoising Optimization for Endoscopic Gastric Slow-Wave Recordings
IF 3.3
Healthcare Technology Letters Pub Date : 2026-01-24 DOI: 10.1049/htl2.70052
Peter Tremain, Jarrah M. Dowrick, Leo K. Cheng, James W. McKeage, Carolina Saavedra, Julio Sotelo, Timothy R. Angeli-Gordon
{"title":"Wavelet-Based Denoising Optimization for Endoscopic Gastric Slow-Wave Recordings","authors":"Peter Tremain,&nbsp;Jarrah M. Dowrick,&nbsp;Leo K. Cheng,&nbsp;James W. McKeage,&nbsp;Carolina Saavedra,&nbsp;Julio Sotelo,&nbsp;Timothy R. Angeli-Gordon","doi":"10.1049/htl2.70052","DOIUrl":"10.1049/htl2.70052","url":null,"abstract":"<p>New, minimally invasive, endoscopic methods for recording gastric bioelectrical slow waves from the mucosal surface are emerging to address the current limitations of invasive recordings. Filtering techniques for these new methods have relied on protocols developed for invasive recordings. Updated signal processing techniques, such as discrete wavelet transformation (DWT), optimised for endoscopic recording conditions, promise more effective noise removal for these signals. Synthetic signals were constructed using averaged slow-wave data and noise segmented from existing endoscopic gastric bioelectrical recordings from 12 patients. DWT was performed on the synthetic signals using 989 different parameter combinations to remove noise. Savitzky-Golay (SG) filtering was also performed on the synthetic signals to provide a comparative baseline for classical filter performance. Combined SG filtering and DWT was then investigated using the top-performing DWT parameters. Filter performance was evaluated using six established metrics, along with the inspection of the power spectral density (PSD) calculated on sample signals. Statistical significance was analysed using a paired two-tailed Student's <i>t</i>-test or Wilcoxon signed-rank test. For signals with moderate signal-to-noise ratio (SNR), DWT-based methods outperformed traditional SG filtering in all metrics considered: signal-distortion ratio (0.84 ± 0.45 vs. 1.34 ± 0.99), root-mean-square error (280 ± 150 µV vs. 450 ± 330 µV), percentage root-mean-square difference (78 ± 42% vs. 113 ± 83%), noise-correction ratio (0.94±0.17 vs. 0.50±0.26), SNR improvement (5.9±3.0 dB vs. 2.1±2.7 dB) and filter performance metric (0.96 ± 0.42 vs. 1.8 ± 1.2). All <i>p</i>-values were &lt;0.05. The combination of SG filtering with DWT provided improved signal denoising when compared to SG filtering alone, whilst offering reduced aggressiveness when compared to DWT alone. Inspection of the calculated PSDs for sample signals reaffirmed these results. The results presented in this study indicate that for endoscopic gastric bioelectrical recordings, with moderate SNR, modern denoising techniques based on DWT can outperform traditional SG filtering. More efficient noise removal using DWT can allow for better automated detection of slow-wave activations and more reliable, efficient data processing.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"13 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2026-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12831173/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparative Evaluation of Ultrasound-Guided Peripheral Intravenous Catheter Insertion Techniques in a Virtual Reality Simulator
IF 3.3
Healthcare Technology Letters Pub Date : 2026-01-06 DOI: 10.1049/htl2.70040
Alejandro Olivares, Canelle Schuhler-Husson, Yahia Zine, Simon Drouin
{"title":"Comparative Evaluation of Ultrasound-Guided Peripheral Intravenous Catheter Insertion Techniques in a Virtual Reality Simulator","authors":"Alejandro Olivares,&nbsp;Canelle Schuhler-Husson,&nbsp;Yahia Zine,&nbsp;Simon Drouin","doi":"10.1049/htl2.70040","DOIUrl":"https://doi.org/10.1049/htl2.70040","url":null,"abstract":"<p>Peripheral intravenous catheter (PIVC) insertion is a common yet challenging procedure. Although ultrasound guidance improves procedural accuracy and patient outcome, its complexity limits its routine adoption to highly experienced clinicians. This paper introduces a virtual reality (VR) simulator developed specifically for training in ultrasound-guided PIVC insertions. This study aims to validate the simulator's realism and relevance through face, content, and construct assessments, and to demonstrate its utility as a platform for comparing various approaches to PIVC insertion. Thirty participants from diverse medical backgrounds and levels of expertise completed three scenarios, each featuring a different procedural technique, within the simulator's controlled virtual environment. The simulator demonstrated strong face and content validity, with participants rating its realism at 7.1/10 and enjoyment at 8.2/10. Performance data showed that expert participants maintained higher success rates and performance across all procedural scenarios, supporting the simulator's construct validity. In the standard approach scenario, novices required 230.91 <span></span><math>\u0000 <semantics>\u0000 <mo>±</mo>\u0000 <annotation>$pm$</annotation>\u0000 </semantics></math> 158.77 s to complete the task and achieved only a 45% success rate compared to experts' 95.48 <span></span><math>\u0000 <semantics>\u0000 <mo>±</mo>\u0000 <annotation>$pm$</annotation>\u0000 </semantics></math> 65.74 s and 80% success rate. In the procedural scenario involving an alignment assistance device, where needle insertion was aligned with the ultrasound image plane, novice success rates increased to 75% and the number of attempts decreased from 8.95 <span></span><math>\u0000 <semantics>\u0000 <mo>±</mo>\u0000 <annotation>$pm$</annotation>\u0000 </semantics></math> 6.69 to 2.75 <span></span><math>\u0000 <semantics>\u0000 <mo>±</mo>\u0000 <annotation>$pm$</annotation>\u0000 </semantics></math> 2.67, narrowing the performance gap with experts. These findings highlight the simulator's potential not only as an effective training tool but also as a platform for the objective evaluation of different procedural techniques.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"13 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.70040","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145904901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ensemble Machine Learning Approaches for Automated Fungal Keratitis Diagnosis Using In Vivo Confocal Microscopy Images
IF 3.3
Healthcare Technology Letters Pub Date : 2025-12-19 DOI: 10.1049/htl2.70051
Sowmya Kamath S., Shikha Reji, Vaibhava Lakshmi, Supreetha R., Pratiksha Gawas, Veena Mayya, Manali Hazarika
{"title":"Ensemble Machine Learning Approaches for Automated Fungal Keratitis Diagnosis Using In Vivo Confocal Microscopy Images","authors":"Sowmya Kamath S.,&nbsp;Shikha Reji,&nbsp;Vaibhava Lakshmi,&nbsp;Supreetha R.,&nbsp;Pratiksha Gawas,&nbsp;Veena Mayya,&nbsp;Manali Hazarika","doi":"10.1049/htl2.70051","DOIUrl":"10.1049/htl2.70051","url":null,"abstract":"<p>Fungal keratitis (FK) is a severe ocular infection that can lead to significant vision problems or blindness if not diagnosed and treated promptly. Early and accurate detection of FK is essential for effective management. Traditional diagnostic methods are often time-consuming and require specialized laboratory resources. Recently, advances in artificial intelligence and computer vision have enabled automated diagnosis of FK using slit-lamp images. In this article, a comprehensive evaluation of state-of-the-art techniques adopted for classifying FK using in vivo confocal microscopy (IVCM) images is presented. Detailed experiments and performance evaluation of various machine learning models are systematically performed, with a focus on evaluating the effect of diverse techniques for image processing, data augmentation, hyperparameters and model finetuning to assess each model's strengths and limitations. Experiments revealed that applying green channel preprocessing with a 12-feature set achieved 99% accuracy using Random Forest, highlighting its effectiveness in FK detection, while complex techniques like histogram modelling reduced accuracy to 64%. Robust models like AdaBoost and RUSBoost maintained high F1-scores, demonstrating adaptability to imbalanced medical datasets, and to real-world clinical scenarios.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12717025/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145805804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PRISM: Past-Regularized Iterative Self-Distillation With Momentum for Polyp Segmentation
IF 3.3
Healthcare Technology Letters Pub Date : 2025-12-16 DOI: 10.1049/htl2.70050
Tugberk Erol, Tuba Caglikantar, Duygu Sarikaya
{"title":"PRISM: Past-Regularized Iterative Self-Distillation With Momentum for Polyp Segmentation","authors":"Tugberk Erol,&nbsp;Tuba Caglikantar,&nbsp;Duygu Sarikaya","doi":"10.1049/htl2.70050","DOIUrl":"10.1049/htl2.70050","url":null,"abstract":"<p>Polyps are abnormal tissue growths in the colon that may develop into colorectal cancer if left undetected. Accurate segmentation in medical imaging is essential for early diagnosis and treatment. Although deep learning has greatly improved polyp segmentation, its dependence on large annotated datasets and substantial computational resources hampers generalization across diverse clinical settings. To overcome these challenges, we propose PRISM, a momentum-based self-distillation method that improves segmentation performance without introducing additional inference cost. Instead of storing or reusing past predictions, PRISM constructs a temporally smoothed teacher model by applying an exponential moving average (EMA) to the model's weights throughout training. This momentum-based teacher provides stable and adaptive supervision signals that co-evolve with the student model. We evaluate PRISM on colonoscopy datasets collected from five distinct medical centres and validate its generalization on an unseen independent dataset. PRISM achieves a Dice score of 0.81 and an IoU of 0.75, outperforming baseline and conventional self-distillation methods. Ablation studies confirm the effectiveness of the EMA-based teacher model in improving segmentation accuracy. PRISM offers a computationally efficient and generalizable solution for polyp segmentation tasks. The code is available at: https://github.com/TugberkErol/PRISM.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12706544/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145775856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Markerless Tracking of Robotic Surgical Instruments With Head Mounted Display for Augmented Reality Applications
IF 3.3
Healthcare Technology Letters Pub Date : 2025-12-16 DOI: 10.1049/htl2.70044
Nicholas Greene, Aoqi Long, Yonghao Long, Zheng Han, Qi Dou, Peter Kazanzides
{"title":"Markerless Tracking of Robotic Surgical Instruments With Head Mounted Display for Augmented Reality Applications","authors":"Nicholas Greene,&nbsp;Aoqi Long,&nbsp;Yonghao Long,&nbsp;Zheng Han,&nbsp;Qi Dou,&nbsp;Peter Kazanzides","doi":"10.1049/htl2.70044","DOIUrl":"10.1049/htl2.70044","url":null,"abstract":"<p>In robotic-assisted laparoscopic surgery, an assistant surgeon stands at the bedside assisting the intervention, while the surgeon sits at the console teleoperating the robot. Tasks for the assistant include navigating new instruments into the surgeon's field-of-view and passing in or retracting materials from the body using hand-held tools. We previously developed <i>ARssist</i>, an augmented reality application based on an optical see-through head-mounted display (HMD), to aid the assistant. Localization of the HMD with respect to the robot was achieved via the attachment of markers. In this paper, we propose a novel markerless tracking method for robotic instruments using a HoloLens 2 HMD. We first run off-the-shelf YOLOv11 and SAMURAI (an adaptation of Segment Anything 2) networks to detect instrument primitives (shaft lines and keypoints). We then recover full 6D poses via a geometrically interpretable pipeline combining perspective-n-point (PnP) and a multi-view least-squares optimization. We experimentally compare the markerless tracking accuracy to a baseline marker-based tracking solution, and show similar instrument tip accuracy. This suggests that the markerless method is an acceptable substitute to marker-based tracking for this augmented reality application, while avoiding workflow issues with sterilizing and attaching markers.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12706539/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145775777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-Time Tool Detection in Laparoscopic Datasets for Surgical Training in Low-Resource Settings
IF 3.3
Healthcare Technology Letters Pub Date : 2025-12-10 DOI: 10.1049/htl2.70045
Omar Choudhry, Sharib Ali, Chandra Shekhar Biyani, Dominic Jones
{"title":"Real-Time Tool Detection in Laparoscopic Datasets for Surgical Training in Low-Resource Settings","authors":"Omar Choudhry,&nbsp;Sharib Ali,&nbsp;Chandra Shekhar Biyani,&nbsp;Dominic Jones","doi":"10.1049/htl2.70045","DOIUrl":"10.1049/htl2.70045","url":null,"abstract":"&lt;p&gt;In low-resource settings, there is a critical need for skilled surgeons. Alternative training processes that include computer-assisted surgical skill evaluation are essential to address this gap. Using tool detection, surgical videos can be leveraged to derive insights into surgical skill assessment. However, state-of-the-art laparoscopic tool detection methods usually have more complex architectures tailored for in vivo data, which suffer from challenges such as smoke, occlusion, bleeding, etc., which are absent from in vitro training contexts. Thus, this paper tests multiple anchor-based and anchor-free, convolution- and transformer-based, traditional (non-surgical domain-specific) computer vision deep learning state-of-the-art models. With various hardware configurations on a newly curated in-house laparoscopic box-trainer dataset, we emphasise real-time performance on low-cost embedded devices. Overall, the anchor-free YOLOv8-X model was the most accurate, achieving &lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;mAP&lt;/mi&gt;\u0000 &lt;mn&gt;50&lt;/mn&gt;\u0000 &lt;/msub&gt;\u0000 &lt;annotation&gt;${rm mAP}_{50}$&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt; of 99.5% and &lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;mAP&lt;/mi&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mn&gt;50&lt;/mn&gt;\u0000 &lt;mo&gt;:&lt;/mo&gt;\u0000 &lt;mn&gt;95&lt;/mn&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;/msub&gt;\u0000 &lt;annotation&gt;${rm mAP}_{50:95}$&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt; of 96.6% with an inference time of 23.5 ms/&lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;mo&gt;≈&lt;/mo&gt;\u0000 &lt;annotation&gt;$approx$&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt;42.6 FPS on an NVIDIA Jetson Orin Nano 8GB (comparable low-cost hardware which could be expected to run real-time skill assessment methods for surgical training boot camps in a resource-constrained environment). The most efficient model was YOLOv11-N, providing 3.1 ms/&lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;mo&gt;≈&lt;/mo&gt;\u0000 &lt;annotation&gt;$approx$&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt;322.6 FPS with a performance difference of +0% &lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;mAP&lt;/mi&gt;\u0000 &lt;mn&gt;50&lt;/mn&gt;\u0000 &lt;/msub&gt;\u0000 &lt;annotation&gt;${rm mAP}_{50}$&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt; and –2.1% &lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;mAP&lt;/mi&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mn&gt;50&lt;/mn&gt;\u0000 &lt;mo&gt;:&lt;/mo&gt;\u0000 &lt;mn&gt;95&lt;/mn&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;/msub&gt;\u0000 &lt;annotation&gt;${rm mAP}_{50:95}$&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt;. 
T","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.70045","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145739690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving SLAM-Based Navigation in Flexible Ureteroscopy by Kidney Stone and Surgical Tool Segmentation
IF 3.3
Healthcare Technology Letters Pub Date : 2025-12-09 DOI: 10.1049/htl2.70038
Laura Oliva-Maza, Florian Steidle, Julian Klodmann, Klaus H. Strobl, Arkadiusz Miernik, Rudolph Triebel
{"title":"Improving SLAM-Based Navigation in Flexible Ureteroscopy by Kidney Stone and Surgical Tool Segmentation","authors":"Laura Oliva-Maza,&nbsp;Florian Steidle,&nbsp;Julian Klodmann,&nbsp;Klaus H. Strobl,&nbsp;Arkadiusz Miernik,&nbsp;Rudolph Triebel","doi":"10.1049/htl2.70038","DOIUrl":"10.1049/htl2.70038","url":null,"abstract":"<p>Flexible ureteroscopy is a widely used surgical procedure for diagnosing and treating various urinary tract conditions, particularly kidney stones. Ensuring the complete extraction of all stones is crucial to prevent recurrence and the need for auxiliary interventions. Visual SLAM-based navigation systems have been proposed to assist surgeons by simultaneously estimating the 3D structure of the kidney and tracking the ureteroscope's tip position. However, most existing solutions assume a completely static environment, which does not account for the intraoperative situation. In this study, we extend the work of Oliva Maza et al. by incorporating real-time visual segmentation of kidney stones and surgical tools using either YOLOv7-E6E and segment anything or YOLO11m-seg. Our method discards pixels corresponding to instruments due to their inherent dynamic nature, while kidney stone pixels are incorporated into the SLAM framework but classified as potentially dynamic map points, allowing for their disappearance. This refinement enhances the robustness and the accuracy of ureteroscope position estimation for surgical navigation. To evaluate our approach, we recorded multiple datasets for both segmentation and ureteroscope pose estimation. Experimental results show an average improvement in ureteroscope pose estimation of 35.4% when using YOLOv7-E6E with SAM, and 52.49% when using YOLO11m-seg.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12687663/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145726495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generalized Few-Shot MM-Former For Surgical Scene Panoptic Segmentation
IF 3.3
Healthcare Technology Letters Pub Date : 2025-12-09 DOI: 10.1049/htl2.70047
Xiaoyan Zhang, Liming Wu, Zhichen Wang, Jingyi Feng, Yichen Zhu, Ziyu Zhou, Ye Tao, Jiquan Liu, Huilong Duan
{"title":"Generalized Few-Shot MM-Former For Surgical Scene Panoptic Segmentation","authors":"Xiaoyan Zhang,&nbsp;Liming Wu,&nbsp;Zhichen Wang,&nbsp;Jingyi Feng,&nbsp;Yichen Zhu,&nbsp;Ziyu Zhou,&nbsp;Ye Tao,&nbsp;Jiquan Liu,&nbsp;Huilong Duan","doi":"10.1049/htl2.70047","DOIUrl":"10.1049/htl2.70047","url":null,"abstract":"<p>Panoptic segmentation is crucial for surgical scene understanding but remains a significant challenge. This is particularly due to the high cost of annotation, which often results in class imbalance in existing datasets, leading to poor performance on categories with limited samples. To address it, we proposed a generalized few-shot MM-former, which is a three-stage framework: (1) We build surgical image-text pairs from the CholecT50 dataset. Using these data, we fine-tune the stable diffusion model to extract multi-scale, image-text fused representations. (2) We train an Mask2Former-based panoptic segmentation decoder on the base classes with sufficient samples, and use it to transform the representations of each image to a set of mask proposals with category predictions. (3) We propose an N-to-M mask matching method. Given a small set of samples from N novel classes, we extract their features as guidance to match M mask proposals, enabling identification of all novel class objects in a single pass. Specifically, each matched proposal is updated with the most likely novel class, while the others keep original predictions. Finally, all proposals are merged to output the results. On CholecPanSeg, our newly built surgical panoptic dataset, the method achieves outstanding results under limited data, surpassing previous approaches.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12686833/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145726585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Monocular Vision-Based Endoscopic Sinus Navigation: A SLAM Driven Approach With CT Integration
IF 3.3
Healthcare Technology Letters Pub Date : 2025-12-09 DOI: 10.1049/htl2.70046
Roger D. Soberanis-Mukul, Chin Hang Ryan Chan, Ryan Chou, Jan Emily Mangulabnan, Lalithkumar Seenivasan, Xingyu Chen, Mohammad Salehizadeh, S. Swaroop Vedula, Russell H. Taylor, Masaru Ishii, Gregory Hager, Mathias Unberath
{"title":"Monocular Vision-Based Endoscopic Sinus Navigation: A SLAM Driven Approach With CT Integration","authors":"Roger D. Soberanis-Mukul,&nbsp;Chin Hang Ryan Chan,&nbsp;Ryan Chou,&nbsp;Jan Emily Mangulabnan,&nbsp;Lalithkumar Seenivasan,&nbsp;Xingyu Chen,&nbsp;Mohammad Salehizadeh,&nbsp;S. Swaroop Vedula,&nbsp;Russell H. Taylor,&nbsp;Masaru Ishii,&nbsp;Gregory Hager,&nbsp;Mathias Unberath","doi":"10.1049/htl2.70046","DOIUrl":"10.1049/htl2.70046","url":null,"abstract":"<p>Surgical navigation is critical in sinus surgery to enhance the surgeon's spatial awareness and improve precision, particularly around occluded critical structures. While external tracker-based navigation systems exist, vision-based solutions are preferred for being less intrusive and for enabling endoscopic image analysis to assist surgeons. However, monocular endoscopy navigation faces challenges associated with monocular reconstruction and camera pose estimation. This paper presents a proof of concept for monocular vision-based sinus navigation that utilizes only preoperative CT data and the endoscope video stream to navigate the sinus anatomy. We developed a vision-based navigation system that incorporates a SLAM algorithm to estimate the camera pose and reconstruct the 3D surface of the anatomy. Given an initial semi-automated registration, the algorithm maps the SLAM-based trajectories to the CT space while employing the reconstructed point cloud to solve for the scale interactively. The system displays the updates in the CT triplane visualization as SLAM reconstructs the scene and recovers pose information. We tested our system by performing an off-site navigation in ten recorded endoscopic video streaming generated from sequences obtained from eight cadaveric subjects, comparing the vision-based navigation to reference optical tracker pose data and obtaining translation and rotation errors of 3.2 mm and 4.9 degrees, respectively. Additionally, we performed three on-site tests of our system on two different cadaver experiments. Our work evaluates a fully integrated system that closes the loop between image-based reconstruction and CT visualization, and discusses the challenges to address to achieve clinical level surgical navigation.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12686831/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145726535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0