{"title":"Contactless and short-range vital signs detection with doppler radar millimetre-wave (76–81 GHz) sensing firmware","authors":"Pi-Yun Chen, Hsu-Yung Lin, Zi-Heng Zhong, Neng-Sheng Pai, Chien-Ming Li, Chia-Hung Lin","doi":"10.1049/htl2.12075","DOIUrl":"10.1049/htl2.12075","url":null,"abstract":"<p>Vital signs such as heart rate (HR) and respiration rate (RR) are essential physiological parameters that are routinely used to monitor human health and bodily functions. They can be continuously monitored through contact or contactless measurements performed in the home or a hospital. In this study, a contactless Doppler radar W-band sensing system was used for short-range, contactless vital sign estimation. Frequency-modulated continuous wave (FMCW) measurements were performed to reduce the influence of a patient's micromotion. Sensing software was developed that can process the received chirps to filter and extract heartbeat and breathing rhythm signals. The proposed contactless sensing system eliminates the need for the contact electrodes, electric patches, photoelectric sensors, and conductive wires used in typical physiological sensing methods. The system operates at 76–81 GHz in FMCW mode and can detect objects on the basis of changes in frequency and phase. The obtained signals are used to precisely monitor a patient's HR and RR with minimal noise interference. In a laboratory setting, the heartbeats and breathing rhythm signals of healthy young participants were measured, and their HR and RR were estimated through frequency- and time-domain analyses. The experimental results confirmed the feasibility of the proposed W-band mm-wave radar for contactless and short-range continuous detection of human vital signs.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"427-436"},"PeriodicalIF":2.8,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12075","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140426410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine learning modelling for predicting the utilization of invasive and non-invasive ventilation throughout the ICU duration","authors":"Emma Schwager, Mohsen Nabian, Xinggang Liu, Ting Feng, Robin French, Pam Amelung, Louis Atallah, Omar Badawi","doi":"10.1049/htl2.12081","DOIUrl":"10.1049/htl2.12081","url":null,"abstract":"<p>The goal of this work is to develop a Machine Learning model to predict the need for both invasive and non-invasive mechanical ventilation in intensive care unit (ICU) patients. Using the Philips eICU Research Institute (ERI) database, 2.6 million ICU patient data from 2010 to 2019 were analyzed. This data was randomly split into training (63%), validation (27%), and test (10%) sets. Additionally, an external test set from a single hospital from the ERI database was employed to assess the model's generalizability. Model performance was determined by comparing the model probability predictions with the actual incidence of ventilation use, either invasive or non-invasive. The model demonstrated a prediction performance with an AUC of 0.921 for overall ventilation, 0.937 for invasive, and 0.827 for non-invasive. Factors such as high Glasgow Coma Scores, younger age, lower BMI, and lower PaCO2 were highlighted as indicators of a lower likelihood for the need for ventilation. The model can serve as a retrospective benchmarking tool for hospitals to assess ICU performance concerning mechanical ventilation necessity. It also enables analysis of ventilation strategy trends and risk-adjusted comparisons, with potential for future testing as a clinical decision tool for optimizing ICU ventilation management.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 4","pages":"252-257"},"PeriodicalIF":2.8,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12081","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140448519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance evaluation in cataract surgery with an ensemble of 2D–3D convolutional neural networks","authors":"Ummey Tanin, Adrienne Duimering, Christine Law, Jessica Ruzicki, Gabriela Luna, Matthew Holden","doi":"10.1049/htl2.12078","DOIUrl":"10.1049/htl2.12078","url":null,"abstract":"<p>An important part of surgical training in ophthalmology is understanding how to proficiently perform cataract surgery. Operating skill in cataract surgery is typically assessed by real-time or video-based expert review using a rating scale. This is time-consuming, subjective and labour-intensive. A typical trainee graduates with over 100 complete surgeries, each of which requires review by the surgical educators. Due to the consistently repetitive nature of this task, it lends itself well to machine learning-based evaluation. Recent studies utilize deep learning models trained on tool motion trajectories obtained using additional equipment or robotic systems. However, the process of tool recognition by extracting frames from the videos to perform phase recognition followed by skill assessment is exhaustive. This project proposes a deep learning model for skill evaluation using raw surgery videos that is cost-effective and end-to-end trainable. An advanced ensemble of convolutional neural network models is leveraged to model technical skills in cataract surgeries and is evaluated using a large dataset comprising almost 200 surgical trials. The highest accuracy of 0.8494 is observed on the phacoemulsification step data. Our model yielded an average accuracy of 0.8200 and an average AUC score of 0.8800 for all four phase datasets of cataract surgery proving its robustness against different data. The proposed ensemble model with 2D and 3D convolutional neural networks demonstrated a promising result without using tool motion trajectories to evaluate surgery expertise.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 2-3","pages":"189-195"},"PeriodicalIF":2.1,"publicationDate":"2024-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12078","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139960625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantification of finger grasps during activities of daily life using convolutional neural networks: A pilot study","authors":"Manuela Paulina Trejo Ramírez, Callum John Thornton, Neil Darren Evans, Michael John Chappell","doi":"10.1049/htl2.12080","DOIUrl":"10.1049/htl2.12080","url":null,"abstract":"<p>Quantifying finger kinematics can improve the authors’ understanding of finger function and facilitate the design of efficient prosthetic devices while also identifying movement disorders and assessing the impact of rehabilitation interventions. Here, the authors present a study that quantifies grasps depicted in taxonomies during selected Activities of Daily Living (ADL). A single participant held a series of standard objects using specific grasps which were used to train Convolutional Neural Networks (CNN) for each of the four fingers individually. The experiment also recorded hand manipulation of objects during ADL. Each set of ADL finger kinematic data was tested using the trained CNN, which identified and quantified the grasps required to accomplish each task. Certain grasps appeared more often depending on the finger studied, meaning that even though there are physiological interdependencies, fingers have a certain degree of autonomy in performing dexterity tasks. The identified and most frequent grasps agreed with the previously reported findings, but also highlighted that an individual might have specific dexterity needs which may vary with profession and age. The proposed method can be used to identify and quantify key grasps for finger/hand prostheses, to provide a more efficient solution that is practical in their day-to-day tasks.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 5","pages":"259-270"},"PeriodicalIF":2.8,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12080","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139835940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facial and mandibular landmark tracking with habitual head posture estimation using linear and fiducial markers","authors":"Farhan Hasin Saad, Taseef Hasan Farook, Saif Ahmed, Yang Zhao, Zhibin Liao, Johan W. Verjans, James Dudley","doi":"10.1049/htl2.12076","DOIUrl":"https://doi.org/10.1049/htl2.12076","url":null,"abstract":"<p>This study compared the accuracy of facial landmark measurements using deep learning-based fiducial marker (FM) and arbitrary width reference (AWR) approaches. It quantitatively analysed mandibular hard and soft tissue lateral excursions and head tilting from consumer camera footage of 37 participants. A custom deep learning system recognised facial landmarks for measuring head tilt and mandibular lateral excursions. Circular fiducial markers (FM) and inter-zygion measurements (AWR) were validated against physical measurements using electrognathography and electronic rulers. Results showed notable differences in lower and mid-face estimations for both FM and AWR compared to physical measurements. The study also demonstrated the comparability of both approaches in assessing lateral movement, though fiducial markers exhibited variability in mid-face and lower face parameter assessments. Regardless of the technique applied, hard tissue movement was typically seen to be 30% less than soft tissue among the participants. Additionally, a significant number of participants consistently displayed a 5 to 10° head tilt.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 1","pages":"21-30"},"PeriodicalIF":2.1,"publicationDate":"2024-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12076","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139744945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inter- and intra-session variability of compression strain gauge for the adductor groin squeeze test on soccer athletes","authors":"Kieran J. McMinn, Shelley N. Diewald, Craig Harrison, John B. Cronin, Dana Ye-Lee, Paris Saint Germain","doi":"10.1049/htl2.12074","DOIUrl":"10.1049/htl2.12074","url":null,"abstract":"<p>The importance of hip adductor strength for injury prevention and performance benefits is well documented. The purpose of this study was to establish the intra- and inter-day variability of peak force (PF) of a groin squeeze protocol using a custom-designed compression strain gauge device. Sixteen semi-professional soccer players completed three trials over three separate testing occasions with at least 24-h rest between each session. The main findings were that the compression strain gauge was a reliable device for measuring PF within and between days. All intraclass correlations were higher than 0.80 and coefficients of variations were below 10% across the different sessions and trials. Due to the information gained through the compression strain gauge, the higher sampling frequency utilized, portability, and the relatively affordable price, this device offers an effective alternative for measuring maximal strength for hip adduction.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 1","pages":"16-20"},"PeriodicalIF":2.1,"publicationDate":"2024-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12074","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139592088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"YOLOv7-RepFPN: Improving real-time performance of laparoscopic tool detection on embedded systems","authors":"Yuzhang Liu, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kensaku Mori","doi":"10.1049/htl2.12072","DOIUrl":"10.1049/htl2.12072","url":null,"abstract":"<p>This study focuses on enhancing the inference speed of laparoscopic tool detection on embedded devices. Laparoscopy, a minimally invasive surgery technique, markedly reduces patient recovery times and postoperative complications. Real-time laparoscopic tool detection helps assisting laparoscopy by providing information for surgical navigation, and its implementation on embedded devices is gaining interest due to the portability, network independence and scalability of the devices. However, embedded devices often face computation resource limitations, potentially hindering inference speed. To mitigate this concern, the work introduces a two-fold modification to the YOLOv7 model: the feature channels and integrate RepBlock is halved, yielding the YOLOv7-RepFPN model. This configuration leads to a significant reduction in computational complexity. Additionally, the focal EIoU (efficient intersection of union) loss function is employed for bounding box regression. Experimental results on an embedded device demonstrate that for frame-by-frame laparoscopic tool detection, the proposed YOLOv7-RepFPN achieved an mAP of 88.2% (with IoU set to 0.5) on a custom dataset based on EndoVis17, and an inference speed of 62.9 FPS. Contrasting with the original YOLOv7, which garnered an 89.3% mAP and 41.8 FPS under identical conditions, the methodology enhances the speed by 21.1 FPS while maintaining detection accuracy. This emphasizes the effectiveness of the work.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 2-3","pages":"157-166"},"PeriodicalIF":2.1,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12072","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139605454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards better laparoscopic video segmentation: A class-wise contrastive learning approach with multi-scale feature extraction","authors":"Luyang Zhang, Yuichiro Hayashi, Masahiro Oda, Kensaku Mori","doi":"10.1049/htl2.12069","DOIUrl":"10.1049/htl2.12069","url":null,"abstract":"<p>The task of segmentation is integral to computer-aided surgery systems. Given the privacy concerns associated with medical data, collecting a large amount of annotated data for training is challenging. Unsupervised learning techniques, such as contrastive learning, have shown powerful capabilities in learning image-level representations from unlabelled data. This study leverages classification labels to enhance the accuracy of the segmentation model trained on limited annotated data. The method uses a multi-scale projection head to extract image features at various scales. The partitioning method for positive sample pairs is then improved to perform contrastive learning on the extracted features at each scale to effectively represent the differences between positive and negative samples in contrastive learning. Furthermore, the model is trained simultaneously with both segmentation labels and classification labels. This enables the model to extract features more effectively from each segmentation target class and further accelerates the convergence speed. The method was validated using the publicly available CholecSeg8k dataset for comprehensive abdominal cavity surgical segmentation. Compared to select existing methods, the proposed approach significantly enhances segmentation performance, even with a small labelled subset (1–10%) of the dataset, showcasing a superior intersection over union (IoU) score.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 2-3","pages":"126-136"},"PeriodicalIF":2.1,"publicationDate":"2024-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12069","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139531730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Revisiting instrument segmentation: Learning from decentralized surgical sequences with various imperfect annotations","authors":"Zhou Zheng, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kensaku Mori","doi":"10.1049/htl2.12068","DOIUrl":"10.1049/htl2.12068","url":null,"abstract":"<p>This paper focuses on a new and challenging problem related to instrument segmentation. This paper aims to learn a generalizable model from distributed datasets with various imperfect annotations. Collecting a large-scale dataset for centralized learning is usually impeded due to data silos and privacy issues. Besides, local clients, such as hospitals or medical institutes, may hold datasets with diverse and imperfect annotations. These datasets can include scarce annotations (many samples are unlabelled), noisy labels prone to errors, and scribble annotations with less precision. Federated learning (FL) has emerged as an attractive paradigm for developing global models with these locally distributed datasets. However, its potential in instrument segmentation has yet to be fully investigated. Moreover, the problem of learning from various imperfect annotations in an FL setup is rarely studied, even though it presents a more practical and beneficial scenario. This work rethinks instrument segmentation in such a setting and propose a practical FL framework for this issue. Notably, this approach surpassed centralized learning under various imperfect annotation settings. This method established a foundational benchmark, and future work can build upon it by considering each client owning various annotations and aligning closer with real-world complexities.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 2-3","pages":"146-156"},"PeriodicalIF":2.1,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12068","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139445412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clinical trainee performance on task-based AR/VR-guided surgical simulation is correlated with their 3D image spatial reasoning scores","authors":"Roy Eagleson, Denis Kikinov, Liam Bilbie, Sandrine de Ribaupierre","doi":"10.1049/htl2.12066","DOIUrl":"10.1049/htl2.12066","url":null,"abstract":"<p>This paper describes a methodology for the assessment of training simulator-based computer-assisted intervention skills on an AR/VR-guided procedure making use of CT axial slice views for a neurosurgical procedure: external ventricular drain (EVD) placement. The task requires that trainees scroll through a stack of axial slices and form a mental representation of the anatomical structures in order to subsequently target the ventricles to insert an EVD. The process of observing the 2D CT image slices in order to build a mental representation of the 3D anatomical structures is the skill being taught, along with the cognitive control of the subsequent targeting, by planned motor actions, of the EVD tip to the ventricular system to drain cerebrospinal fluid (CSF). Convergence is established towards the validity of this assessment methodology by examining two objective measures of spatial reasoning, along with one subjective expert ranking methodology, and comparing these to AR/VR guidance. These measures have two components: the speed and accuracy of the targeting, which are used to derive the performance metric. Results of these correlations are presented for a population of PGY1 residents attending the Canadian Neurosurgical “Rookie Bootcamp” in 2019.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 2-3","pages":"117-125"},"PeriodicalIF":2.1,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12066","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139446033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}