Quantification of finger grasps during activities of daily life using convolutional neural networks: A pilot study
Manuela Paulina Trejo Ramírez, Callum John Thornton, Neil Darren Evans, Michael John Chappell
Healthcare Technology Letters, 11(5), pp. 259-270. DOI: 10.1049/htl2.12080. Published 2024-02-15. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12080

Abstract: Quantifying finger kinematics can improve the understanding of finger function and facilitate the design of efficient prosthetic devices, while also helping to identify movement disorders and to assess the impact of rehabilitation interventions. Here, the authors present a study that quantifies grasps depicted in taxonomies during selected Activities of Daily Living (ADL). A single participant held a series of standard objects using specific grasps, and the recordings were used to train a Convolutional Neural Network (CNN) for each of the four fingers individually. The experiment also recorded hand manipulation of objects during ADL. Each set of ADL finger kinematic data was then tested with the trained CNNs, which identified and quantified the grasps required to accomplish each task. Certain grasps appeared more often depending on the finger studied, indicating that even though there are physiological interdependencies, fingers have a certain degree of autonomy when performing dexterity tasks. The most frequent identified grasps agreed with previously reported findings, but the results also highlight that an individual may have specific dexterity needs which vary with profession and age. The proposed method can be used to identify and quantify key grasps for finger/hand prostheses, providing a more efficient solution that is practical for day-to-day tasks.
Facial and mandibular landmark tracking with habitual head posture estimation using linear and fiducial markers
Farhan Hasin Saad, Taseef Hasan Farook, Saif Ahmed, Yang Zhao, Zhibin Liao, Johan W. Verjans, James Dudley
Healthcare Technology Letters, 11(1), pp. 21-30. DOI: 10.1049/htl2.12076. Published 2024-02-05. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12076

Abstract: This study compared the accuracy of facial landmark measurements using deep learning-based fiducial marker (FM) and arbitrary width reference (AWR) approaches. It quantitatively analysed mandibular hard and soft tissue lateral excursions and head tilting from consumer camera footage of 37 participants. A custom deep learning system recognised facial landmarks for measuring head tilt and mandibular lateral excursions. Circular fiducial markers (FM) and inter-zygion measurements (AWR) were validated against physical measurements using electrognathography and electronic rulers. Results showed notable differences in lower- and mid-face estimations for both FM and AWR compared to physical measurements. The study also demonstrated the comparability of both approaches in assessing lateral movement, though fiducial markers exhibited variability in mid-face and lower-face parameter assessments. Regardless of the technique applied, hard tissue movement was typically seen to be 30% less than soft tissue movement among the participants. Additionally, a significant number of participants consistently displayed a 5 to 10° head tilt.
Inter- and intra-session variability of compression strain gauge for the adductor groin squeeze test on soccer athletes
Kieran J. McMinn, Shelley N. Diewald, Craig Harrison, John B. Cronin, Dana Ye-Lee, Paris Saint Germain
Healthcare Technology Letters, 11(1), pp. 16-20. DOI: 10.1049/htl2.12074. Published 2024-01-28. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12074

Abstract: The importance of hip adductor strength for injury prevention and performance is well documented. The purpose of this study was to establish the intra- and inter-day variability of peak force (PF) in a groin squeeze protocol using a custom-designed compression strain gauge device. Sixteen semi-professional soccer players completed three trials on each of three separate testing occasions, with at least 24 h of rest between sessions. The main finding was that the compression strain gauge is a reliable device for measuring PF within and between days: all intraclass correlations were higher than 0.80 and coefficients of variation were below 10% across the different sessions and trials. Given the information gained from the compression strain gauge, its higher sampling frequency, portability, and relatively affordable price, the device offers an effective alternative for measuring maximal hip adduction strength.
YOLOv7-RepFPN: Improving real-time performance of laparoscopic tool detection on embedded systems
Yuzhang Liu, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kensaku Mori
Healthcare Technology Letters, 11(2-3), pp. 157-166. DOI: 10.1049/htl2.12072. Published 2024-01-23. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12072

Abstract: This study focuses on enhancing the inference speed of laparoscopic tool detection on embedded devices. Laparoscopy, a minimally invasive surgical technique, markedly reduces patient recovery times and postoperative complications. Real-time laparoscopic tool detection assists laparoscopy by providing information for surgical navigation, and its implementation on embedded devices is gaining interest due to their portability, network independence, and scalability. However, embedded devices often face computational resource limitations that can hinder inference speed. To mitigate this concern, the work introduces a two-fold modification to the YOLOv7 model: the number of feature channels is halved and RepBlock is integrated into the feature pyramid network, yielding the YOLOv7-RepFPN model. This configuration leads to a significant reduction in computational complexity. Additionally, the focal EIoU (efficient intersection over union) loss function is employed for bounding-box regression. Experimental results on an embedded device demonstrate that, for frame-by-frame laparoscopic tool detection, the proposed YOLOv7-RepFPN achieved an mAP of 88.2% (at an IoU threshold of 0.5) on a custom dataset based on EndoVis17, with an inference speed of 62.9 FPS. In contrast, the original YOLOv7 achieved 89.3% mAP and 41.8 FPS under identical conditions; the proposed method is thus 21.1 FPS faster while maintaining detection accuracy, emphasizing the effectiveness of the work.
Towards better laparoscopic video segmentation: A class-wise contrastive learning approach with multi-scale feature extraction
Luyang Zhang, Yuichiro Hayashi, Masahiro Oda, Kensaku Mori
Healthcare Technology Letters, 11(2-3), pp. 126-136. DOI: 10.1049/htl2.12069. Published 2024-01-13. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12069

Abstract: The task of segmentation is integral to computer-aided surgery systems. Given the privacy concerns associated with medical data, collecting a large amount of annotated data for training is challenging. Unsupervised learning techniques, such as contrastive learning, have shown powerful capabilities in learning image-level representations from unlabelled data. This study leverages classification labels to enhance the accuracy of a segmentation model trained on limited annotated data. The method uses a multi-scale projection head to extract image features at various scales. The partitioning of positive sample pairs is then improved so that contrastive learning is performed on the extracted features at each scale, effectively representing the differences between positive and negative samples. Furthermore, the model is trained with both segmentation labels and classification labels simultaneously, which enables it to extract features more effectively from each segmentation target class and further accelerates convergence. The method was validated on the publicly available CholecSeg8k dataset for comprehensive abdominal cavity surgical segmentation. Compared to selected existing methods, the proposed approach significantly enhances segmentation performance, even with a small labelled subset (1-10%) of the dataset, showcasing a superior intersection over union (IoU) score.
Revisiting instrument segmentation: Learning from decentralized surgical sequences with various imperfect annotations
Zhou Zheng, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kensaku Mori
Healthcare Technology Letters, 11(2-3), pp. 146-156. DOI: 10.1049/htl2.12068. Published 2024-01-08. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12068

Abstract: This paper focuses on a new and challenging problem in instrument segmentation: learning a generalizable model from distributed datasets with various imperfect annotations. Collecting a large-scale dataset for centralized learning is usually impeded by data silos and privacy issues. Besides, local clients, such as hospitals or medical institutes, may hold datasets with diverse and imperfect annotations. These can include scarce annotations (many samples unlabelled), noisy labels prone to errors, and scribble annotations with less precision. Federated learning (FL) has emerged as an attractive paradigm for developing global models with such locally distributed datasets. However, its potential in instrument segmentation has yet to be fully investigated. Moreover, the problem of learning from various imperfect annotations in an FL setup is rarely studied, even though it presents a more practical and beneficial scenario. This work rethinks instrument segmentation in such a setting and proposes a practical FL framework for the problem. Notably, the approach surpassed centralized learning under various imperfect annotation settings. The method establishes a foundational benchmark; future work can build upon it by considering clients that each hold several kinds of annotations, aligning more closely with real-world complexity.
Clinical trainee performance on task-based AR/VR-guided surgical simulation is correlated with their 3D image spatial reasoning scores
Roy Eagleson, Denis Kikinov, Liam Bilbie, Sandrine de Ribaupierre
Healthcare Technology Letters, 11(2-3), pp. 117-125. DOI: 10.1049/htl2.12066. Published 2024-01-08. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12066

Abstract: This paper describes a methodology for assessing simulator-based computer-assisted intervention skills on an AR/VR-guided neurosurgical procedure that makes use of CT axial slice views: external ventricular drain (EVD) placement. The task requires trainees to scroll through a stack of axial slices and form a mental representation of the anatomical structures in order to subsequently target the ventricles and insert an EVD. The skill being taught is the process of observing the 2D CT image slices to build a mental representation of the 3D anatomical structures, together with the cognitive control of the subsequent targeting, by planned motor actions, of the EVD tip to the ventricular system to drain cerebrospinal fluid (CSF). Convergence towards the validity of this assessment methodology is established by examining two objective measures of spatial reasoning, along with one subjective expert-ranking methodology, and comparing these with performance under AR/VR guidance. These measures have two components, the speed and the accuracy of the targeting, which are used to derive the performance metric. Correlation results are presented for a population of PGY1 residents attending the Canadian Neurosurgical "Rookie Bootcamp" in 2019.
Autism spectrum disorder detection using facial images: A performance comparison of pretrained convolutional neural networks
Israr Ahmad, Javed Rashid, Muhammad Faheem, Arslan Akram, Nafees Ahmad Khan, Riaz ul Amin
Healthcare Technology Letters, 11(4), pp. 227-239. DOI: 10.1049/htl2.12073. Published 2024-01-08. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12073

Abstract: Autism spectrum disorder (ASD) is a complex psychological syndrome characterized by persistent difficulties in social interaction, restricted behaviours, and impaired speech and nonverbal communication. The impact of the disorder and the severity of symptoms vary from person to person. In most cases, symptoms of ASD appear between the ages of 2 and 5 and continue throughout adolescence and into adulthood. While the disorder cannot be cured completely, studies have shown that early detection can assist in maintaining the behavioural and psychological development of children. Experts are currently studying various machine learning methods, particularly convolutional neural networks, to expedite the screening process; convolutional neural networks are considered promising frameworks for the diagnosis of ASD. This study employs different pretrained convolutional neural networks, namely ResNet34, ResNet50, AlexNet, MobileNetV2, VGG16, and VGG19, to diagnose ASD and compares their performance. Transfer learning was applied to every model in the study to achieve higher accuracy than the base models. The proposed ResNet50 model achieved the highest accuracy, 92%, among the transfer-learning models, and the proposed method also outperformed state-of-the-art models in terms of accuracy and computational cost.
Breamy: An augmented reality mHealth prototype for surgical decision-making in breast cancer
Niki Najafi, Miranda Addie, Sarkis Meterissian, Marta Kersten-Oertel
Healthcare Technology Letters, 11(2-3), pp. 137-145. DOI: 10.1049/htl2.12071. Published 2023-12-27. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12071

Abstract: Breast cancer is one of the most prevalent forms of cancer, affecting approximately one in eight women during their lifetime. Deciding on breast cancer treatment, including the choice between surgical options, frequently demands prompt decision-making within an 8-week timeframe, yet many women lack the knowledge and preparation needed to make informed decisions. Inadequate decision-making processes can result in anxiety and unsatisfactory outcomes, leading to decisional regret and revision surgeries. Shared decision-making and personalized decision aids have shown positive effects on patient satisfaction and treatment outcomes. Here, Breamy, a prototype mobile health application that uses augmented reality technology to help breast cancer patients make more informed decisions, is introduced. Breamy provides 3D visualizations of different surgical procedures, aiming to improve confidence in surgical decision-making, reduce decisional regret, and enhance patient well-being after surgery. To assess the perceived usefulness of Breamy, data were collected from 166 participants through an online survey. The results suggest that Breamy has the potential to reduce patients' anxiety levels and assist them in decision-making.
Parameter estimation of a model describing the human fingers
Panagiotis Tsakonas, Neil Evans, Joseph Hardwicke, Michael J. Chappell
Healthcare Technology Letters, 11(1), pp. 1-15. DOI: 10.1049/htl2.12070. Published 2023-12-26. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12070

Abstract: The goal of this paper is twofold: firstly, to provide a novel mathematical model that describes the kinematic chain of motion of the human fingers, based on Lagrangian mechanics with four degrees of freedom; and secondly, to estimate the model parameters using data from able-bodied individuals. A variety of mathematical models describing the motion of the human finger have been developed in the literature, but they offer little to no information on the underlying mechanisms or the corresponding equations of motion, nor on how they scale with different anthropometries. The data used here were generated by an experimental procedure that considers the free response motion of each finger segment, with data captured via a motion capture system. The angular data collected were then filtered and fitted to a linear second-order differential approximation of the equations of motion. The results of the study show that the free response motion of the segments is underdamped across flexion/extension and ad/abduction.