Title: ELW-CNN: An extremely lightweight convolutional neural network for enhancing interoperability in colon and lung cancer identification using explainable AI
Authors: Shaiful Ajam Opee, Arifa Akter Eva, Ahmed Taj Noor, Sayem Mustak Hasan, M. F. Mridha
Healthcare Technology Letters, vol. 12, no. 1, 2025; DOI: 10.1049/htl2.12122; published 22 January 2025; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11751720/pdf/

Abstract: Cancer is a condition in which cells in the body grow uncontrollably, often forming tumours and potentially spreading to other areas of the body. Many people die of cancer every year because it is not identified at an early stage, so accurate and early identification is essential for effective treatment and for saving lives. Various machine and deep learning models are effective for cancer identification; however, their effectiveness is limited by small dataset sizes, poor data quality, inter-class variation between lung squamous cell carcinoma and adenocarcinoma, difficulties with mobile-device deployment, and the lack of image-level and individual-level accuracy tests. To overcome these difficulties, this study proposed an extremely lightweight convolutional neural network that achieved 98.16% accuracy on a large combined lung and colon dataset, and individually achieved 99.02% for lung cancer and 99.40% for colon cancer. The proposed lightweight model uses only about 70 thousand parameters, which makes it highly suitable for real-time solutions. Explainability methods such as Grad-CAM and symmetric explanation highlight the specific regions of the input that affect the model's decision, helping to identify potential challenges. The proposed models will aid medical professionals in developing an automated and accurate approach for detecting various types of colon and lung cancer.

Title: Guest editorial: Papers from the 18th joint workshop on Augmented Environments for Computer Assisted Interventions (AE-CAI) at MICCAI 2024: Guest editors' foreword
Authors: Cristian A. Linte, Ziv Yaniv, Elvis Chen, Simon Drouin, Marta Kersten-Oertel, Jonathan McLeod, Duygu Sarikaya, Jiangliu Wang
Healthcare Technology Letters, vol. 12, no. 1, 2025; DOI: 10.1049/htl2.70000; published 20 January 2025; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11744466/pdf/

Abstract: Welcome to this special issue of Wiley's IET Healthcare Technology Letters (HTL) dedicated to the 2024 edition of the Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer Assisted and Robotic Endoscopy (CARE), and Context-Aware Operating Theatres (OR 2.0) joint workshop. We are pleased to present the proceedings of this exciting scientific gathering held in conjunction with the Medical Image Computing and Computer-Assisted Interventions (MICCAI) conference on 6 October 2024 in Marrakech, Morocco.

Computer-assisted interventions (CAI) is a field of research and practice in which medical interventions are supported by computer-based tools and methodologies. CAI systems enable more precise, safer, and less invasive interventional treatments by providing enhanced planning, real-time visualization, instrument guidance and navigation, as well as situation awareness and cognition. These research domains have been motivated by the development of medical imaging and its evolution from a primarily diagnostic modality towards a therapeutic and interventional aid, driven by the need to streamline the diagnostic and therapeutic processes via minimally invasive visualization and therapy. To promote this field of research, our workshop seeks to showcase papers that disseminate novel theoretical algorithms, technical implementations, and the development and validation of integrated hardware and software systems in the context of their dedicated clinical applications. The workshop attracts researchers in computer science, biomedical engineering, computer vision, robotics, and medical imaging.

The 2024 edition of AE-CAI | CARE | OR 2.0 was a joint event between the series of MICCAI-affiliated AE-CAI workshops founded in 2006 and now in its 18th edition, the CARE workshop series, now in its 11th edition, and the OR 2.0 workshop, now in its 6th edition. This year's edition featured 24 accepted submissions and reached more than 70 registrants, not including the members of the organizing and program committees, making AE-CAI | CARE | OR 2.0 one of the best-received and best-attended workshops with a more than decade-long tradition at MICCAI.

On that note of "more than a decade-long tradition at MICCAI", it turns out that AE-CAI, albeit with several variations in name, has been running for a while now in some shape or form and is, in fact, MICCAI's longest-standing workshop! Let us start with a historical note for those less familiar with our journey. It all started in 2006 in Copenhagen under the name AMI-ARCS, which pointed to something along the lines of augmented medical imaging and augmented reality for computer-assisted surgery, and it ran under that name for three more years, in Brisbane (2007), New York (2008), and London (2009). The 2010 edition (Beijing) was co-hosted with the MIAR (medical imaging and augmented reality) conference. The workshop was then rebranded […]

Title: iSurgARy: A mobile augmented reality solution for ventriculostomy in resource-limited settings
Authors: Zahra Asadi, Joshua Pardillo Castillo, Mehrdad Asadi, David S. Sinclair, Marta Kersten-Oertel
Healthcare Technology Letters, vol. 12, no. 1, 2025; DOI: 10.1049/htl2.12118; published 15 January 2025; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11733309/pdf/

Abstract: Global disparities in neurosurgical care necessitate innovations that address affordability and accuracy, particularly for critical procedures such as ventriculostomy. This intervention, vital for managing life-threatening increases in intracranial pressure, is associated with catheter misplacement rates exceeding 30% when a freehand technique is used. Such misplacements have severe consequences, including haemorrhage, infection, prolonged hospital stays, and even morbidity and mortality. To address this issue, a novel stand-alone mobile-based augmented reality system (iSurgARy) is presented, aimed at significantly improving ventriculostomy accuracy, particularly in resource-limited settings such as those in low- and middle-income countries. iSurgARy uses landmark-based registration, taking advantage of light detection and ranging (LiDAR), to allow for accurate surgical guidance. To evaluate iSurgARy, a two-phase user study was conducted. Initially, usability and learnability were assessed with novice participants using the system usability scale (SUS), and their feedback was incorporated to refine the application. In the second phase, human-computer interaction and clinical domain experts evaluated the application, measuring root mean square error, SUS, and NASA task load index metrics to assess accuracy, usability, and cognitive workload, respectively.
{"title":"Knowledge distillation approach for skin cancer classification on lightweight deep learning model","authors":"Suman Saha, Md. Moniruzzaman Hemal, Md. Zunead Abedin Eidmum, Muhammad Firoz Mridha","doi":"10.1049/htl2.12120","DOIUrl":"10.1049/htl2.12120","url":null,"abstract":"<p>Over the past decade, there has been a global increase in the incidence of skin cancers. Skin cancer has serious consequences if left untreated, potentially leading to more advanced cancer stages. In recent years, deep learning based convolutional neural network have emerged as powerful tools for skin cancer detection. Generally, deep learning approaches are computationally expensive and require large storage space. Therefore, deploying such a large complex model on resource-constrained devices is challenging. An ultra-light and accurate deep learning model is highly desirable for better inference time and memory in low-power-consuming devices. Knowledge distillation is an approach for transferring knowledge from a large network to a small network. This small network is easily compatible with resource-constrained embedded devices while maintaining accuracy. The main aim of this study is to develop a deep learning-based lightweight network based on knowledge distillation that identifies the presence of skin cancer. Here, different training strategies are implemented for the modified benchmark (Phase 1) and custom-made model (Phase 2) and demonstrated various distillation configurations on two datasets: HAM10000 and ISIC2019. In Phase 1, the student model using knowledge distillation achieved accuracies ranging from 88.69% to 93.24% for HAM10000 and from 82.14% to 84.13% on ISIC2019. In Phase 2, the accuracies ranged from 88.63% to 88.89% on HAM10000 and from 81.39% to 83.42% on ISIC2019. These results highlight the effectiveness of knowledge distillation in improving the classification performance across diverse datasets and enabling the student model to approach the performance of the teacher model. In addition, the distilled student model can be easily deployed on resource-constrained devices for automated skin cancer detection due to its lower computational complexity.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"12 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11733311/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Seamless augmented reality integration in arthroscopy: a pipeline for articular reconstruction and guidance
Authors: Hongchao Shu, Mingxu Liu, Lalithkumar Seenivasan, Suxi Gu, Ping-Cheng Ku, Jonathan Knopf, Russell Taylor, Mathias Unberath
Healthcare Technology Letters, vol. 12, no. 1, 2025; DOI: 10.1049/htl2.12119; published 10 January 2025; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730702/pdf/

Abstract: Arthroscopy is a minimally invasive surgical procedure used to diagnose and treat joint problems. The clinical workflow typically involves inserting an arthroscope into the joint through a small incision, during which surgeons navigate and operate largely by relying on their visual assessment through the arthroscope. However, the arthroscope's restricted field of view and lack of depth perception pose challenges in navigating complex articular structures and achieving surgical precision. Aiming to enhance intraoperative awareness, a robust pipeline that incorporates simultaneous localization and mapping, depth estimation, and 3D Gaussian splatting (3D GS) is presented to realistically reconstruct intra-articular structures solely from monocular arthroscope video. Extending the 3D reconstruction to augmented reality (AR) applications, the solution offers AR assistance for articular notch measurement and annotation anchoring in a human-in-the-loop manner. Compared with traditional structure-from-motion and neural radiance field-based methods, the pipeline achieves dense 3D reconstruction and competitive rendering fidelity with an explicit 3D representation in 7 min on average. When evaluated on four phantom datasets, the method achieves, on average, a reconstruction error of RMSE = 2.21 mm, a peak signal-to-noise ratio of PSNR = 32.86, and a structural similarity index measure of SSIM = 0.89. Because the pipeline enables AR reconstruction and guidance directly from monocular arthroscopy without any additional data and/or hardware, the solution may hold potential for enhancing intraoperative awareness and facilitating surgical precision in arthroscopy. The AR measurement tool achieves accuracy within 1.59 ± 1.81 mm […]

Title: Exploring emotional patterns in social media through NLP models to unravel mental health insights
Authors: Nisha P. Shetty, Yashraj Singh, Veeraj Hegde, D. Cenitta, Dhruthi K
Healthcare Technology Letters, vol. 12, no. 1, 2025; DOI: 10.1049/htl2.12096; published 9 January 2025; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730989/pdf/

Abstract: This study aimed to develop an advanced ensemble approach for the automated classification of mental health disorders in social media posts. The research question was: can an ensemble of fine-tuned transformer models (XLNet, RoBERTa, and ELECTRA) with Bayesian hyperparameter optimization improve the accuracy of mental health disorder classification in social media text? The three transformer models were fine-tuned on a dataset of social media posts labelled with 15 distinct mental health disorders. Bayesian optimization was employed for hyperparameter tuning, optimizing the learning rate, number of epochs, gradient accumulation steps, and weight decay. A voting ensemble was then implemented to combine the predictions of the individual models. The proposed voting ensemble achieved the highest accuracy of 0.780, outperforming the individual models: XLNet (0.767), RoBERTa (0.775), and ELECTRA (0.755). The proposed ensemble approach, integrating XLNet, RoBERTa, and ELECTRA with Bayesian hyperparameter optimization, demonstrated improved accuracy in classifying mental health disorders from social media posts. This method shows promise for enhancing digital mental health research and potentially aiding early detection and intervention strategies. Future work should focus on expanding the dataset, exploring additional ensemble techniques, and investigating the model's performance across different social media platforms and languages.

Title: Augmenting efficient real-time surgical instrument segmentation in video with point tracking and Segment Anything
Authors: Zijian Wu, Adam Schmidt, Peter Kazanzides, Septimiu E. Salcudean
Healthcare Technology Letters, vol. 12, no. 1, 2025; DOI: 10.1049/htl2.12111; published 30 December 2024; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730982/pdf/

Abstract: The Segment Anything model (SAM) is a powerful vision foundation model that is revolutionizing the traditional paradigm of segmentation. Despite this, a reliance on prompting each frame and a large computational cost limit its usage in robotically assisted surgery. Applications such as augmented reality guidance require little user intervention along with efficient inference to be clinically usable. This study addresses these limitations by adopting lightweight SAM variants to meet the efficiency requirement and employing fine-tuning techniques to enhance their generalization in surgical scenes. Recent advancements in tracking any point have shown promising results in both accuracy and efficiency, particularly when points are occluded or leave the field of view. Inspired by this progress, a novel framework is presented that combines an online point tracker with a lightweight SAM model fine-tuned for surgical instrument segmentation. Sparse points within the region of interest are tracked and used to prompt SAM throughout the video sequence, providing temporal consistency. The quantitative results surpass the state-of-the-art semi-supervised video object segmentation method XMem on the EndoVis 2015 dataset, with 84.8 IoU and 91.0 Dice. The method achieves promising performance, comparable to XMem and transformer-based fully supervised segmentation methods, on the ex vivo UCL dVRK and in vivo CholecSeg8k datasets. In addition, the proposed method shows promising zero-shot generalization ability on the label-free STIR dataset. In terms of efficiency, the method was tested on a single GeForce RTX 4060 and RTX 4090 GPU, achieving inference speeds of over 25 and 90 FPS, respectively. Code is available at: https://github.com/zijianwu1231/SIS-PT-SAM.

Title: Augmented reality navigation in orthognathic surgery: Comparative analysis and a paradigm shift
Authors: Marek Żelechowski, Jokin Zubizarreta-Oteiza, Murali Karnam, Balázs Faludi, Norbert Zentai, Nicolas Gerig, Georg Rauter, Florian M. Thieringer, Philippe C. Cattin
Healthcare Technology Letters, vol. 12, no. 1, 2025; DOI: 10.1049/htl2.12109; published 25 December 2024; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730987/pdf/

Abstract: The emergence of augmented reality (AR) in surgical procedures could significantly enhance accuracy and outcomes, particularly in the complex field of orthognathic surgery. This study compares the effectiveness and accuracy of traditional drilling guides with two AR-based navigation techniques for a drilling task: one utilizing ArUco markers and the other employing small-workspace infrared tracking cameras. Additionally, an alternative AR visualization paradigm for surgical navigation is proposed that eliminates the potential inaccuracies of image detection using headset cameras. Through a series of controlled experiments designed to assess the accuracy of hole placement in surgical scenarios, the performance of each method was evaluated both quantitatively and qualitatively. The findings reveal that the small-workspace infrared tracking camera system is on par with the accuracy of conventional drilling guides, hinting at a promising future in which such guides could become obsolete. This technology demonstrates a substantial advantage by circumventing the common issues encountered with traditional tracking systems and surpassing the accuracy of ArUco marker-based navigation. These results underline the potential of this system for enabling more minimally invasive interventions, a crucial step towards enhancing surgical accuracy and, ultimately, patient outcomes. The study makes three relevant contributions. First, a new paradigm for AR visualization in the operating room is proposed, relying only on exact tracking information to navigate the surgeon. Second, the comparative analysis marks a critical step forward in the evolution of surgical navigation, paving the way for integrating more sophisticated AR solutions in orthognathic surgery and beyond. Finally, the system is integrated with a robotic arm, and the inaccuracies present in a typical human-controlled system are evaluated.

Title: Augmented reality for rhinoplasty: 3D scanning and projected AR for intraoperative planning validation
Authors: Martina Autelitano, Nadia Cattari, Marina Carbone, Fabrizio Cutolo, Nicola Montemurro, Emanuele Cigna, Vincenzo Ferrari
Healthcare Technology Letters, vol. 12, no. 1, 2025; DOI: 10.1049/htl2.12116; published 17 December 2024; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730711/pdf/

Abstract: Rhinoplasty is one of the most popular major surgical procedures. It is generally performed by remodelling the internal bones and cartilage through a closed approach that limits damage to the soft tissue, whose final shape is determined by how it settles over the remodelled internal rigid structures. Optimal planning is achievable thanks to advanced 3D image acquisition and to virtual simulation of the intervention via dedicated software. Nevertheless, the final result also depends on factors that cannot be fully predicted regarding how the soft tissue settles on the rigid structures, so a final objective check would be useful for making adjustments before concluding the intervention. The main idea of the present work is to use a 3D scanner to acquire the final shape of the nose directly in the operating room and to show the surgeon the differences with respect to the plan in an intuitive way, using augmented reality (AR) to project false colours directly onto the patient's face. This work motivates the selection of the devices integrated in the system from both a technical and an ergonomic point of view. The global error, evaluated on an anthropomorphic phantom, is lower than ±1.2 mm with a 95% confidence interval, while the mean error in detecting depth-thickness variations is 0.182 mm.
{"title":"Feasibility of video-based skill assessment for percutaneous nephrostomy training in Senegal","authors":"Rebecca Hisey, Fatou Bintou Ndiaye, Kyle Sunderland, Idrissa Seck, Moustapha Mbaye, Mohammed Keita, Mamadou Diahame, Ron Kikinis, Babacar Diao, Gabor Fichtinger, Mamadou Camara","doi":"10.1049/htl2.12107","DOIUrl":"10.1049/htl2.12107","url":null,"abstract":"<p>Percutaneous nephrostomy can be an effective means of preventing irreparable renal damage from obstructive renal disease thereby providing patients with more time to access treatment to remove the source of the blockage. In sub-Saharan Africa, where there is limited access to treatments such as dialysis and transplantation, a nephrostomy can be life-saving. Training this procedure in simulation can allow trainees to develop their technical skills without risking patient safety, but still requires an ex-pert observer to provide performative feedback. In this study, the feasibility of using video as an accessible method to assess skill in simulated percutaneous nephrostomy is evaluated. Six novice urology residents and six expert urologists from Ouakam Military Hospital in Dakar, Senegal performed 4 nephrostomies each using the setup. Motion-based metrics were computed for each trial from the predicted bounding boxes of a trained object detection network, and these metrics were compared between novices and experts. The authors were able to measure significant differences in both ultrasound and needle handling between novice and expert participants. Additionally, performance changes could be measured within each group over multiple trials. Conclusions: Video-based skill assessment is a feasible and accessible option for providing trainees with quantitative performance feedback in sub-Saharan Africa.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 6","pages":"384-391"},"PeriodicalIF":2.8,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665799/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142886057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}