Healthcare Technology Letters — Latest Articles

Knowledge distillation approach for skin cancer classification on lightweight deep learning model
IF 2.8
Healthcare Technology Letters Pub Date : 2025-01-15 DOI: 10.1049/htl2.12120
Suman Saha, Md. Moniruzzaman Hemal, Md. Zunead Abedin Eidmum, Muhammad Firoz Mridha
Abstract: Over the past decade, there has been a global increase in the incidence of skin cancer. If left untreated, skin cancer can progress to more advanced stages with serious consequences. In recent years, deep learning-based convolutional neural networks have emerged as powerful tools for skin cancer detection. However, deep learning approaches are generally computationally expensive and require large storage space, so deploying such large, complex models on resource-constrained devices is challenging. An ultra-light yet accurate deep learning model is highly desirable for better inference time and memory use on low-power devices. Knowledge distillation is an approach for transferring knowledge from a large network to a small network; the small network remains compatible with resource-constrained embedded devices while maintaining accuracy. The main aim of this study is to develop a lightweight deep learning network, based on knowledge distillation, that identifies the presence of skin cancer. Different training strategies are implemented for a modified benchmark model (Phase 1) and a custom-made model (Phase 2), and various distillation configurations are demonstrated on two datasets: HAM10000 and ISIC2019. In Phase 1, the student model trained with knowledge distillation achieved accuracies ranging from 88.69% to 93.24% on HAM10000 and from 82.14% to 84.13% on ISIC2019. In Phase 2, accuracies ranged from 88.63% to 88.89% on HAM10000 and from 81.39% to 83.42% on ISIC2019. These results highlight the effectiveness of knowledge distillation in improving classification performance across diverse datasets, enabling the student model to approach the performance of the teacher model. In addition, the distilled student model can be easily deployed on resource-constrained devices for automated skin cancer detection due to its lower computational complexity. Healthcare Technology Letters, 12(1). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11733311/pdf/
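The teacher-to-student transfer described above can be illustrated with a minimal Hinton-style distillation loss. The temperature, weighting, and logits below are illustrative assumptions for the sketch, not values taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T produces softer class probabilities.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.7):
    """Weighted sum of a soft KL term (teacher -> student, at temperature T)
    and a hard cross-entropy term on the ground-truth label."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))      # KL(teacher || student)
    ce = -np.log(softmax(student_logits)[true_label])   # hard-label loss
    return alpha * (T ** 2) * kl + (1 - alpha) * ce
```

During training, this loss replaces plain cross-entropy; the T² factor keeps the soft-target gradient magnitude comparable as the temperature grows.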
Citations: 0
Seamless augmented reality integration in arthroscopy: a pipeline for articular reconstruction and guidance
IF 2.8
Healthcare Technology Letters Pub Date : 2025-01-10 DOI: 10.1049/htl2.12119
Hongchao Shu, Mingxu Liu, Lalithkumar Seenivasan, Suxi Gu, Ping-Cheng Ku, Jonathan Knopf, Russell Taylor, Mathias Unberath
Abstract: Arthroscopy is a minimally invasive surgical procedure used to diagnose and treat joint problems. The clinical workflow typically involves inserting an arthroscope into the joint through a small incision, during which surgeons navigate and operate largely by visual assessment through the arthroscope. However, the arthroscope's restricted field of view and lack of depth perception pose challenges in navigating complex articular structures and achieving surgical precision. Aiming to enhance intraoperative awareness, a robust pipeline is presented that incorporates simultaneous localization and mapping, depth estimation, and 3D Gaussian splatting (3D GS) to realistically reconstruct intra-articular structures solely from monocular arthroscope video. Extending the 3D reconstruction to augmented reality (AR) applications, the solution offers AR assistance for articular notch measurement and annotation anchoring in a human-in-the-loop manner. Compared to traditional structure-from-motion and neural radiance field-based methods, the pipeline achieves dense 3D reconstruction and competitive rendering fidelity with explicit 3D representation in 7 min on average. When evaluated on four phantom datasets, the method achieves a reconstruction root-mean-square error (RMSE) of 2.21 mm, a peak signal-to-noise ratio (PSNR) of 32.86, and a structural similarity index measure (SSIM) of 0.89 on average. The AR measurement tool achieves accuracy within 1.59 ± 1.81 mm. Because the pipeline enables AR reconstruction and guidance directly from monocular arthroscopy without any additional data or hardware, the solution may hold potential for enhancing intraoperative awareness and facilitating surgical precision in arthroscopy. Healthcare Technology Letters, 12(1). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730702/pdf/
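The rendering-fidelity figure quoted above can be reproduced from pixel data with the standard PSNR definition. This sketch assumes images normalized to [0, 1]; the paper's exact evaluation code is not given in the abstract.

```python
import numpy as np

def psnr(reference, rendered, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    rendered view; higher is better. Assumes both images share the same
    value range, with peak intensity max_val."""
    diff = np.asarray(reference, float) - np.asarray(rendered, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A PSNR around 32–33 dB, as reported for the pipeline, corresponds to a mean squared pixel error of roughly 5e-4 on a [0, 1] scale.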
Citations: 0
Exploring emotional patterns in social media through NLP models to unravel mental health insights
IF 2.8
Healthcare Technology Letters Pub Date : 2025-01-09 DOI: 10.1049/htl2.12096
Nisha P. Shetty, Yashraj Singh, Veeraj Hegde, D. Cenitta, Dhruthi K
Abstract: This study aimed to develop an advanced ensemble approach for the automated classification of mental health disorders in social media posts. The research question was: can an ensemble of fine-tuned transformer models (XLNet, RoBERTa, and ELECTRA) with Bayesian hyperparameter optimization improve the accuracy of mental health disorder classification in social media text? Three transformer models (XLNet, RoBERTa, and ELECTRA) were fine-tuned on a dataset of social media posts labelled with 15 distinct mental health disorders. Bayesian optimization was employed for hyperparameter tuning, optimizing the learning rate, number of epochs, gradient accumulation steps, and weight decay. A voting ensemble was then implemented to combine the predictions of the individual models. The proposed voting ensemble achieved the highest accuracy of 0.780, outperforming the individual models: XLNet (0.767), RoBERTa (0.775), and ELECTRA (0.755). The ensemble approach, integrating XLNet, RoBERTa, and ELECTRA with Bayesian hyperparameter optimization, demonstrated improved accuracy in classifying mental health disorders from social media posts. This method shows promise for enhancing digital mental health research and potentially aiding early detection and intervention strategies. Future work should focus on expanding the dataset, exploring additional ensemble techniques, and investigating the model's performance across different social media platforms and languages. Healthcare Technology Letters, 12(1). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730989/pdf/
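The voting step described above can be sketched as plain hard voting over per-model label predictions. The label names below are illustrative placeholders, not categories from the paper's dataset.

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: `predictions` is a list of per-model prediction
    lists, one label per sample. For each sample, the most frequent label
    across models wins (ties resolved by first occurrence)."""
    n_samples = len(predictions[0])
    voted = []
    for i in range(n_samples):
        votes = [model_preds[i] for model_preds in predictions]
        voted.append(Counter(votes).most_common(1)[0][0])
    return voted
```

Soft voting (averaging class probabilities before the argmax) is a common alternative; the abstract does not specify which variant the authors used.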
Citations: 0
Augmenting efficient real-time surgical instrument segmentation in video with point tracking and Segment Anything
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-30 DOI: 10.1049/htl2.12111
Zijian Wu, Adam Schmidt, Peter Kazanzides, Septimiu E. Salcudean
Abstract: The Segment Anything model (SAM) is a powerful vision foundation model that is revolutionizing the traditional paradigm of segmentation. Despite this, reliance on prompting each frame and a large computational cost limit its usage in robotically assisted surgery. Applications such as augmented reality guidance require little user intervention along with efficient inference to be clinically usable. This study addresses these limitations by adopting lightweight SAM variants to meet the efficiency requirement and employing fine-tuning techniques to enhance their generalization in surgical scenes. Recent advancements in tracking any point have shown promising results in both accuracy and efficiency, particularly when points are occluded or leave the field of view. Inspired by this progress, a novel framework is presented that combines an online point tracker with a lightweight SAM model fine-tuned for surgical instrument segmentation. Sparse points within the region of interest are tracked and used to prompt SAM throughout the video sequence, providing temporal consistency. The quantitative results surpass the state-of-the-art semi-supervised video object segmentation method XMem on the EndoVis 2015 dataset, with 84.8 IoU and 91.0 Dice. The method achieves promising performance comparable to XMem and transformer-based fully supervised segmentation methods on the ex vivo UCL dVRK and in vivo CholecSeg8k datasets. In addition, the proposed method shows promising zero-shot generalization on the label-free STIR dataset. In terms of efficiency, the method was tested on single GeForce RTX 4060 and RTX 4090 GPUs, achieving inference speeds of over 25 and 90 FPS, respectively. Code is available at: https://github.com/zijianwu1231/SIS-PT-SAM. Healthcare Technology Letters, 12(1). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730982/pdf/
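The IoU and Dice scores quoted above are the standard overlap metrics for binary segmentation masks; a minimal sketch of how they are computed:

```python
import numpy as np

def iou_and_dice(pred_mask, gt_mask):
    """Intersection-over-Union and Dice coefficient for binary masks.
    Both are 1.0 for a perfect match; Dice weights the overlap more
    generously than IoU (Dice = 2*IoU / (1 + IoU))."""
    pred = np.asarray(pred_mask, bool)
    gt = np.asarray(gt_mask, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0    # empty masks count as a match
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)
```

Benchmark scores such as the 84.8 IoU / 91.0 Dice above are typically these per-frame values averaged over a dataset and scaled to percentages.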
Citations: 0
Augmented reality navigation in orthognathic surgery: Comparative analysis and a paradigm shift
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-25 DOI: 10.1049/htl2.12109
Marek Żelechowski, Jokin Zubizarreta-Oteiza, Murali Karnam, Balázs Faludi, Norbert Zentai, Nicolas Gerig, Georg Rauter, Florian M. Thieringer, Philippe C. Cattin
Abstract: The emergence of augmented reality (AR) in surgical procedures could significantly enhance accuracy and outcomes, particularly in the complex field of orthognathic surgery. This study compares the effectiveness and accuracy of traditional drilling guides with two AR-based navigation techniques for a drilling task: one utilizing ArUco markers and the other employing small-workspace infrared tracking cameras. Additionally, an alternative AR visualization paradigm for surgical navigation is proposed that eliminates the potential inaccuracies of image detection using headset cameras. Through a series of controlled experiments designed to assess the accuracy of hole placement in surgical scenarios, the performance of each method was evaluated both quantitatively and qualitatively. The findings reveal that the small-workspace infrared tracking camera system is on par with the accuracy of conventional drilling guides, hinting at a promising future in which such guides could become obsolete. This technology circumvents the common issues encountered with traditional tracking systems and surpasses the accuracy of ArUco marker-based navigation. These results underline the system's potential for enabling more minimally invasive interventions, a crucial step towards enhancing surgical accuracy and, ultimately, patient outcomes. The study makes three contributions: first, a new paradigm for AR visualization in the operating room that relies only on exact tracking information to navigate the surgeon; second, a comparative analysis that marks a critical step forward in the evolution of surgical navigation, paving the way for integrating more sophisticated AR solutions in orthognathic surgery and beyond; finally, integration of the system with a robotic arm and evaluation of the inaccuracies present in a typical human-controlled system. Healthcare Technology Letters, 12(1). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730987/pdf/
Citations: 0
Augmented reality for rhinoplasty: 3D scanning and projected AR for intraoperative planning validation
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-17 DOI: 10.1049/htl2.12116
Martina Autelitano, Nadia Cattari, Marina Carbone, Fabrizio Cutolo, Nicola Montemurro, Emanuele Cigna, Vincenzo Ferrari
Abstract: Rhinoplasty is one of the most popular major surgical procedures. It is generally performed by remodelling the internal bone and cartilage through a closed approach that limits damage to the soft tissue, whose final shape is determined by how it settles over the remodelled internal rigid structures. Optimal planning is achievable thanks to advanced 3D image acquisition and virtual simulation of the intervention in dedicated software. Nevertheless, the final result also depends on factors that cannot be fully predicted regarding how the soft tissues settle on the rigid structures, so a final objective check would be useful to allow adjustments before concluding the intervention. The main idea of this work is to use a 3D scanner to acquire the final shape of the nose directly in the operating room and to show the surgeon the differences with respect to the plan in an intuitive way, using augmented reality (AR) to project false colours directly onto the patient's face. This work motivates the selection of the devices integrated in the system from both a technical and an ergonomic point of view. The system's global error, evaluated on an anthropomorphic phantom, is lower than ±1.2 mm with a 95% confidence interval, while the mean error in detecting depth thickness variations is 0.182 mm. Healthcare Technology Letters, 12(1). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730711/pdf/
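The false-colour comparison described above amounts to computing, for each point on the scanned surface, its distance to the planned surface, then mapping those distances to colours. The brute-force nearest-neighbour sketch below is an assumption about the computation; the paper's actual surface-comparison algorithm is not specified in the abstract.

```python
import numpy as np

def nearest_surface_distances(scanned_pts, planned_pts):
    """For each scanned 3D point, the distance to the nearest planned-surface
    point; per-point distances like these can drive a false-colour overlay.
    Brute-force O(N*M) search -- fine for a sketch, a KD-tree would be used
    in practice for dense scans."""
    s = np.asarray(scanned_pts, float)[:, None, :]   # (N, 1, 3)
    p = np.asarray(planned_pts, float)[None, :, :]   # (1, M, 3)
    return np.linalg.norm(s - p, axis=2).min(axis=1) # (N,) distances
```

Thresholding these distances against a tolerance (e.g. the ±1.2 mm global error above) would flag regions where the achieved shape deviates from the plan.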
Citations: 0
Feasibility of video-based skill assessment for percutaneous nephrostomy training in Senegal
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-14 DOI: 10.1049/htl2.12107
Rebecca Hisey, Fatou Bintou Ndiaye, Kyle Sunderland, Idrissa Seck, Moustapha Mbaye, Mohammed Keita, Mamadou Diahame, Ron Kikinis, Babacar Diao, Gabor Fichtinger, Mamadou Camara
Abstract: Percutaneous nephrostomy can be an effective means of preventing irreparable renal damage from obstructive renal disease, giving patients more time to access treatment to remove the source of the blockage. In sub-Saharan Africa, where access to treatments such as dialysis and transplantation is limited, a nephrostomy can be life-saving. Training for this procedure in simulation allows trainees to develop their technical skills without risking patient safety, but it still requires an expert observer to provide performative feedback. In this study, the feasibility of using video as an accessible method to assess skill in simulated percutaneous nephrostomy is evaluated. Six novice urology residents and six expert urologists from Ouakam Military Hospital in Dakar, Senegal performed four nephrostomies each using the setup. Motion-based metrics were computed for each trial from the predicted bounding boxes of a trained object detection network, and these metrics were compared between novices and experts. The authors were able to measure significant differences in both ultrasound and needle handling between novice and expert participants. Additionally, performance changes could be measured within each group over multiple trials. Conclusions: video-based skill assessment is a feasible and accessible option for providing trainees with quantitative performance feedback in sub-Saharan Africa. Healthcare Technology Letters, 11(6), 384-391. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665799/pdf/
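The abstract mentions motion-based metrics computed from predicted bounding boxes without naming them. Total path length of the tracked tool centre is one plausible example of such a metric (an assumption for illustration, not the paper's definition):

```python
import numpy as np

def path_length(bounding_boxes):
    """Total distance travelled by the centre of a tracked bounding box
    (x1, y1, x2, y2) across consecutive frames. Smoother, shorter paths
    are commonly associated with more economical expert tool handling."""
    boxes = np.asarray(bounding_boxes, float)
    centres = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    steps = np.diff(centres, axis=0)          # frame-to-frame displacements
    return float(np.linalg.norm(steps, axis=1).sum())
```

Comparable per-trial metrics for the ultrasound probe and the needle would then be aggregated and compared between the novice and expert groups.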
Citations: 0
PlutoNet: An efficient polyp segmentation network with modified partial decoder and decoder consistency training
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-13 DOI: 10.1049/htl2.12105
Tugberk Erol, Duygu Sarikaya
Abstract: Deep learning models are used to minimize the number of polyps that go unnoticed by experts and to accurately segment detected polyps during interventions. Although state-of-the-art models have been proposed, it remains a challenge to define representations that generalize well and that mediate between capturing low-level features and higher-level semantic details without being redundant. Another challenge is that these models are computation- and memory-intensive, which can be a problem for real-time applications. To address these problems, PlutoNet is proposed for polyp segmentation; it requires only 9 FLOPs and 2,626,537 parameters, less than 10% of the parameters required by its counterparts. With PlutoNet, a novel decoder consistency training approach is proposed that consists of a shared encoder; the modified partial decoder, a combination of the partial decoder and full-scale connections that captures salient features at different scales without redundancy; and an auxiliary decoder that focuses on higher-level semantic features. The modified partial decoder and the auxiliary decoder are trained with a combined loss to enforce consistency, which helps strengthen the learned representations. Ablation studies and experiments show that PlutoNet performs significantly better than state-of-the-art models, particularly on unseen datasets. Healthcare Technology Letters, 11(6), 365-373. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665777/pdf/
Citations: 0
Neural fields for 3D tracking of anatomy and surgical instruments in monocular laparoscopic video clips
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-12 DOI: 10.1049/htl2.12113
Beerend G. A. Gerats, Jelmer M. Wolterink, Seb P. Mol, Ivo A. M. J. Broeders
Abstract: Laparoscopic video tracking primarily focuses on two target types: surgical instruments and anatomy. The former can be used for skill assessment, while the latter is necessary for the projection of virtual overlays. Whereas instrument and anatomy tracking have often been considered two separate problems, in this article a method is proposed for jointly tracking all structures simultaneously. Based on a single 2D monocular video clip, a neural field is trained to represent a continuous spatiotemporal scene, which is used to create 3D tracks of all surfaces visible in at least one frame. Because of their small size, instruments generally cover only a small part of the image, resulting in decreased tracking accuracy; therefore, enhanced class weighting is proposed to improve the instrument tracks. The authors evaluate tracking on video clips from laparoscopic cholecystectomies, finding mean tracking accuracies of 92.4% for anatomical structures and 87.4% for instruments. Additionally, the quality of depth maps obtained from the method's scene reconstructions is assessed; these pseudo-depths are shown to be of comparable quality to a state-of-the-art pre-trained depth estimator. On laparoscopic videos in the SCARED dataset, the method predicts depth with an MAE of 2.9 mm and a relative error of 9.2%. These results show the feasibility of using neural fields for monocular 3D reconstruction of laparoscopic scenes. Code is available via GitHub: https://github.com/Beerend/Surgical-OmniMotion. Healthcare Technology Letters, 11(6), 411-417. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665779/pdf/
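The depth figures quoted above (MAE and relative error) can be computed from predicted and ground-truth depth maps as follows. Masking out zero ground-truth pixels is an illustrative assumption about how invalid depths are handled, not a detail taken from the paper.

```python
import numpy as np

def depth_errors(pred_depth, gt_depth):
    """Mean absolute error (in the depth maps' unit, e.g. mm) and mean
    relative error between predicted and ground-truth depth maps.
    Pixels with non-positive ground truth are treated as invalid."""
    pred = np.asarray(pred_depth, float)
    gt = np.asarray(gt_depth, float)
    valid = gt > 0                     # ignore pixels without ground truth
    abs_err = np.abs(pred[valid] - gt[valid])
    mae = abs_err.mean()
    rel = (abs_err / gt[valid]).mean()
    return float(mae), float(rel)
```

On SCARED-style data the ground truth would come from structured-light depth maps; the 2.9 mm / 9.2% figures above are these two quantities averaged over the evaluation frames.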
Citations: 0
Design, development and evaluation of registry software for upper limb disabilities
IF 2.8
Healthcare Technology Letters Pub Date : 2024-12-12 DOI: 10.1049/htl2.12115
Khadijeh Moulaei, Abbas Sheikhtaheri, AliAkbar Haghdoost, Mansour Shahabi Nezhad, Kambiz Bahaadinbeigy
Abstract: Upper limb disabilities, if not managed, controlled, and treated, significantly affect physical and mental condition, daily activities, and quality of life. Registries can help control, manage, and even treat these disabilities by collecting clinical and management data on upper limb disabilities. The aim of this study was therefore to design, develop, and evaluate the usability of a registry system for upper limb disabilities. Using data elements identified in an exploratory phase, the registry software was developed in the hypertext preprocessor (PHP) programming language with XAMPP software, version 8.1.10. The content and interface validity of the pre-final version were assessed by 13 experts in medical informatics and health information management. The registry can create user profiles and record patient history, clinical records, independence in daily activities, mental health, and treatment processes; it can also generate statistical reports. Participants rated the registry's usability as "good" across different dimensions. The registry can help in understanding upper limb disabilities, improving care, reducing costs and errors, determining incidence and prevalence, evaluating prevention and treatment, and supporting research and policymaking. It can serve as a model for designing registries for other bodily disabilities. Healthcare Technology Letters, 11(6), 496-503. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665789/pdf/
Citations: 0