Computers in biology and medicine: Latest articles

Joint high-resolution feature learning and vessel-shape aware convolutions for efficient vessel segmentation
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-19 | DOI: 10.1016/j.compbiomed.2025.109982 | Volume 191, Article 109982
Xiang Zhang, Qiang Zhu, Tao Hu, Song Guo, Genqing Bian, Wei Dong, Rao Hong, Xia Ling Lin, Peng Wu, Meili Zhou, Qingsen Yan, Ghulam Mohi-ud-din, Chen Ai, Zhou Li
Abstract: Clear imaging of retinal vessels, with their complex hierarchical topology and dense capillary networks, provides critical evidence for the diagnosis and evaluation of several diseases. In this work, we propose a new topology- and shape-aware model, the Multi-branch Vessel-shaped Convolution Network (MVCN), which adaptively learns high-resolution representations from retinal vessel imagery and thereby captures high-quality topology and shape information. Our pipeline involves two steps. The first is a Multiple High-resolution Ensemble Module (MHEM) that enhances the high-resolution characteristics of retinal vessel imagery by fusing its scale-invariant hierarchical topology. The second is a novel vessel-shaped convolution that captures retinal vessel topology so that it emerges from unrelated fundus structures. Moreover, rather than manually splitting the raw labels into definitive and uncertain vessels, MVCN separates this topology from the fundus by dynamically generating multiple sub-labels based on epistemic uncertainty. Compared with existing methods, our method achieves state-of-the-art AUC values of 98.31%, 98.80%, 98.83%, and 98.65%, and state-of-the-art accuracies of 95.83%, 96.82%, 97.09%, and 96.66% on the DRIVE, CHASE_DB1, STARE, and HRF datasets. We also evaluate skeletal similarity using correctness, completeness, and quality metrics; on these metrics our method's scores double those of previous methods, demonstrating its effectiveness.
Citations: 0
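
A minimal sketch (not the authors' code) of the epistemic-uncertainty sub-label idea described above: estimate per-pixel uncertainty and split the manual vessel mask into definitive and uncertain sub-labels. The use of Monte Carlo dropout, the number of samples, the stand-in network, and the variance threshold are all assumptions made here for illustration.

import torch

@torch.no_grad()
def mc_dropout_uncertainty(model, image, n_samples=10):
    """Per-pixel epistemic uncertainty as the variance of Monte Carlo dropout predictions."""
    model.train()  # keep dropout layers active at inference time
    probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)

def split_sublabels(raw_label, uncertainty, threshold=0.05):
    """Split the manual vessel mask into 'definitive' and 'uncertain' vessel sub-labels."""
    uncertain = (raw_label > 0) & (uncertainty > threshold)
    definitive = (raw_label > 0) & ~uncertain
    return definitive.float(), uncertain.float()

# toy usage with a stand-in segmentation network that contains dropout
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Dropout2d(0.2), torch.nn.Conv2d(8, 1, 1))
image = torch.rand(1, 3, 64, 64)
raw_label = (torch.rand(1, 1, 64, 64) > 0.5).float()
_, variance = mc_dropout_uncertainty(model, image)
definitive, uncertain = split_sublabels(raw_label, variance)
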
Enhancing robustness and generalization in microbiological few-shot detection through synthetic data generation and contrastive learning
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-19 | DOI: 10.1016/j.compbiomed.2025.110141 | Volume 191, Article 110141
Nikolas Ebert, Didier Stricker, Oliver Wasenmüller
Abstract: In many medical and pharmaceutical processes, continuous hygiene monitoring is crucial, often involving the manual detection of microorganisms in agar dishes by qualified personnel. Although deep learning methods hold promise for automating this task, they frequently encounter a shortage of sufficient training data, a prevalent challenge in colony detection. To overcome this limitation, we propose a novel pipeline that combines generative data augmentation with few-shot detection. Our approach aims to significantly enhance detection performance, even with very limited training data. A main component of our method is a diffusion-based generator model that inpaints synthetic bacterial colonies onto real agar-plate backgrounds. This data augmentation technique enhances the diversity of the training data, allowing for effective model training with only 25 real images. Our method outperforms common training techniques, demonstrating a +0.45 mAP improvement compared to training from scratch and a +0.15 mAP advantage over the current state of the art in synthetic data augmentation. Additionally, we integrate a decoupled feature classification strategy, where class-agnostic detection is followed by lightweight classification via a feed-forward network, making it possible to detect and classify colonies with minimal examples. This approach achieves an AP50 score of 0.7 in a few-shot scenario on the AGAR dataset. Our method also demonstrates robustness to various image corruptions, such as noise and blur, proving its applicability in real-world scenarios. By reducing the need for large labeled datasets, our pipeline offers a scalable, efficient solution for colony detection in hygiene monitoring and biomedical research, with potential for broader applications in fields where rapid detection of new colony types is required.
Citations: 0
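
To illustrate the inpainting-based augmentation idea in rough terms, the sketch below pastes synthetic colonies onto a real agar-plate background with an off-the-shelf diffusion inpainting pipeline from the diffusers library. The paper trains its own generator; the checkpoint name, prompt, and circular masks used here are placeholders rather than the authors' setup, and a GPU is assumed.

import random
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline

# off-the-shelf inpainting model used purely as a stand-in generator
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

def augment_plate(background: Image.Image, n_colonies: int = 5) -> Image.Image:
    """Inpaint n random circular regions of a real agar-plate image as synthetic colonies."""
    background = background.convert("RGB").resize((512, 512))
    mask = Image.new("L", background.size, 0)
    draw = ImageDraw.Draw(mask)
    for _ in range(n_colonies):
        x, y, r = random.randint(40, 472), random.randint(40, 472), random.randint(8, 25)
        draw.ellipse((x - r, y - r, x + r, y + r), fill=255)  # white = region to inpaint
    return pipe(
        prompt="small round bacterial colonies growing on an agar plate",
        image=background, mask_image=mask,
    ).images[0]
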
Reconstructed scar morphology in patient-specific computational heart models has limited impact on the identification of ablation targets through in-silico pace mapping
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-19 | DOI: 10.1016/j.compbiomed.2025.110229 | Volume 191, Article 110229
Fernando O. Campos, Pranav Bhagirath, Sofia Monaci, Zhong Chen, John Whitaker, Gernot Plank, Christopher Aldo Rinaldi, Martin J. Bishop
Abstract:
Background: Patient-specific computational modeling for guiding ventricular tachycardia (VT) ablation often requires precise scar reconstruction to simulate reentrant circuits. However, this can be limited by the quality of scar imaging data. In-silico pace mapping, which simulates pacing rather than VT circuits, may offer a more robust approach to identifying ablation targets.
Objective: To investigate how the anatomical detail of scar reconstructions within computational image-based heart models influences the ability of in-silico pace mapping to identify VT origins.
Methods: VT was simulated in 15 patient-specific models reconstructed from high-resolution contrast-enhanced cardiac magnetic resonance (CMR). The obtained scar anatomy was then altered to mimic heart models constructed from low-quality imaging and from data without scar information. The ECG of each simulated VT was taken as input for the in-silico pace mapping approach, which involved pacing the heart at 1000 random sites surrounding the infarct. Correlations between the VT and paced ECGs were used to compute pace maps. The distance (d) between visually identified exit sites (ground truth) and the pacing locations with the strongest correlation was used to assess the accuracy of our in-silico approach.
Results: The performance of in-silico pace mapping was highest in high-resolution scar models (d = 7.3 ± 7.0 mm), but low-resolution and no-scar models still adequately located exit sites (d = 8.5 ± 6.5 mm and 13.3 ± 12.2 mm, respectively).
Conclusion: In-silico pace mapping provides a reliable method for identifying VT ablation targets, showing relative insensitivity to scar reconstruction quality. This advantage may support its clinical translation over methods requiring explicit VT simulation.
Citations: 0
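
The pace-mapping step, correlating a simulated VT ECG against ECGs paced from many candidate sites and taking the best-correlated site as the predicted exit, can be sketched in a few lines of NumPy. The array shapes, the plain Pearson correlation averaged over leads, and the random toy data are assumptions made here for illustration only.

import numpy as np

def pace_map_scores(vt_ecg, paced_ecgs):
    """vt_ecg: (leads, samples); paced_ecgs: (n_sites, leads, samples) -> (n_sites,) scores."""
    scores = np.empty(len(paced_ecgs))
    for i, ecg in enumerate(paced_ecgs):
        # Pearson correlation per lead, averaged over all leads
        r = [np.corrcoef(vt_ecg[lead], ecg[lead])[0, 1] for lead in range(vt_ecg.shape[0])]
        scores[i] = np.mean(r)
    return scores

def predicted_exit_error(scores, site_xyz, true_exit_xyz):
    """Distance (mm) between the best-correlated pacing site and the true exit site."""
    best = np.argmax(scores)
    return float(np.linalg.norm(site_xyz[best] - true_exit_xyz))

# toy usage with random data standing in for simulated 12-lead ECGs and pacing sites
rng = np.random.default_rng(0)
vt = rng.standard_normal((12, 500))
paced = rng.standard_normal((1000, 12, 500))
sites = rng.uniform(0, 80, size=(1000, 3))
scores = pace_map_scores(vt, paced)
print(predicted_exit_error(scores, sites, true_exit_xyz=np.array([40.0, 40.0, 40.0])))
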
Bilateral deformable attention transformer for screening of high myopia using optical coherence tomography
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-19 | DOI: 10.1016/j.compbiomed.2025.110236 | Volume 191, Article 110236
Ruoxuan Gou, Xiao Ma, Na Su, Songtao Yuan, Qiang Chen
Abstract: Myopia is a visual impairment caused by excessive refractive power of the cornea or lens or by elongation of the eyeball. Because high myopia is associated with several classification criteria, such as spherical equivalent (SE) and axial length (AL), existing methods primarily rely on a single criterion for model design. In this paper, to make comprehensive use of multiple indicators, we design a multi-label classification model for high myopia. Moreover, image data play a pivotal role in studying high myopia and pathological myopia. Notable features of high myopia, including increased retinal curvature, choroidal thinning, and scleral shadowing, are observable in Optical Coherence Tomography (OCT) images of the retina. We propose a model named Bilateral Deformable Attention Transformer (BDA-Tran) for multi-label screening of high myopia in OCT data. Building on the vision transformer, we introduce a bilateral deformable attention mechanism (BDA) in which the queries in self-attention are composed of both global queries and data-dependent queries from the left and right sides. This flexible approach allows attention to focus on relevant regions and capture more myopia-related features, concentrating attention primarily on regions related to the choroid and sclera, among other areas associated with high myopia. BDA-Tran is trained and tested on OCT images of 243 patients, achieving accuracies of 83.1% and 87.7% for SE and AL, respectively. Furthermore, we visualize attention maps to provide transparent and interpretable judgments. Experimental results demonstrate that BDA-Tran outperforms existing methods in terms of effectiveness and reliability under the same experimental conditions.
Citations: 0
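
The multi-label formulation (joint SE- and AL-based screening from one shared encoder) can be sketched as below. This shows only the two-output head and the BCE training signal, not the bilateral deformable attention itself, and the tiny CNN encoder is a stand-in for the transformer backbone described in the paper.

import torch
import torch.nn as nn

class MultiLabelHighMyopia(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                  # stand-in feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(feat_dim, 2)             # logits: [high myopia by SE, high myopia by AL]

    def forward(self, x):
        return self.head(self.encoder(x))

model = MultiLabelHighMyopia()
oct_batch = torch.rand(4, 1, 224, 224)                 # toy OCT B-scans
targets = torch.tensor([[1., 0.], [1., 1.], [0., 0.], [0., 1.]])
loss = nn.BCEWithLogitsLoss()(model(oct_batch), targets)
loss.backward()
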
Automated engineered-stone silicosis screening and staging using deep learning with X-rays
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-18 | DOI: 10.1016/j.compbiomed.2025.110153 | Volume 191, Article 110153
Blanca Priego-Torres, Daniel Sanchez-Morillo, Ebrahim Khalili, Miguel Ángel Conde-Sánchez, Andrés García-Gámez, Antonio León-Jiménez
Abstract: Silicosis, a debilitating occupational lung disease caused by inhaling crystalline silica, continues to be a significant global health issue, especially with the increasing use of engineered stone (ES) surfaces containing high silica content. Traditional diagnostic methods, dependent on radiological interpretation, have low sensitivity, especially in the early stages of the disease, and show variability between evaluators. This study explores the efficacy of deep learning techniques in automating the screening and staging of silicosis using chest X-ray images. Utilizing a comprehensive dataset obtained from the medical records of a cohort of workers exposed to artificial quartz conglomerates, we implemented a preprocessing stage for rib-cage segmentation, followed by classification using state-of-the-art deep learning models. The segmentation model exhibited high precision, ensuring accurate identification of thoracic structures. In the screening phase, our models achieved near-perfect accuracy, with ROC AUC values reaching 1.0, effectively distinguishing between healthy individuals and those with silicosis. The models demonstrated remarkable precision in staging the disease. Nevertheless, differentiating between simple silicosis and progressive massive fibrosis, the evolved and complicated form of the disease, presented certain difficulties, especially during the transitional period, when assessment can be significantly subjective. Notwithstanding these difficulties, the models achieved an accuracy of around 81% and ROC AUC scores nearing 0.93. This study highlights the potential of deep learning to generate clinical decision support tools that increase the accuracy and effectiveness of the diagnosis and staging of silicosis; early detection would allow patients to be moved away from all sources of occupational exposure, constituting a substantial advancement in occupational health diagnostics.
Citations: 0
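
A hedged sketch of the two-stage pipeline described above: predict a rib-cage mask, crop the chest X-ray to it, then classify the cropped region. The bounding-box cropping strategy and the untrained stand-in networks are assumptions for illustration, not the paper's architecture.

import numpy as np
import torch

def crop_to_mask(xray, mask, margin=10):
    """Crop the X-ray to the bounding box of the predicted thorax mask (plus a margin)."""
    ys, xs = np.where(mask > 0)
    if ys.size == 0:                      # fall back to the full image if the mask is empty
        return xray
    y0, y1 = max(int(ys.min()) - margin, 0), min(int(ys.max()) + margin, xray.shape[0])
    x0, x1 = max(int(xs.min()) - margin, 0), min(int(xs.max()) + margin, xray.shape[1])
    return xray[y0:y1, x0:x1]

def screen(xray, seg_model, cls_model):
    """Return a silicosis probability for one X-ray given any segmentation and classification nets."""
    with torch.no_grad():
        t = torch.from_numpy(xray).float()[None, None]            # shape (1, 1, H, W)
        mask = (torch.sigmoid(seg_model(t))[0, 0] > 0.5).numpy()
        roi = torch.from_numpy(crop_to_mask(xray, mask)).float()[None, None]
        return torch.sigmoid(cls_model(roi)).item()

# toy usage with untrained stand-in networks
seg = torch.nn.Conv2d(1, 1, 3, padding=1)
cls = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 1))
print(screen(np.random.rand(256, 256).astype(np.float32), seg, cls))
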
BUS-M2AE: Multi-scale Masked Autoencoder for Breast Ultrasound Image Analysis
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-18 | DOI: 10.1016/j.compbiomed.2025.110159 | Volume 191, Article 110159
Le Yu, Bo Gou, Xun Xia, Yujia Yang, Zhang Yi, Xiangde Min, Tao He
Abstract: The Masked AutoEncoder (MAE) has demonstrated significant potential in medical image analysis by reducing the cost of manual annotation. However, MAE and its recent variants are not well developed for ultrasound images in breast cancer diagnosis, as they struggle to generalize to the task of distinguishing ultrasound breast tumors of varying sizes. This limitation hinders the model's ability to adapt to the diverse morphological characteristics of breast tumors. In this paper, we propose a novel Breast UltraSound Multi-scale Masked AutoEncoder (BUS-M2AE) model to address the limitations of the general MAE. BUS-M2AE incorporates multi-scale masking methods at both the token level, during the image patching stage, and the feature level, during the feature learning stage. These two multi-scale masking methods enable flexible strategies to match the explicit masked patches and the implicit features with varying tumor scales. By introducing these multi-scale masking methods in the image patching and feature learning phases, BUS-M2AE allows the pre-trained vision transformer to adaptively perceive and accurately distinguish breast tumors of different sizes, thereby improving the model's overall performance in handling diverse tumor morphologies. Comprehensive experiments demonstrate that BUS-M2AE outperforms recent MAE variants and commonly used supervised learning methods in breast cancer classification and tumor segmentation tasks.
Citations: 0
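
The token-level multi-scale masking idea can be illustrated with a small mask generator: instead of hiding independent single patches, square blocks of randomly chosen sizes are hidden so that both small and large tumour-scale regions end up masked. The block sizes and mask ratio below are assumptions, not values from the paper.

import numpy as np

def multiscale_token_mask(grid=14, mask_ratio=0.6, block_sizes=(1, 2, 4), seed=None):
    """Return a (grid*grid,) boolean array over ViT patch tokens; True marks a masked token."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((grid, grid), dtype=bool)
    target = int(mask_ratio * grid * grid)
    while mask.sum() < target:
        s = int(rng.choice(block_sizes))                 # pick a block scale
        y = int(rng.integers(0, grid - s + 1))
        x = int(rng.integers(0, grid - s + 1))
        mask[y:y + s, x:x + s] = True                    # hide an s-by-s block of tokens
    return mask.reshape(-1)

m = multiscale_token_mask(seed=0)
print(m.sum(), "of", m.size, "tokens masked")
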
Towards fast and reliable estimations of 3D pressure, velocity and wall shear stress in aortic blood flow: CFD-based machine learning approach
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-18 | DOI: 10.1016/j.compbiomed.2025.110137 | Volume 191, Article 110137
Daiqi Lin, Saša Kenjereš
Abstract: In this work, we developed deep neural networks for fast and comprehensive estimation of the most salient features of aortic blood flow, including velocity magnitude and direction, 3D pressure, and wall shear stress. Starting from 40 subject-specific aortic geometries obtained from 4D Flow MRI, we applied statistical shape modeling to generate 1,000 synthetic aorta geometries. Complete computational fluid dynamics (CFD) simulations of these geometries were performed to obtain ground-truth values. We then trained deep neural networks for each characteristic flow feature using 900 randomly selected aorta geometries. Testing on the remaining 100 geometries resulted in average errors of 3.11% for velocity and 4.48% for pressure. For wall shear stress predictions, we applied two approaches: (i) deriving it directly from the neural-network-predicted velocity, and (ii) predicting it with a separate neural network. Both approaches yielded similar accuracy, with average errors of 4.8% and 4.7%, respectively, compared to complete 3D CFD results. We recommend the second approach for potential clinical use due to its significantly simplified workflow. In conclusion, this proof-of-concept analysis demonstrates the numerical robustness, rapid calculation speed (on the order of seconds or less), and good accuracy of the CFD-based machine learning approach in predicting velocity, pressure, and wall shear stress distributions in subject-specific aortic flows.
Citations: 0
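
A minimal sketch of the second (recommended) approach for wall shear stress: a standalone network that regresses WSS directly from per-point descriptors, trained against CFD ground truth. The input features, network size, and toy training loop are illustrative assumptions rather than the authors' configuration.

import torch
import torch.nn as nn

class WSSRegressor(nn.Module):
    def __init__(self, n_features=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1))                          # predicted WSS magnitude (Pa)

    def forward(self, x):
        return self.net(x)

model = WSSRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
feats = torch.rand(1024, 8)                             # stand-in geometric descriptors per wall point
wss_cfd = torch.rand(1024, 1)                           # stand-in CFD ground-truth WSS values
for _ in range(100):                                    # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(feats), wss_cfd)
    loss.backward()
    opt.step()
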
EffiViT: Hybrid CNN-Transformer for Retinal Imaging
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-17 | DOI: 10.1016/j.compbiomed.2025.110164 | Volume 191, Article 110164
Rajatha, D.V. Ashoka
Abstract: The human eye is a vital sensory organ crucial for visual perception. The retina is its main component and is responsible for visual signals; because of these characteristics, the retina can reveal the occurrence of ocular diseases. Early detection and automated diagnosis of retinal disease are therefore crucial for preventing both temporary and permanent blindness. In the proposed work, a comprehensive framework is introduced that leverages the synergistic strengths of EfficientNet-B4 and Vision Transformers for attention-driven analysis, offering a promising tool for advanced ophthalmic healthcare. The framework goes beyond conventional hybridization by using EfficientNet-B4 as a multiscale feature encoder, creating discriminative feature maps that preserve both local and intermediate contextual information. Vision Transformers are then incorporated to capitalize on attention mechanisms and effectively model global dependencies. This combination establishes a sophisticated paradigm for capturing intricate patterns and focusing on the pertinent regions of the image, enabling precise and reliable classification. The proposed model achieved an AUC of 0.9466, an mAP of 0.7865, an F1-score of 0.75, and a model score of 0.8665. The framework achieved a 5.17% increase in overall score compared with previous cutting-edge approaches on the same task. This improvement underscores the effectiveness of the hybrid model in identifying both local and global contextual information, making it a robust and reliable tool for precise retinal diagnosis.
Citations: 0
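
The hybrid design can be sketched by wiring an EfficientNet-B4 feature extractor (via the timm package) into a plain transformer encoder that models global dependencies over the resulting tokens. The token dimension, pooling, head size, and number of classes below are assumptions; the paper's exact fusion scheme is not reproduced here.

import timm
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    def __init__(self, n_classes=5, dim=256):
        super().__init__()
        self.cnn = timm.create_model("efficientnet_b4", pretrained=False, features_only=True)
        c_last = self.cnn.feature_info.channels()[-1]           # channels of the deepest feature map
        self.proj = nn.Conv2d(c_last, dim, kernel_size=1)       # map CNN features to the token dimension
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        f = self.proj(self.cnn(x)[-1])                          # (B, dim, H', W')
        tokens = f.flatten(2).transpose(1, 2)                   # (B, H'*W', dim)
        return self.head(self.transformer(tokens).mean(dim=1))  # pool tokens, then classify

logits = HybridCNNTransformer()(torch.rand(2, 3, 224, 224))
print(logits.shape)                                             # torch.Size([2, 5])
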
Attention in surgical phase recognition for endoscopic pituitary surgery: Insights from real-world data
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-17 | DOI: 10.1016/j.compbiomed.2025.110222 | Volume 191, Article 110222
Ángela González-Cebrián, Sara Bordonaba, Javier Pascau, Igor Paredes, Alfonso Lagares, Paula de Toledo
Abstract:
Background and objective: Surgical phase recognition systems are used to support the automated documentation of a procedure and to provide the surgical team with real-time feedback, potentially improving surgical outcomes and reducing adverse events. The objective of this work is to develop a model for endoscopic pituitary surgery, a procedure that is challenging for phase recognition due to the high variability in the order of surgical phases.
Methods: A dataset of 69 endoscopic pituitary surgery videos was collected and labelled by two surgeons into seven different phases. The proposed architecture comprises a convolutional neural network to identify spatial features in individual frames, and a Segment Attentive Hierarchical Consistency Network (which combines temporal convolutional networks with attention mechanisms) to learn temporal relationships between frames and segments at different temporal scales. Finally, predictions are refined with an adaptive mode window.
Results: We have built and made publicly available the largest pituitary endoscopic surgery database to date, named PituPhase. We have built a model with 73% accuracy (75% using a 10-s relaxed boundary). This result is comparable to other state-of-the-art methods in this surgical domain despite the challenges of the dataset (only 10% of the videos are complete and only 3% present all phases in the same order, versus 90% and 50%, respectively, in other studies).
Conclusions: Attention mechanisms, in combination with temporal convolutional networks and adaptive mode windows, improve the performance of surgical phase recognition systems and are robust to missing video sections and high variability in phase order.
Citations: 0
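
The mode-window refinement mentioned in the Methods can be illustrated with a sliding majority-vote filter over the per-frame phase predictions. The paper's window is adaptive; the fixed window length used here is a simplification for illustration.

from collections import Counter

def mode_window_smoothing(phases, window=25):
    """phases: list of per-frame phase labels; returns the sequence smoothed by a sliding mode filter."""
    half = window // 2
    smoothed = []
    for i in range(len(phases)):
        segment = phases[max(0, i - half): i + half + 1]
        smoothed.append(Counter(segment).most_common(1)[0][0])  # most frequent label in the window wins
    return smoothed

# toy usage: short spurious phase switches are removed by the filter
raw = [0] * 30 + [1] * 3 + [0] * 10 + [1] * 40 + [2] * 50
print(mode_window_smoothing(raw)[:50])
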
Performance of atrial conduction velocity algorithms with error-prone clinical measurements for the identification of atrial fibrosis
IF 7.0 | CAS Tier 2 | Medicine
Computers in biology and medicine | Pub Date: 2025-04-17 | DOI: 10.1016/j.compbiomed.2025.110119 | Volume 191, Article 110119
Ali Gharaviri, Vinush Vigneswaran, Keeran Vickneson, Caroline Roney, Cesare Corrado, Sam Coveney, Kestutis Maciunas, Neil Bodagh, Magda Klis, Irum Kotadia, Iain Sim, John Whitaker, Martin Bishop, Steven Niederer, Mark O'Neill, Steven E. Williams
Abstract:
Introduction: Conduction slowing is a direct consequence of fibrosis, so measuring conduction velocity may provide a better method for localising fibrotic regions. This study aims to assess established cardiac conduction velocity calculation methods (Triangulation, Polynomial Surface Fitting, and Radial Basis Function) in identifying areas of conduction slowing caused by fibrosis, taking realistic measurement errors into account.
Method: Using a human left atrium computational model, atrial activation was simulated. Each conduction velocity calculation method's performance was evaluated under uncertainties in mapping point density, local activation time assignment, and electrode locations by comparing the calculated conduction velocity with the ground-truth conduction velocity derived from high-resolution simulated atrial activation.
Results: All methods agreed well with the ground-truth conduction velocity maps under noise-free, high-density sampling conditions. However, the Triangulation and Polynomial Surface Fitting methods were susceptible to noise, exhibiting significant errors under moderate to high noise levels. The Radial Basis Function method demonstrated greater robustness to noise and reduced sampling density. Fibrotic-region identification accuracy was high under ideal conditions for all methods but declined with increasing noise, with the Radial Basis Function method maintaining superior performance.
Conclusion: While all methods accurately estimate conduction velocity under ideal conditions, the Radial Basis Function method is robust to realistic clinical noise, making it the most suitable for identifying fibrotic regions.
Citations: 0
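
The Radial Basis Function approach can be sketched in 2-D with SciPy: fit the local activation times (LAT) measured at scattered electrodes with an RBF interpolant, then take conduction velocity as the inverse of the LAT gradient magnitude. The kernel, smoothing value, synthetic planar wave, and finite-difference gradient below are illustrative choices, not the study's settings.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
electrodes = rng.uniform(0, 40, size=(200, 2))                      # electrode positions (mm)
lat = electrodes @ np.array([0.8, 0.3]) + rng.normal(0, 0.5, 200)   # noisy LAT (ms) of a planar wave

# smooth RBF fit of LAT over the mapped region
rbf = RBFInterpolator(electrodes, lat, kernel="thin_plate_spline", smoothing=1.0)

xs, ys = np.meshgrid(np.linspace(1, 39, 80), np.linspace(1, 39, 80))
grid = np.column_stack([xs.ravel(), ys.ravel()])
lat_grid = rbf(grid).reshape(xs.shape)

gy, gx = np.gradient(lat_grid, ys[:, 0], xs[0, :])                  # dLAT/dy, dLAT/dx (ms/mm)
cv = 1.0 / np.maximum(np.hypot(gx, gy), 1e-6)                       # conduction velocity (mm/ms)
print("median CV:", np.median(cv))                                  # roughly 1.17 mm/ms for this toy wave
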