Computers in Biology and Medicine: Latest Articles

Using 3D point cloud and graph-based neural networks to improve the estimation of pulmonary function tests from chest CT
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-27 | DOI: 10.1016/j.compbiomed.2024.109192
Abstract: Pulmonary function tests (PFTs) are important clinical metrics to measure the severity of interstitial lung disease for systemic sclerosis patients. However, PFTs cannot always be performed by spirometry if there is a risk of disease transmission or other contraindications. In addition, it is unclear how lung function is affected by changes in lung vessels. Therefore, convolutional neural networks (CNNs) were previously proposed to estimate PFTs from chest CT scans (CNN-CT) and extracted vessels (CNN-Vessel). Due to GPU memory constraints, however, these networks used down-sampled images, which causes a loss of information on small vessels. Previous literature has indicated that detailed vessel information from CT scans can be helpful for PFT estimation. Therefore, this paper proposes to use a point cloud neural network (PNN-Vessel) and a graph neural network (GNN-Vessel) to estimate PFTs from point cloud and graph-based representations of pulmonary vessel centerlines, respectively. After that, we combine different networks and perform multiple variable step-wise regression analysis to explore whether vessel-based networks can contribute to PFT estimation, in addition to CNN-CT. Results showed that both PNN-Vessel and GNN-Vessel outperformed CNN-Vessel, by 14% and 4%, respectively, when averaged across the intra-class correlation coefficient (ICC) scores of four PFT metrics. In addition, compared to CNN-Vessel, PNN-Vessel used 30% of the training time (1.1 h) and 7% of the parameters (2.1 M), and GNN-Vessel used only 7% of the training time (0.25 h) and 0.7% of the parameters (0.2 M). We combined CNN-CT, PNN-Vessel and GNN-Vessel with the weights obtained from multiple variable regression methods, which achieved the best PFT estimation accuracy (ICC of 0.748, 0.742, 0.836 and 0.835 for the four PFT measures, respectively). The results verified that more detailed vessel information could provide further explanation for PFT estimation from anatomical imaging.
Citations: 0
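The abstract above combines the outputs of CNN-CT, PNN-Vessel and GNN-Vessel with weights obtained from a multiple variable regression. Below is a minimal sketch of that combination step as a simple linear stacking of per-model predictions; the arrays are synthetic stand-ins and the paper's actual step-wise selection procedure is not reproduced.

```python
# Blend per-model PFT predictions with weights from a multiple variable linear regression.
# All data below are synthetic; variable names (pred_cnn_ct, etc.) are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects = 120

# Hypothetical predictions of one PFT metric from the three networks.
true_pft = rng.normal(80, 15, n_subjects)
pred_cnn_ct = true_pft + rng.normal(0, 8, n_subjects)
pred_pnn_vessel = true_pft + rng.normal(0, 10, n_subjects)
pred_gnn_vessel = true_pft + rng.normal(0, 11, n_subjects)

X = np.column_stack([pred_cnn_ct, pred_pnn_vessel, pred_gnn_vessel])
X_tr, X_te, y_tr, y_te = train_test_split(X, true_pft, random_state=0)

# Fit combination weights on one split, then blend the three model outputs on the other.
combiner = LinearRegression().fit(X_tr, y_tr)
blended = combiner.predict(X_te)
print("combination weights:", combiner.coef_, "intercept:", combiner.intercept_)
print("R^2 of blended estimate:", combiner.score(X_te, y_te))
```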
Wfold: A new method for predicting RNA secondary structure with deep learning
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-27 | DOI: 10.1016/j.compbiomed.2024.109207
Abstract: Precise estimations of RNA secondary structures have the potential to reveal the various roles that non-coding RNAs play in regulating cellular activity. However, traditional RNA secondary structure prediction methods rely mainly on thermodynamic models via free energy minimization, a laborious process that requires substantial prior knowledge. Here, RNA secondary structure prediction using Wfold, an end-to-end deep learning-based approach, is suggested. Wfold is trained directly on annotated data and base-pairing criteria. It makes use of an image-like representation of RNA sequences, which an enhanced U-net incorporated with a transformer encoder can process effectively. Wfold increases the accuracy of RNA secondary structure prediction by combining the self-attention mechanism's mining of long-range information with the U-net's ability to gather local information. We compare Wfold's performance using RNA datasets that are within and across families. When trained and evaluated on different RNA families, it achieves performance similar to traditional methods, but it dramatically outperforms state-of-the-art methods on within-family datasets. Moreover, Wfold can also reliably forecast pseudoknots. The findings imply that Wfold may be useful for improving sequence alignment, functional annotations, and RNA structure modeling.
Citations: 0
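The abstract refers to an "image-like representation of RNA sequences" that a 2D U-net-style encoder can process. The sketch below shows one common way to build such a representation, an outer concatenation of one-hot base encodings; the exact encoding Wfold uses is not specified in the abstract, so this is an assumption for illustration only.

```python
# Build an (8, L, L) "image-like" pairwise tensor from an RNA sequence.
import numpy as np

BASES = "ACGU"

def one_hot(seq: str) -> np.ndarray:
    """(L, 4) one-hot encoding of an RNA sequence."""
    idx = np.array([BASES.index(b) for b in seq])
    out = np.zeros((len(seq), 4), dtype=np.float32)
    out[np.arange(len(seq)), idx] = 1.0
    return out

def pairwise_image(seq: str) -> np.ndarray:
    """Channels 0..3 carry the identity of base i, channels 4..7 the identity of base j."""
    h = one_hot(seq)                              # (L, 4)
    L = len(seq)
    row = np.repeat(h[:, None, :], L, axis=1)     # (L, L, 4), base at position i
    col = np.repeat(h[None, :, :], L, axis=0)     # (L, L, 4), base at position j
    return np.concatenate([row, col], axis=-1).transpose(2, 0, 1)

img = pairwise_image("GGGAAACCC")
print(img.shape)   # (8, 9, 9), ready for a 2D U-net-style encoder
```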
Robust and smooth Couinaud segmentation via anatomical structure-guided point-voxel network
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-27 | DOI: 10.1016/j.compbiomed.2024.109202
Abstract: Precise Couinaud segmentation from preoperative liver computed tomography (CT) is crucial for surgical planning and lesion examination. However, this task is challenging, as it is defined based on vessel structures and there is no intensity contrast between adjacent Couinaud segments in CT images. To solve this challenge, we design a multi-scale point-voxel fusion framework, which can more effectively model the spatial relationship of points and the semantic information of the image, producing robust and smooth Couinaud segmentations. Specifically, we first segment the liver and vessels from the CT image and generate 3D liver point clouds and voxel grids embedded with the vessel structure. Then, our method with two input-specific branches extracts complementary feature representations from points and voxels, respectively. The local attention module adaptively fuses features from the two branches at different scales to balance the contribution of different branches in learning more discriminative features. Furthermore, we propose a novel distance loss at the feature level to make the features within each segment more compact, thereby improving the certainty of segmentation between segments. Our experimental results on three public liver datasets demonstrate that our proposed method outperforms several state-of-the-art methods by large margins. Specifically, in out-of-distribution (OOD) testing on the LiTS dataset, our method exceeded the voxel-based 3D UNet by approximately 20% in Dice score and outperformed the point-based PointNet2Plus by approximately 8% in Dice score. Our code and manual annotations of the public datasets presented in this paper are available online: https://github.com/xukun-zhang/Couinaud-Segmentation.
Citations: 0
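The abstract describes a feature-level distance loss that makes features within each Couinaud segment more compact. A minimal sketch of one such compactness loss follows, pulling each point's embedding toward its segment centroid; this is an illustrative formulation under that assumption, not the paper's exact loss.

```python
# Feature-level compactness loss: mean squared distance of each point's embedding
# to the centroid of its segment, averaged over segments. Shapes are illustrative.
import torch

def segment_compactness_loss(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """features: (N, C) per-point embeddings; labels: (N,) integer segment ids."""
    loss = features.new_zeros(())
    for seg in labels.unique():
        seg_feats = features[labels == seg]              # all points of one segment
        centroid = seg_feats.mean(dim=0, keepdim=True)   # segment centroid in feature space
        loss = loss + ((seg_feats - centroid) ** 2).sum(dim=1).mean()
    return loss / labels.unique().numel()

feats = torch.randn(1000, 64, requires_grad=True)
segs = torch.randint(0, 8, (1000,))                      # 8 Couinaud segments, toy labels
print(segment_compactness_loss(feats, segs))
```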
Research on carotid artery plaque anomaly detection algorithm based on ultrasound images
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-27 | DOI: 10.1016/j.compbiomed.2024.109180
Abstract: Carotid artery plaque is a key factor in stroke and other cardiovascular diseases. Accurate detection and localization of carotid artery plaque are essential for early prevention and treatment of disease. However, current carotid artery ultrasound image anomaly detection algorithms face several challenges, such as the scarcity of carotid anomaly data and the tendency of traditional convolutional neural networks (CNNs) to overlook long-distance dependencies in image processing. To address these issues, we propose an anomaly detection algorithm for carotid artery plaques based on ultrasound images. The algorithm introduces an anomaly sample pair generation method to increase dataset diversity. Moreover, it employs an improved adaptive recursive gating pyramid pooling module to extract image features. This module significantly enhances the model's capacity for high-order spatial interactions and adaptive feature fusion, thereby greatly improving the neural network's feature extraction ability. The algorithm uses a Sigmoid layer to map each pixel's feature vector to a probability between 0 and 1, and anomalies are detected through probability threshold binarization. Experimental results show that our algorithm's AUROC reached 90.7% on a carotid artery dataset, an improvement of 2.1% over the FPI method. This research is expected to provide robust support for the early prevention and treatment of cardiovascular diseases.
Citations: 0
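The abstract's decision step is simple to sketch: a per-pixel sigmoid probability map is binarized with a threshold, and AUROC is computed from per-image scores. The probability maps below are synthetic and the feature-extraction backbone is not reproduced; taking the maximum pixel score as the image-level score is an assumption for illustration.

```python
# Threshold binarization of a per-pixel probability map and image-level AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def anomaly_mask(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize an (H, W) sigmoid probability map into an anomaly mask."""
    return (prob_map >= threshold).astype(np.uint8)

# Synthetic probability maps: "plaque" images get a bright blob of high scores.
normal_maps = rng.uniform(0.0, 0.4, size=(50, 64, 64))
plaque_maps = rng.uniform(0.0, 0.4, size=(50, 64, 64))
plaque_maps[:, 20:30, 20:30] += 0.5

scores = np.concatenate([normal_maps.max(axis=(1, 2)), plaque_maps.max(axis=(1, 2))])
labels = np.concatenate([np.zeros(50), np.ones(50)])
print("image-level AUROC:", roc_auc_score(labels, scores))
print("anomalous pixels in one map:", anomaly_mask(plaque_maps[0]).sum())
```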
Drug-induced torsadogenicity prediction model: An explainable machine learning-driven quantitative structure-toxicity relationship approach
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-26 | DOI: 10.1016/j.compbiomed.2024.109209
Abstract: Drug-induced Torsade de Pointes (TdP), a life-threatening polymorphic ventricular tachyarrhythmia, emerges due to the cardiotoxic effects of pharmaceuticals. The lack of precise mechanisms and clinical biomarkers to detect this adverse effect presents substantial challenges in drug safety assessment. In this study, we propose that analyzing the physicochemical properties of pharmaceuticals can provide valuable insights into their potential for torsadogenic cardiotoxicity. Our research centers on estimating TdP risk based on the molecular structure of drugs. We introduce a novel quantitative structure-toxicity relationship (QSTR) prediction model that leverages an in silico approach developed by adopting the 4R rule in laboratory animals. This approach eliminates the need for animal testing, saves time, and reduces cost. Our algorithm has successfully predicted the torsadogenic risks of various pharmaceutical compounds. To develop this model, we employed Support Vector Machine (SVM) and ensemble techniques, including Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost). We enhanced the model's predictive accuracy through a rigorous two-step feature selection process. Furthermore, we utilized the SHapley Additive exPlanations (SHAP) technique to explain the prediction of torsadogenic risk, particularly within the RF model. This study represents a significant step towards creating a robust QSTR model, which can serve as an early screening tool for assessing the torsadogenic potential of pharmaceutical candidates or existing drugs. By incorporating molecular structure-based insights, we aim to enhance drug safety evaluation and minimize the risks of drug-induced TdP, ultimately benefiting both patients and the pharmaceutical industry.
Citations: 0
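The explainability step named in the abstract, SHAP applied to a random forest classifier, can be sketched as below. The molecular descriptors and labels are synthetic stand-ins; the paper's curated dataset, two-step feature selection, and other learners (SVM, XGBoost, CatBoost) are not reproduced.

```python
# Random forest on synthetic physicochemical descriptors, explained with SHAP's TreeExplainer.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_compounds, n_descriptors = 300, 20

X = rng.normal(size=(n_compounds, n_descriptors))   # stand-in descriptor matrix
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, n_compounds) > 0).astype(int)  # torsadogenic or not

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Depending on the SHAP version this is a list of per-class arrays or one (N, F, C) array.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
importance = np.abs(vals).mean(axis=0)               # mean |SHAP| per descriptor, positive class
print("top descriptors by mean |SHAP|:", np.argsort(importance)[::-1][:5])
```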
Continual learning in medical image analysis: A survey
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-26 | DOI: 10.1016/j.compbiomed.2024.109206
Abstract: In the dynamic realm of practical clinical scenarios, Continual Learning (CL) has gained increasing interest in medical image analysis due to its potential to address major challenges associated with data privacy, model adaptability, memory inefficiency, prediction robustness and detection accuracy. In general, the primary challenge in adapting and advancing CL remains catastrophic forgetting. Beyond this challenge, recent years have witnessed a growing body of work that expands our comprehension and application of continual learning in the medical domain, highlighting its practical significance and intricacy. In this paper, we present an in-depth and up-to-date review of the application of CL in medical image analysis. Our discussion delves into the strategies employed to address specific tasks within the medical domain, categorizing existing CL methods into three settings: Task-Incremental Learning, Class-Incremental Learning, and Domain-Incremental Learning. These settings are further subdivided based on representative learning strategies, allowing us to assess their strengths and weaknesses in the context of various medical scenarios. By establishing a correlation between each medical challenge and the corresponding insights provided by CL, we provide a comprehensive understanding of the potential impact of these techniques. To enhance the utility of our review, we provide an overview of the commonly used benchmark medical datasets and evaluation metrics in the field. Through a comprehensive comparison, we discuss promising future directions for the application of CL in medical image analysis. A comprehensive list of studies is being continuously updated at https://github.com/xw1519/Continual-Learning-Medical-Adaptation.
Citations: 0
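Catastrophic forgetting, the central challenge named in this survey, is often mitigated with rehearsal (experience replay), one of the standard strategy families that CL taxonomies cover. The sketch below shows a reservoir-style replay buffer; the class, its parameters, and the streaming loop are illustrative and not tied to any specific method reviewed in the paper.

```python
# Reservoir-sampling replay buffer: keep a uniform sample of past examples to mix
# into later tasks' training batches.
import random
from dataclasses import dataclass, field
from typing import Any, List, Tuple

@dataclass
class ReplayBuffer:
    capacity: int
    seen: int = 0
    storage: List[Tuple[Any, Any]] = field(default_factory=list)

    def add(self, x: Any, y: Any) -> None:
        """Reservoir sampling keeps a uniform sample over everything seen so far."""
        self.seen += 1
        if len(self.storage) < self.capacity:
            self.storage.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.storage[j] = (x, y)

    def sample(self, batch_size: int) -> List[Tuple[Any, Any]]:
        """Old examples to interleave with the current task's batches."""
        return random.sample(self.storage, min(batch_size, len(self.storage)))

buffer = ReplayBuffer(capacity=200)
for i in range(1000):                      # stream of (image, label) pairs from an earlier task
    buffer.add(f"img_{i}", i % 10)
print(len(buffer.storage), buffer.sample(4))
```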
Accurate detection and instance segmentation of unstained living adherent cells in differential interference contrast images
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-26 | DOI: 10.1016/j.compbiomed.2024.109151
Abstract: Detecting and segmenting unstained living adherent cells in differential interference contrast (DIC) images is crucial in biomedical research, such as cell microinjection, cell tracking, cell activity characterization, and revealing cell phenotypic transition dynamics. We present a robust approach, starting with dataset transformation. We curated 520 pairs of DIC images, containing 12,198 HepG2 cells, with ground-truth annotations. The original dataset was randomly split into training, validation, and test sets. Rotations were applied to images in the training set, creating an interim "α set." Similar transformations formed the "β" and "γ sets" for the validation and test data. The α set trained a Mask R-CNN, while the β set produced predictions, which were subsequently filtered and categorized. A residual network (ResNet) classifier determined mask retention. The γ set underwent iterative processing, yielding the final segmentation. Our method achieved a weighted average of 0.567 in bounding-box average precision at IoU 0.75 (AP_0.75 bbox) and 0.673 in segmentation AP_0.75 (AP_0.75 segm), both outperforming major algorithms for cell detection and segmentation. Visualization also revealed that our method excels in practicality, accurately capturing nearly every cell, a marked improvement over alternatives.
Citations: 0
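The pipeline above detects instances with a Mask R-CNN and then decides which masks to retain. A minimal sketch of that detect-then-filter pattern follows; the randomly initialized weights are a placeholder for a model fine-tuned on cell data, and a plain score threshold stands in for the paper's ResNet retention classifier and iterative processing.

```python
# Mask R-CNN inference followed by a simple mask-retention filter.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# weights=None keeps this self-contained; in practice load fine-tuned cell weights.
model = maskrcnn_resnet50_fpn(weights=None).eval()

image = torch.rand(3, 512, 512)            # stand-in for a normalized DIC image tensor
with torch.no_grad():
    output = model([image])[0]             # dict with 'boxes', 'scores', 'masks'

keep = output["scores"] > 0.5              # retention decision (threshold in lieu of the ResNet)
masks = output["masks"][keep, 0] > 0.5     # (N, H, W) boolean instance masks
print(f"kept {keep.sum().item()} of {len(output['scores'])} predicted instances")
```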
A multimodal cross-transformer-based model to predict mild cognitive impairment using speech, language and vision
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-26 | DOI: 10.1016/j.compbiomed.2024.109199
Abstract: Mild Cognitive Impairment (MCI) is an early stage of memory loss or other cognitive ability loss in individuals who maintain the ability to independently perform most activities of daily living. It is considered a transitional stage between the normal cognitive stage and more severe cognitive declines like dementia or Alzheimer's. Based on reports from the National Institute of Aging (NIA), people with MCI are at a greater risk of developing dementia, so it is of great importance to detect MCI as early as possible to mitigate the progression of MCI to Alzheimer's and dementia. Recent studies have harnessed Artificial Intelligence (AI) to develop automated methods to predict and detect MCI. The majority of the existing research is based on unimodal data (e.g., only speech or prosody), but recent studies have shown that multimodality leads to more accurate prediction of MCI. However, effectively exploiting different modalities is still a big challenge due to the lack of efficient fusion methods. This study proposes a robust fusion architecture utilizing embedding-level fusion via a co-attention mechanism to leverage multimodal data for MCI prediction. This approach addresses the limitations of early and late fusion methods, which often fail to preserve inter-modal relationships. Our embedding-level fusion aims to capture complementary information across modalities, enhancing predictive accuracy. We used the I-CONECT dataset, in which a large number of semi-structured conversations via internet/webcam between participants aged 75+ years and interviewers were recorded. We introduce a multimodal speech-language-vision deep learning-based method to differentiate MCI from Normal Cognition (NC). Our proposed architecture includes co-attention blocks to fuse three different modalities at the embedding level, finding the potential interactions between speech (audio), language (transcribed speech), and vision (facial videos) within the cross-Transformer layer. Experimental results demonstrate that our fusion method achieves an average AUC of 85.3% in detecting MCI from NC, significantly outperforming unimodal (60.9%) and bimodal (76.3%) baseline models. This superior performance highlights the effectiveness of our model in capturing and utilizing the complementary information from multiple modalities, offering a more accurate and reliable approach for MCI prediction.
Citations: 0
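Embedding-level co-attention, as described above, amounts to each modality attending to the other(s) before fusion. The sketch below shows a two-stream version with standard cross-attention; the paper fuses three modalities inside a cross-Transformer, so the dimensions, the two-stream reduction, and the mean-pooling here are simplifying assumptions.

```python
# Two-modality co-attention fusion block: each stream attends to the other,
# pooled context vectors are concatenated and classified (MCI vs. NC).
import torch
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.a_to_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b_to_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)     # MCI vs. normal cognition

    def forward(self, speech: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # speech: (B, Ts, dim) audio embeddings; text: (B, Tt, dim) language embeddings
        speech_ctx, _ = self.a_to_b(query=speech, key=text, value=text)
        text_ctx, _ = self.b_to_a(query=text, key=speech, value=speech)
        fused = torch.cat([speech_ctx.mean(dim=1), text_ctx.mean(dim=1)], dim=-1)
        return self.classifier(fused)

block = CoAttentionBlock()
logits = block(torch.randn(2, 50, 128), torch.randn(2, 40, 128))
print(logits.shape)   # (2, 2)
```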
SparseMorph: A weakly-supervised lightweight sparse transformer for mono- and multi-modal deformable image registration
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-26 | DOI: 10.1016/j.compbiomed.2024.109205
Abstract:
Purpose: Deformable image registration (DIR) is crucial for improving the precision of clinical diagnosis. Recent Transformer-based DIR methods have shown promising performance by capturing long-range dependencies. Nevertheless, these methods still grapple with high computational complexity. This work aims to enhance the performance of DIR in both computational efficiency and registration accuracy.
Methods: We proposed a weakly-supervised lightweight Transformer model, named SparseMorph. To reduce computational complexity without compromising the ability to capture representative features, we designed a sparse multi-head self-attention (SMHA) mechanism. To accumulate representative features while preserving high computational efficiency, we constructed a multi-branch multi-layer perception (MMLP) module. Additionally, we developed an anatomically-constrained weakly-supervised strategy to guide the alignment of regions-of-interest in mono- and multi-modal images.
Results: We assessed SparseMorph in terms of registration accuracy and computational complexity. On the mono-modal brain datasets IXI and OASIS, SparseMorph outperforms the state-of-the-art method TransMatch with improvements of 3.2% and 2.9% in DSC scores for MRI-to-CT registration tasks, respectively. Moreover, on the multi-modal cardiac dataset MMWHS, SparseMorph shows DSC score improvements of 9.7% and 11.4% compared to TransMatch in MRI-to-CT and CT-to-MRI registration tasks, respectively. Notably, SparseMorph attains these performance advantages while utilizing 33.33% of the parameters of TransMatch.
Conclusions: The proposed weakly-supervised deformable image registration model, SparseMorph, demonstrates efficiency in both mono- and multi-modal registration tasks, exhibits superior performance compared to state-of-the-art algorithms, and establishes an effective DIR method for clinical applications.
Citations: 0
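The abstract names a sparse multi-head self-attention (SMHA) mechanism but does not specify how sparsity is imposed. The sketch below shows one generic way to sparsify attention, keeping only the top-k scores per query before the softmax; it illustrates the idea of sparse attention under that assumption and is not a reconstruction of SparseMorph's SMHA.

```python
# Top-k sparse attention: each query mixes information from only its k highest-scoring keys.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k: int = 8):
    """q, k, v: (B, heads, N, d). Attend to only the top_k keys per query."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)      # (B, H, N, N)
    kth = scores.topk(top_k, dim=-1).values[..., -1:]             # k-th largest score per query
    scores = scores.masked_fill(scores < kth, float("-inf"))      # drop everything below it
    return F.softmax(scores, dim=-1) @ v

B, H, N, d = 1, 4, 256, 32
q, k, v = (torch.randn(B, H, N, d) for _ in range(3))
out = topk_sparse_attention(q, k, v)
print(out.shape)   # (1, 4, 256, 32), each query attends to only 8 tokens
```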
Effects of spatially dense adrenergic stimulation to rotor behaviour in simulated atrial sheets
IF 7.0 | Q2 | Medicine
Computers in Biology and Medicine | Pub Date: 2024-09-26 | DOI: 10.1016/j.compbiomed.2024.109195
Abstract: Sympathetic hyperactivity via spatially dense adrenergic stimulation may create pro-arrhythmic substrates even without structural remodelling. However, the effect of sympathetic hyperactivity on arrhythmic activity, such as rotors, is unknown. Using simulations, we examined the effects of gradually increasing the spatial density of adrenergic stimulation (AS) in atrial sheets on rotors. We compared their characteristics against rotors hosted in atrial sheets with increasing spatial density of minimally conductive (MC) elements to simulate structural remodelling due to injury or disease. We generated rotors using an S1-S2 stimulation protocol. Then, we created phase maps to identify phase singularities and map their trajectory over time. We measured each rotor's duration (s), angular speed (rad/s), and spatiotemporal organization. We demonstrated that atrial sheets with increased AS spatial densities could maintain rotors longer than those with MC elements (2.6 ± 0.1 s vs. 1.5 ± 0.2 s, p < 0.001). Moreover, rotors had higher angular speed (70 ± 7 rad/s vs. 60 ± 15 rad/s, p < 0.05) and better spatiotemporal organization (0.56 ± 0.05 vs. 0.58 ± 0.18, p < 0.05) in atrial sheets with less than 25% AS elements compared to MC elements. Our findings may help elucidate electrophysiological alterations in atrial substrates due to sympathetic hyperactivity, particularly among individuals with autonomic derangements caused by chronic distress.
Citations: 0
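The phase-mapping step described above (phase maps followed by phase-singularity identification) is commonly done with a Hilbert-transform phase and a winding-number test. The sketch below applies that standard recipe to a synthetic rotating spiral rather than an atrial electrophysiology model; the signal construction and grid sizes are illustrative assumptions.

```python
# Instantaneous phase via the Hilbert transform, then phase singularities via the
# winding number around each 2x2 plaquette of the phase map.
import numpy as np
from scipy.signal import hilbert

ny, nx, nt = 40, 40, 200
t = np.arange(nt)
y, x = np.mgrid[0:ny, 0:nx]
theta = np.arctan2(y - ny / 2, x - nx / 2)
v = np.cos(2 * np.pi * t[:, None, None] / 50 - theta)     # synthetic rotating spiral "voltage"

phase = np.angle(hilbert(v - v.mean(axis=0), axis=0))      # (nt, ny, nx) phase maps

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

def singularity_map(ph):
    """Winding number around each 2x2 plaquette; a value of +/-1 marks a phase singularity."""
    d1 = wrap(ph[:-1, 1:] - ph[:-1, :-1])
    d2 = wrap(ph[1:, 1:] - ph[:-1, 1:])
    d3 = wrap(ph[1:, :-1] - ph[1:, 1:])
    d4 = wrap(ph[:-1, :-1] - ph[1:, :-1])
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi))

ps = singularity_map(phase[100])
print("phase singularities found at:", np.argwhere(ps != 0))
```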