Computer methods and programs in biomedicine: latest articles

GAN-based synthetic FDG PET images from T1 brain MRI can serve to improve performance of deep unsupervised anomaly detection models
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-31 DOI: 10.1016/j.cmpb.2025.108727
Daria Zotova , Nicolas Pinon , Robin Trombetta , Romain Bouet , Julien Jung , Carole Lartizien
{"title":"GAN-based synthetic FDG PET images from T1 brain MRI can serve to improve performance of deep unsupervised anomaly detection models","authors":"Daria Zotova ,&nbsp;Nicolas Pinon ,&nbsp;Robin Trombetta ,&nbsp;Romain Bouet ,&nbsp;Julien Jung ,&nbsp;Carole Lartizien","doi":"10.1016/j.cmpb.2025.108727","DOIUrl":"10.1016/j.cmpb.2025.108727","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>Research in the cross-modal medical image translation domain has been very productive over the past few years in tackling the scarce availability of large curated multi-modality datasets with the promising performance of GAN-based architectures. However, only a few of these studies assessed task-based related performance of these synthetic data, especially for the training of deep models.</div></div><div><h3>Methods:</h3><div>We design and compare different GAN-based frameworks for generating synthetic brain[18F]fluorodeoxyglucose (FDG) PET images from T1 weighted MRI data. We first perform standard qualitative and quantitative visual quality evaluation. Then, we explore further impact of using these fake PET data in the training of a deep unsupervised anomaly detection (UAD) model designed to detect subtle epilepsy lesions in T1 MRI and FDG PET images. We introduce novel diagnostic task-oriented quality metrics of the synthetic FDG PET data tailored to our unsupervised detection task, then use these fake data to train a use case UAD model combining a deep representation learning based on siamese autoencoders with a OC-SVM density support estimation model. This model is trained on normal subjects only and allows the detection of any variation from the pattern of the normal population. We compare the detection performance of models trained on 35 paired real MR T1 of normal subjects paired either on 35 true PET images or on 35 synthetic PET images generated from the best performing generative models. Performance analysis is conducted on 17 exams of epilepsy patients undergoing surgery.</div></div><div><h3>Results:</h3><div>The best performing GAN-based models allow generating realistic fake PET images of control subject with SSIM and PSNR values around 0.9 and 23.8, respectively and <em>in distribution</em> (ID) with regard to the true control dataset. The best UAD model trained on these synthetic normative PET data allows reaching 74% sensitivity.</div></div><div><h3>Conclusion:</h3><div>Our results confirm that GAN-based models are the best suited for MR T1 to FDG PET translation, outperforming transformer or diffusion models. We also demonstrate the diagnostic value of these synthetic data for the training of UAD models and evaluation on clinical exams of epilepsy patients. Our code and the normative image dataset are available.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108727"},"PeriodicalIF":4.9,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143768583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards robust multimodal ultrasound classification for liver tumor diagnosis: A generative approach to modality missingness
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-30 DOI: 10.1016/j.cmpb.2025.108759
Jiali Guo , Rui Bu , Wanting Shen , Tao Feng
{"title":"Towards robust multimodal ultrasound classification for liver tumor diagnosis: A generative approach to modality missingness","authors":"Jiali Guo ,&nbsp;Rui Bu ,&nbsp;Wanting Shen ,&nbsp;Tao Feng","doi":"10.1016/j.cmpb.2025.108759","DOIUrl":"10.1016/j.cmpb.2025.108759","url":null,"abstract":"&lt;div&gt;&lt;h3&gt;Background and Objective&lt;/h3&gt;&lt;div&gt;In medical image analysis, combining multiple imaging modalities enhances diagnostic accuracy by providing complementary information. However, missing modalities are common in clinical settings, limiting the effectiveness of multimodal models. This study addresses the challenge of missing modalities in liver tumor diagnosis by proposing a generative model-based method for cross-modality reconstruction and classification. The dataset for this study comprises 359 case data from a hospital, with each case including three modality data: B-mode ultrasound images, Color Doppler Flow Imaging (CDFI), and clinical data. Only cases with one missing image modality are considered, excluding those with missing clinical data.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Methods&lt;/h3&gt;&lt;div&gt;We developed a multimodal classification framework specifically for liver tumor diagnosis, employing various feature extraction networks to explore the impact of different modality combinations on classification performance when only available modalities are used. DenseNet extracts CDFI features, while EfficientNet is employed for B-mode ultrasound image feature extraction. These features are then flattened and concatenated with clinical data using feature-level fusion to obtain a full-modality model. Modality weight parameters are introduced to emphasize the importance of different modalities, yielding Model_D, which serves as the classification model after subsequent image modality supplementation. In cases of missing modalities, generative models, including U-GAT-IT and MSA-GAN, are utilized for cross-modal reconstruction of missing B-mode ultrasound or CDFI images (e.g., reconstructing CDFI from B-mode ultrasound when CDFI is missing). After evaluating the usability of the generated images, they are input into Model_D as supplementary images for the missing modalities.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Results&lt;/h3&gt;&lt;div&gt;Model performance and modality supplementation effects were evaluated through accuracy, precision, recall, F1 score, and AUC metrics. The results demonstrate that the proposed Model_D, which introduces modality weights, achieves an accuracy of 88.57 %, precision of 87.97 %, recall of 82.32 %, F1 score of 0.87, and AUC of 0.95 in the full-modality classification task for liver tumors. Moreover, images reconstructed using U-GAT-IT and MSA-GAN across modalities exhibit PSNR &gt; 20 and multi-scale structural similarity &gt; 0.7, indicating moderate image quality with well-preserved overall structures, suitable for input into the model as supplementary images in cases of missing modalities. The supplementary CDFI or B-mode ultrasound images achieve 87.10 % and 86.43 % accuracy, respectively, with AUC values of 0.92 and 0.95. 
This proves that even in the absence of certain modalities, the generative models can effectively reconstruct missing images, maintaining high classification performance comparable to that in complete modality scenarios.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108759"},"PeriodicalIF":4.9,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143784073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
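A minimal sketch of the feature-level fusion with modality weights described for Model_D is shown below; the feature dimensions, the softmax-normalised scalar weights, and the classifier head are illustrative assumptions, not the published architecture.

```python
# Hedged sketch: feature-level fusion of B-mode, CDFI and clinical features with
# learnable modality weights (illustrative; not the authors' exact Model_D).
import torch
import torch.nn as nn

class WeightedFusionClassifier(nn.Module):
    def __init__(self, dim_bmode=1280, dim_cdfi=1024, dim_clinical=16, n_classes=2):
        super().__init__()
        # One learnable scalar weight per modality, softmax-normalised at fusion time.
        self.modality_logits = nn.Parameter(torch.zeros(3))
        self.head = nn.Sequential(
            nn.Linear(dim_bmode + dim_cdfi + dim_clinical, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, f_bmode, f_cdfi, f_clinical):
        w = torch.softmax(self.modality_logits, dim=0)
        fused = torch.cat([w[0] * f_bmode, w[1] * f_cdfi, w[2] * f_clinical], dim=1)
        return self.head(fused)

model = WeightedFusionClassifier()
logits = model(torch.randn(4, 1280), torch.randn(4, 1024), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 2])
```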
Citations: 0
Predicting protein-protein interaction with interpretable bilinear attention network
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-30 DOI: 10.1016/j.cmpb.2025.108756
Yong Han , Shao-Wu Zhang , Ming-Hui Shi , Qing-Qing Zhang , Yi Li , Xiaodong Cui
{"title":"Predicting protein-protein interaction with interpretable bilinear attention network","authors":"Yong Han ,&nbsp;Shao-Wu Zhang ,&nbsp;Ming-Hui Shi ,&nbsp;Qing-Qing Zhang ,&nbsp;Yi Li ,&nbsp;Xiaodong Cui","doi":"10.1016/j.cmpb.2025.108756","DOIUrl":"10.1016/j.cmpb.2025.108756","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Protein-protein interactions (PPIs) play the key roles in myriad biological processes, helping to understand the protein function and disease pathology. Identification of PPIs and their interaction types through wet experimental methods are costly and time-consuming. Therefore, some computational methods (e.g., sequence-based deep learning method) have been proposed to predict PPIs. However, these methods predominantly focus on protein sequence information, neglecting the protein structure information, while the protein structure is closely related to its function. In addition, current PPI prediction methods that introduce the protein structure information use independent encoders to learn the sequence and structure representations from protein sequences and structures, respectively, without explicitly learn the important local interaction representation of two proteins, making the prediction results hard to interpret.</div></div><div><h3>Methods</h3><div>Considering that current protein structure prediction methods (e.g., AlphaFold2) can accurately predict protein 3D structures and also provide a large number of protein 3D structures, here we present a novel end-to-end framework (called PPI-BAN) to predict PPIs and their interaction types by integrating protein sequence information and 3D structure information. PPI-BAN uses one-dimensional convolution operation (Conv1D) to extract the protein sequence features, employes GeomEtry-Aware Relational Graph Neural Network (GearNet) to learn protein 3D structure features, and adopts a deep bilinear attention network (BAN) to learn the joint features between one protein sequence and its 3D structure. The sequence features, structure features and joint features are concatenated to fed into a fully connected network for predicting PPIs and their interaction types.</div></div><div><h3>Results</h3><div>Experimental results show that PPI-BAN achieves the best overall performance against other state-of-the-art methods.</div></div><div><h3>Conclusions</h3><div>PPI-BAN can effectively predict PPIs and their interaction types, and identify the significant interaction sites by computing attention weight maps and mapping them to specific amino acid residues.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108756"},"PeriodicalIF":4.9,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143759526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Combating Medical Label Noise through more precise partition-correction and progressive hard-enhanced learning
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-29 DOI: 10.1016/j.cmpb.2025.108734
Sanyan Zhang , Surong Chu , Yan Qiang , Juanjuan Zhao , Yan Wang , Xiao Wei
{"title":"Combating Medical Label Noise through more precise partition-correction and progressive hard-enhanced learning","authors":"Sanyan Zhang ,&nbsp;Surong Chu ,&nbsp;Yan Qiang ,&nbsp;Juanjuan Zhao ,&nbsp;Yan Wang ,&nbsp;Xiao Wei","doi":"10.1016/j.cmpb.2025.108734","DOIUrl":"10.1016/j.cmpb.2025.108734","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>Computer-aided diagnosis systems based on deep neural networks heavily rely on datasets with high-quality labels. However, manual annotation for lesion diagnosis relies on image features, often requiring professional experience and complex image analysis process. This inevitably introduces noisy labels, which can misguide the training of classification models. Our goal is to design an effective method to address the challenges posed by label noise in medical images.</div></div><div><h3>Methods:</h3><div>we propose a novel noise-tolerant medical image classification framework consisting of two phases: fore-training correction and progressive hard-sample enhanced learning. In the first phase, we design a dual-branch sample partition detection scheme that effectively classifies each instance into one of three subsets: clean, hard, or noisy. Simultaneously, we propose a hard-sample label refinement strategy based on class prototypes with confidence-perception weighting and an effective joint correction method for noisy samples, enabling the acquisition of higher-quality training data. In the second phase, we design a progressive hard-sample reinforcement learning method to enhance the model’s ability to learn discriminative feature representations. This approach accounts for sample difficulty and mitigates the effects of label noise in medical datasets.</div></div><div><h3>Results:</h3><div>Our framework achieves an accuracy of 82.39% on the pneumoconiosis dataset collected by our laboratory. On a five-class skin disease dataset with six different levels of label noise (0, 0.05, 0.1, 0.2, 0.3, and 0.4), the average accuracy over the last ten epochs reaches 88.51%, 86.64%, 85.02%, 83.01%, 81.95%, 77.89%, respectively; For binary polyp classification under noise rates of 0.2, 0.3, and 0.4, the average accuracy over the last ten epochs is 97.90%, 93.77%, 89.33%, respectively.</div></div><div><h3>Conclusions:</h3><div>The effectiveness of our proposed framework is demonstrated through its performance on three challenging datasets with both real and synthetic noise. Experimental results further demonstrate the robustness of our method across varying noise rates.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108734"},"PeriodicalIF":4.9,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143740117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Brain age prediction based on brain region volume modeling under broad network field of view
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-29 DOI: 10.1016/j.cmpb.2025.108739
Jianjie Zheng , Junkai Wang , Zeyin Zhang , Kuncheng Li , Huimin Zhao , Peipeng Liang
{"title":"Brain age prediction based on brain region volume modeling under broad network field of view","authors":"Jianjie Zheng ,&nbsp;Junkai Wang ,&nbsp;Zeyin Zhang ,&nbsp;Kuncheng Li ,&nbsp;Huimin Zhao ,&nbsp;Peipeng Liang","doi":"10.1016/j.cmpb.2025.108739","DOIUrl":"10.1016/j.cmpb.2025.108739","url":null,"abstract":"<div><h3>Background and objective</h3><div>Brain region volume from Structural Magnetic Resonance Imaging (sMRI) can directly reflect abnormal states in brain aging. While promising for clinical brain health assessment, existing volume-based brain age prediction methods fail to explore both linear and nonlinear relationships, resulting in weak representation and suboptimal estimates.</div></div><div><h3>Methods</h3><div>This paper proposes a brain age prediction method, RFBLSO, based on Random Forest (RF), Broad Learning System (BLS), and Leave-One-Out Cross Validation (LOO). Firstly, RF is used to eliminate redundant brain regions with low correlation to the target value. The objective function is constructed by integrating feature nodes, enhancement nodes, and optimal regularization parameters. Subsequently, the pseudo-inverse method is employed to solve for the output coefficients, which facilitates a more accurate representation of the linear and nonlinear relationships between volume features and brain age.</div></div><div><h3>Results</h3><div>Across various datasets, RFBLSO demonstrates the capability to formulate brain age prediction models, achieving a Mean Absolute Error (MAE) of 4.60 years within the Healthy Group and 4.98 years within the Chinese2020 dataset. In the Clinical Group, RFBLSO achieves measurement and effective differentiation among Healthy Controls (HC), Mild Cognitive Impairment (MCI), and Alzheimer's disease (AD) (MAE for HC, MCI, and AD: 4.46 years, 8.77 years, 13.67 years; the effect size η2 of the analysis of variance for AD/MCI vs. HC is 0.23; the effect sizes of post-hoc tests are Cohen's <em>d</em> = 0.74 (AD vs. MCI), 1.50 (AD vs. HC), 0.77 (MCI vs. HC)). Compared to other linear or nonlinear brain age prediction methods, RFBLSO offers more accurate measurements and effectively distinguishes between Clinical Groups. This is because RFBLSO can simultaneously explore both linear and nonlinear relationships between brain region volume and brain age.</div></div><div><h3>Conclusion</h3><div>The proposed RFBLSO effectively represents both linear and nonlinear relationships between brain region volume and brain age, allowing for more accurate individual brain age estimation. This provides a feasible method for predicting the risk of neurodegenerative diseases.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108739"},"PeriodicalIF":4.9,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143759523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Relationship Between the Elastic Modulus of the Novel Pedicle Screw-Plate System and Biomechanical Properties Under Osteoporotic Condition: A Power-Law Regression Analysis Based on Parametric Finite Element Simulations
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-29 DOI: 10.1016/j.cmpb.2025.108760
Kaibin Wang , Chongyi Wang , Haipeng Si , Yanwei Zhang , Shaowei Sang , Runtong Zhang , Wencan Zhang , Junfei Chen , Chen Liu , Kunpeng Li , Bingtao Hu , Xiangyu Lin , Yunze Feng , Qingyang Fu , Zhihao Kang , Mingyu Xu , Dingxin Zhang , Wanlong Xu , Le Li
{"title":"Relationship Between the Elastic Modulus of the Novel Pedicle Screw-Plate System and Biomechanical Properties Under Osteoporotic Condition: A Power-Law Regression Analysis Based on Parametric Finite Element Simulations","authors":"Kaibin Wang ,&nbsp;Chongyi Wang ,&nbsp;Haipeng Si ,&nbsp;Yanwei Zhang ,&nbsp;Shaowei Sang ,&nbsp;Runtong Zhang ,&nbsp;Wencan Zhang ,&nbsp;Junfei Chen ,&nbsp;Chen Liu ,&nbsp;Kunpeng Li ,&nbsp;Bingtao Hu ,&nbsp;Xiangyu Lin ,&nbsp;Yunze Feng ,&nbsp;Qingyang Fu ,&nbsp;Zhihao Kang ,&nbsp;Mingyu Xu ,&nbsp;Dingxin Zhang ,&nbsp;Wanlong Xu ,&nbsp;Le Li","doi":"10.1016/j.cmpb.2025.108760","DOIUrl":"10.1016/j.cmpb.2025.108760","url":null,"abstract":"<div><h3>Background and objective</h3><div>The novel pedicle screw-plate system (NPSPS) is a new internal fixation method for the thoracic spine that we proposed, which has demonstrated effectiveness through clinical practice and biomechanical testing. Nevertheless, the optimal elastic modulus of NPSPS (NPSPS-E) remains debated, particularly for osteoporosis patients. We propose a more efficient method to predict the biomechanical effects of NPSPS across varying elastic moduli in osteoporosis using parametric finite element (FE) analysis, establishing the regression relationship between NPSPS-E and biomechanical properties.</div></div><div><h3>Methods</h3><div>An FE surgical model of NPSPS under osteoporotic conditions was developed. The NPSPS-E was linearly varied from 3.6 GPa (polyether ether ketone) to 110 GPa (titanium alloy). Using power-law regression analysis, a functional equation was established to correlate NPSPS-E with biomechanical properties under osteoporotic condition.</div></div><div><h3>Results</h3><div>Power-law equations and regression models were successfully established between NPSPS-E and biomechanical prediction indices under osteoporotic condition (<em>P</em>&lt;0.0001). As NPSPS-E increased, the range of motion (ROM) of the T8-T10 spinal segments decreased from 0.51°-4.06° to 0.24°-1.45°. The mean von Mises stress in the T8-T10 vertebrae declined from 1.36 MPa-2.03 MPa to 1.15 MPa-1.79 MPa. Concurrently, the stress shielding ratios and the total stress ratios of the NPSPS increased from 3.66%-48.07% and 13.96%-26.96% to 10.70%-56.20% and 52.62%-64.40%, respectively.</div></div><div><h3>Conclusion</h3><div>The functional equations derived from these models serve as a predictive tool to directly estimate the biomechanical effects of NPSPS across a range of elastic modulus under osteoporotic conditions, thereby facilitating the design and optimization of NPSPS materials.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108760"},"PeriodicalIF":4.9,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143759525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leveraging cluster analysis to compare click and chirp-evoked auditory brainstem responses
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-28 DOI: 10.1016/j.cmpb.2025.108732
Hasitha Wimalarathna , Patricia LeeAnn Youngblood , Caroline Parker , Charles G. Marx , Sangamanatha Ankmnal-Veeranna
{"title":"Leveraging cluster analysis to compare click and chirp-evoked auditory brainstem responses","authors":"Hasitha Wimalarathna ,&nbsp;Patricia LeeAnn Youngblood ,&nbsp;Caroline Parker ,&nbsp;Charles G. Marx ,&nbsp;Sangamanatha Ankmnal-Veeranna","doi":"10.1016/j.cmpb.2025.108732","DOIUrl":"10.1016/j.cmpb.2025.108732","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>The Auditory Brainstem Response (ABR) can be recorded by presenting short-duration click and chirp stimuli. The ABR test is commonly used for threshold estimation and to examine auditory brainstem integrity. The neural integrity is evaluated at suprathreshold levels. This study aimed to compare click and CE-Chirp®-evoked ABRs recorded at suprathreshold levels in normal-hearing infants and adults, using cluster analysis to identify patterns and distinctions between responses to the two stimuli.</div></div><div><h3>Methods:</h3><div>Click-evoked and CE-Chirp® evoked ABRs were recorded from infants and adults with normal hearing at suprathreshold levels. Cluster analysis techniques examined and categorized response patterns for each stimulus type, comparing across time, frequency and time–frequency domains.</div></div><div><h3>Results:</h3><div>Our findings indicate a noticeable homogeneity in the click-evoked ABRs in both groups in the time-domain, suggesting a consistent response to click stimuli. In contrast, CE-Chirp®-evoked ABRs exhibited variability in both groups, which may be attributable to the complex nature of the CE-Chirp® stimulus and its interaction with the auditory system.</div></div><div><h3>Conclusion:</h3><div>The implications of these findings are significant for audiologists. It is crucial to take into account the inherent variability of these responses when interpreting chirp-evoked ABRs, as they may reflect nuanced aspects of auditory system function that are not as prominent in the more uniform click-evoked ABRs. The insights from this study enhance our understanding of auditory brainstem processing and have the potential to refine the clinical protocols for ABR testing.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108732"},"PeriodicalIF":4.9,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143768581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient annotation bootstrapping for cell identification in follicular lymphoma
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-27 DOI: 10.1016/j.cmpb.2025.108728
Adam Krawczyk , Aleksandra Osowska-Kurczab , Sławomir Pakuło , Wojciech Kotłowski , Zaneta Swiderska-Chadaj
{"title":"Efficient annotation bootstrapping for cell identification in follicular lymphoma","authors":"Adam Krawczyk ,&nbsp;Aleksandra Osowska-Kurczab ,&nbsp;Sławomir Pakuło ,&nbsp;Wojciech Kotłowski ,&nbsp;Zaneta Swiderska-Chadaj","doi":"10.1016/j.cmpb.2025.108728","DOIUrl":"10.1016/j.cmpb.2025.108728","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>In the medical field of digital pathology, many tasks rely on visual assessments of tissue patterns or cells, presenting an opportunity to apply computer vision methods. However, acquiring a substantial number of annotations for developing deep learning algorithms remains a bottleneck. The annotation process is inherently biased due to various constraints, including labor shortages, high costs, time inefficiencies, and a strongly imbalanced distribution of labels. This study explores available solutions for reducing the costs of annotation bootstrapping in the challenging task of follicular lymphoma diagnosis.</div></div><div><h3>Methods:</h3><div>We compare three distinct approaches to annotation bootstrapping: extensive manual annotations, active learning, and weak supervision. We propose a hybrid architecture for centroblast and centrocyte detection from whole slide images, based on a custom cell encoder and contextual encoding derived from foundation models for digital pathology. We collected a dataset of 41 whole slide images scanned with a 20x objective lens and resolution <span><math><mrow><mn>0</mn><mo>.</mo><mn>24</mn><mspace></mspace><mi>μ</mi></mrow></math></span>m/pixel, from which 12,704 cell annotations were gathered.</div></div><div><h3>Results:</h3><div>Applying our proposed active learning workflow led to an almost twofold increase in the number of samples within the minority class. The best bootstrapping method improved the overall performance of the detection algorithm by 18 percentage points, yielding a macro-averaged F1-score, precision, and recall of 63%.</div></div><div><h3>Conclusions:</h3><div>The results of this study may find applications in other digital pathology problems, particularly for tasks involving a lack of homogeneous cell clusters within whole slide images.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108728"},"PeriodicalIF":4.9,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143784056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Breaking through scattering: The H-Net CNN model for image retrieval
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-24 DOI: 10.1016/j.cmpb.2025.108723
Roger Chiu-Coutino , Miguel S. Soriano-Garcia , Carlos Israel Medel-Ruiz , S.M. Afanador-Delgado , Edgar Villafaña-Rauda , Roger Chiu
{"title":"Breaking through scattering: The H-Net CNN model for image retrieval","authors":"Roger Chiu-Coutino ,&nbsp;Miguel S. Soriano-Garcia ,&nbsp;Carlos Israel Medel-Ruiz ,&nbsp;S.M. Afanador-Delgado ,&nbsp;Edgar Villafaña-Rauda ,&nbsp;Roger Chiu","doi":"10.1016/j.cmpb.2025.108723","DOIUrl":"10.1016/j.cmpb.2025.108723","url":null,"abstract":"<div><h3>Background:</h3><div>In scattering media, traditional optical imaging techniques often find it significantly challenging to accurately reconstruct images owing to rapid light scattering. Thus, to address this problem, we propose a convolutional neural network architecture called H-Net, which is specifically designed to recover structural information from images distorted by scattering media.</div></div><div><h3>Method:</h3><div>Our approach involves the use of dilated convolutions to capture local and global features of the distorted images, allowing for the effective reconstruction of the underlying structures. First, we developed a diffuse image dataset by projecting handwritten numbers through diffusers with different thicknesses, capturing the resulting distorted images. Second, we generated a synthetic speckle images dataset, composed of simulated speckle patterns. These datasets were designed to train the model to recover structures within scattering media. To evaluate the model’s performance, we calculated the Structural Similarity Measure Index between the model’s predictions and the original images on unseen data.</div></div><div><h3>Result:</h3><div>This proposed architecture achieves reconstructions with an average structural similarity index measure of 0.8 while maintaining low computational costs.</div></div><div><h3>Conclusion:</h3><div>The results of this study indicate that H-Net offers an alternative to more complex and computationally expensive models, providing efficient and reliable image reconstruction in scattering media.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108723"},"PeriodicalIF":4.9,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143714963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fusion of multi-scale feature extraction and adaptive multi-channel graph neural network for 12-lead ECG classification
IF 4.9 · CAS Tier 2 · Medicine
Computer methods and programs in biomedicine Pub Date : 2025-03-24 DOI: 10.1016/j.cmpb.2025.108725
Teng Chen , Yumei Ma , Zhenkuan Pan , Weining Wang , Jinpeng Yu
{"title":"Fusion of multi-scale feature extraction and adaptive multi-channel graph neural network for 12-lead ECG classification","authors":"Teng Chen ,&nbsp;Yumei Ma ,&nbsp;Zhenkuan Pan ,&nbsp;Weining Wang ,&nbsp;Jinpeng Yu","doi":"10.1016/j.cmpb.2025.108725","DOIUrl":"10.1016/j.cmpb.2025.108725","url":null,"abstract":"<div><h3>Background and objective:</h3><div>The 12-lead electrocardiography (ECG) is a widely used diagnostic method in clinical practice for cardiovascular diseases. The potential correlation between interlead signals is an important reference for clinical diagnosis but is often overlooked by most deep learning methods. Although graph neural networks can capture the associations between leads through edge topology, the complex correlations inherent in 12-lead ECG may involve edge topology, node features, or their combination.</div></div><div><h3>Methods:</h3><div>In this study, we propose a multi-scale adaptive graph fusion network (MSAGFN) model, which fuses multi-scale feature extraction and adaptive multi-channel graph neural network (AMGNN) for 12-lead ECG classification. The proposed MSAGFN model first extracts multi-scale features individually from 12 leads and then utilizes these features as nodes to construct feature graphs and topology graphs. To efficiently capture the most correlated information from the feature graphs and topology graphs, AMGNN iteratively performs a series of graph operations to learn the final graph-level representations for prediction. Moreover, we incorporate consistency and disparity constraints into our model to further refine the learned features.</div></div><div><h3>Results:</h3><div>Our model was validated on the PTB-XL dataset, achieving an area under the receiver operating characteristic curve score of 0.937, mean accuracy of 0.894, and maximum F1 score of 0.815. These results surpass the corresponding metrics of state-of-the-art methods. Additionally, we conducted ablation studies to further demonstrate the effectiveness of our model.</div></div><div><h3>Conclusions:</h3><div>Our study demonstrates that, in 12-lead ECG classification, by constructing topology graphs based on physiological relationships and feature graphs based on lead feature relationships, and effectively integrating them, we can fully explore and utilize the complementary characteristics of the two graph structures. By combining these structures, we construct a comprehensive data view, significantly enhancing the feature representation and classification accuracy.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"265 ","pages":"Article 108725"},"PeriodicalIF":4.9,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143759524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0