Latest Articles: Biomedical Signal Processing and Control

PCA-F-SHCNNet: Principal Component Analysis-Fused-Shepard Convolutional Neural Networks for lung cancer detection and severity level classification
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-28 DOI: 10.1016/j.bspc.2025.107843
SK Altaf Hussain Basha , Pravin R. Kshirsagar , P Srinivasa Rao , Tan Kuan Tak , Dr. B. Sivaneasan
Abstract: Lung cancer is one of the leading causes of cancer-related deaths worldwide, so early detection is essential to prevent progression to serious stages and to enable better treatment plans. Although chest X-rays are commonly used for lung cancer detection, prior studies have found them insufficiently sensitive to early-stage cancers. To improve detection and classify lung cancer severity, this research develops the Principal Component Analysis-Fused-Shepard Convolutional Neural Networks (PCA-F-ShCNNet) model, an amalgamation of the Principal Component Analysis Network (PCANet) and Shepard Convolutional Neural Networks (ShCNN). First, the input Computed Tomography (CT) image is pre-processed with Adaptive Wiener Filtering (AWF) and segmented with U-Net. Lung nodules are then identified with a grid-based scheme, followed by feature extraction. Finally, PCA-F-ShCNNet, with modified layers, performs lung cancer detection, and the same network classifies the severity level. The developed PCA-F-ShCNNet achieved superior accuracy, F-measure, and precision of 91.566%, 90.490%, and 92.598%, respectively, compared with existing approaches such as the Convolutional Neural Network-based Ebola optimization search algorithm (CNN-EOSA), Wavelet Partial Hadamard Transform-based optimal Support Vector Machine (WPHT-OSVM), Cuckoo Search Optimization with CNN and Local Binary Patterns (CSO + CNN + LBP), multi-round transfer learning with a modified Generative Adversarial Network (MTL-MGAN), Improved Deep Neural Network (IDNN), and Grey Wolf Optimization with a Recurrent Neural Network (GWO + RNN).
Citations: 0
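The AWF pre-processing step above is a pixel-wise adaptive Wiener filter. A minimal numpy sketch of that classic filter follows; the window size and noise estimate here are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def adaptive_wiener(img, win=3, noise_var=None):
    """Pixel-wise adaptive Wiener filter:
    out = mu + max(var - nv, 0) / max(var, eps) * (img - mu),
    where mu, var are local window statistics and nv the noise variance."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    w = sliding_window_view(p, (win, win))       # (H, W, win, win)
    mu = w.mean(axis=(-1, -2))                   # local mean
    var = (w ** 2).mean(axis=(-1, -2)) - mu ** 2 # local variance
    if noise_var is None:
        noise_var = var.mean()   # common heuristic: mean local variance
    gain = np.maximum(var - noise_var, 0) / np.maximum(var, 1e-12)
    return mu + gain * (img - mu)
```

Flat regions (local variance below the noise estimate) are replaced by their local mean, while high-variance edges pass through nearly unchanged.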
Supervised autoregressive eXogenous Networks with Fractional Grünwald–Letnikov finite differences: Tumor Evolution and Immune Responses under Therapeutic Influence fractals model
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-28 DOI: 10.1016/j.bspc.2025.107871
Hassan Raza , Muhammad Junaid Ali Asif Raja , Rikza Mubeen , Zaheer Masood , Muhammad Asif Zahoor Raja
Abstract: Modeling malignant disease and immune responses under therapeutic influence with fractional calculus and recurrent time-delay neural networks is an innovative approach that combines mathematical modeling with machine learning to capture the inherent complexity of tumor behavior and to forecast accurate therapeutic dosing timelines. The fractional aspect captures the memory effect of multifaceted tumor-cell growth, while artificial intelligence predicts treatment methodology such as drug dosing, helping doctors develop more effective and targeted treatments. This study develops a reliable and precise artificial-intelligence methodology that uses insights derived from fractional calculus to predict the tumor immune response to treatment, including optimal timing and drug-dosing strategies. A Grünwald–Letnikov (GL) fractional solver generates the synthetic dataset for training, validation, and testing of the designed neural-network methodology. To establish the genuineness and performance of the framework, a rigorous error analysis across different cases was performed. Accuracy and performance are further assessed in terms of mean square error, meticulously optimized through iterative learning, along with regression metrics, cross-correlation, autocorrelation, and histogram analysis.
Citations: 0
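The GL solver mentioned above rests on the standard Grünwald–Letnikov finite-difference approximation, D^α f(t) ≈ h^(−α) Σ_k c_k f(t − kh) with c_k = (−1)^k · binom(α, k). A minimal numpy sketch (step size and grid are illustrative):

```python
import numpy as np

def gl_fractional_diff(f, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha on a uniform
    grid with step h, using the coefficient recurrence
    c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    n = len(f)
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for i in range(n):
        # Convolve the coefficient tail with past samples f[i], f[i-1], ...
        out[i] = np.dot(c[: i + 1], f[i::-1]) / h ** alpha
    return out
```

As sanity checks, alpha = 1 reduces to the backward first difference and alpha = 0 to the identity, which is what makes the scheme a natural generalization of integer-order derivatives.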
A comprehensive approach to enhance emotion recognition through advanced feature extraction and Attention
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-27 DOI: 10.1016/j.bspc.2025.107860
A. Vidhyasekar , J. Jaya , B. Paulchamy , A. Muthukumar
Abstract: Emotion recognition from speech signals plays a critical role in domains such as mental-health evaluation and human–computer interaction. Traditional approaches often struggle to capture the intricate features and temporal relationships inherent in speech data, particularly in noisy environments. To address these limitations, this study introduces a novel hybrid model, CGAM (Capsule Networks, Gated Recurrent Units, and Attention Mechanism), which integrates Capsule Networks (CapsNet), Gated Recurrent Units (GRU), and an attention mechanism for robust speech and emotion recognition. CGAM leverages the hierarchical structure of CapsNet to extract layered features, while GRUs capture temporal dependencies in the data. The embedded attention mechanism focuses the model on salient features, improving its discriminative power. On the RAVDESS Emotional Speech Audio Dataset, CGAM achieves 98% accuracy, surpassing state-of-the-art methods in accuracy, precision, recall, and F1-score, and ablation studies validate the contribution of each component. This research offers a promising approach to advancing speech and emotion recognition systems, particularly in real-world noisy environments, and lays a foundation for future emotionally intelligent systems.
Citations: 0
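The attention mechanism described above pools a sequence of recurrent hidden states into one utterance vector by weighting salient time steps. A minimal numpy sketch of additive attention pooling (the scoring vector `w` stands in for a learned parameter; this is not the paper's exact formulation):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Score each time step, softmax the scores into weights that sum to 1,
    and return the weighted sum of hidden states.
    H: (T, d) hidden states (e.g. GRU outputs); w: (d,) scoring vector."""
    scores = np.tanh(H) @ w      # (T,) one salience score per time step
    alpha = softmax(scores)      # attention weights
    return alpha @ H, alpha      # pooled (d,) vector and the weights
```

The pooled vector replaces naive last-step or mean pooling, letting the classifier emphasize emotionally salient frames.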
VNLU-Net: Visual Network with Lightweight Union-net for Acute Myeloid Leukemia Detection on Heterogeneous Dataset
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-27 DOI: 10.1016/j.bspc.2025.107840
Rabul Saikia , Roopam Deka , Anupam Sarma , Ngangbam Herojit Singh , Muhammad Attique Khan , Salam Shuleenda Devi
Abstract: Recent advances in Artificial Intelligence (AI) and Deep Learning (DL) have shown promising results in Acute Myeloid Leukemia (AML) detection, but challenges remain due to limited annotated datasets and the need for precise architectures. This paper proposes VNLU-Net, a novel DL framework that integrates a frozen VGG16 with a lightweight Union-net module, which replaces the last three convolutional layers and the fully connected layers of VGG16. The frozen VGG16 layers provide robust feature extraction by leveraging pretrained weights, and Union-net refines these features with minimal parameters, enhancing model robustness and generalization. The proposed method achieves 99.37% accuracy on the BBCI_AML_2024 dataset and 99.71% on a heterogeneous dataset. Qualitative analysis using Gradient-weighted Class Activation Mapping (Grad-CAM) establishes the efficacy of the model, and comparative analysis shows its superiority over standard existing approaches in accuracy, precision, recall, F1-score, and specificity.
Citations: 0
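The frozen-backbone plus lightweight-head design above can be sketched as a toy numpy analogue: a fixed random projection stands in for the pretrained VGG16 features, and only a small head is updated by gradient steps. Everything here (names, sizes, the logistic-regression head) is an illustrative assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical frozen backbone: fixed random ReLU projection as a stand-in
# for pretrained VGG16 feature maps. Its weights are never updated.
W_frozen = rng.normal(size=(64, 16))

def backbone(x):
    return np.maximum(x @ W_frozen, 0.0)   # frozen ReLU features

# Lightweight trainable head (the Union-net stand-in): logistic regression.
w_head = np.zeros(16)

def train_step(x, y, lr=0.1):
    """One gradient step on the head only; W_frozen stays fixed."""
    global w_head
    feats = backbone(x)
    p = 1.0 / (1.0 + np.exp(-feats @ w_head))   # sigmoid predictions
    grad = feats.T @ (p - y) / len(y)           # gradient w.r.t. head weights
    w_head -= lr * grad
```

The design choice this illustrates: freezing the backbone cuts the trainable parameter count drastically, which is what makes small annotated AML datasets workable.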
An efficient framework based on large foundation model for cervical cytopathology whole slide image screening
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-27 DOI: 10.1016/j.bspc.2025.107859
Jialong Huang , Gaojie Li , Shichao Kan , Jianfeng Liu , Yixiong Liang
Abstract: Cervical cytopathology whole slide image (WSI) screening primarily relies on detection-based approaches, which are limited by the high cost and labor-intensive nature of detailed annotations. Multiple Instance Learning (MIL), a weakly supervised paradigm using only slide-level labels, offers a promising alternative; however, existing MIL methods often depend on frozen pretrained models or self-supervised learning for feature extraction, which are either ineffective or computationally inefficient. To address these challenges, we propose a novel, efficient framework for cervical cytopathology WSI classification that leverages unsupervised and weakly supervised learning to enhance patch-level feature extraction. Specifically, to tackle the high computational cost of training, the method introduces a mean pooling (MP)-based strategy that screens high-risk patches, reducing the number of patches to process given the sparse, dispersed nature of abnormal cells in WSIs. Additionally, we employ parameter-efficient fine-tuning (PEFT), training only additional linear layers, to significantly reduce the number of trainable parameters. Extensive experiments on the CSD and FNAC 2019 datasets demonstrate that the method consistently enhances the performance of various MIL frameworks, achieves state-of-the-art (SOTA) results, and enables faster inference. Notably, on the CSD dataset it improves specificity by 8.87% over existing approaches while maintaining the same 97.84% sensitivity. The code and trained models are publicly available at https://github.com/CVIU-CSU/TCT-InfoNCE.
Citations: 0
FusionLungNet: Multi-scale fusion convolution with refinement network for lung CT image segmentation
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-27 DOI: 10.1016/j.bspc.2025.107858
Sadjad Rezvani , Mansoor Fateh , Yeganeh Jalali , Amirreza Fateh
Abstract: Early detection of lung cancer is crucial, as it increases the chances of successful treatment. Automatic lung image segmentation assists doctors in identifying diseases such as lung cancer, COVID-19, and respiratory disorders. However, lung segmentation is challenging due to overlapping features such as vascular and bronchial structures, along with pixel-level fusion of brightness, color, and texture. Current lung segmentation methods face difficulties in identifying long-range relationships between image components, a reliance on convolution operations that may not capture all critical features, and the complex structure of the lungs. Furthermore, semantic gaps between feature maps can hinder the integration of relevant information, reducing model accuracy, and skip connections can limit the decoder's access to complete information, resulting in partial information loss during encoding. To overcome these challenges, we propose a hybrid approach using the FusionLungNet network, a multi-level structure whose key components include a ResNet-50 encoder, a Channel-wise Aggregation Attention (CAA) module, a Multi-scale Feature Fusion (MFF) block, a self-refinement (SR) module, and multiple decoders. The refinement sub-network uses convolutional neural networks for image post-processing to improve quality. Our method employs a combination of loss functions, including SSIM, IoU, and focal loss, to optimize image reconstruction quality. We created and publicly released a new lung segmentation dataset, LungSegDB, comprising 1800 CT images from the LIDC-IDRI dataset (dataset version 1) and 700 images from the Kaggle Chest CT Cancer Images dataset (dataset version 2). Our method achieved an IoU score of 98.04, outperforming existing methods and demonstrating significant improvements in segmentation accuracy. Both the dataset and code are publicly available.
Citations: 0
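Of the losses listed above, the IoU term is the simplest to make concrete: a soft (differentiable) intersection-over-union computed on probability maps. A minimal numpy sketch, with `eps` an assumed smoothing constant:

```python
import numpy as np

def soft_iou_loss(pred, target, eps=1e-6):
    """Differentiable IoU loss on probability maps: 1 - |P∩T| / |P∪T|.
    pred, target: arrays of per-pixel probabilities in [0, 1]."""
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)
```

Unlike per-pixel cross-entropy, this loss directly optimizes the overlap metric segmentation papers report, which is why it is commonly combined with SSIM and focal terms.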
Advancing early detection of sepsis with physiological variable interactions and temporal contrastive learning in critical care
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-27 DOI: 10.1016/j.bspc.2025.107827
Da Huang, Tao Tan, Yue Sun
Abstract: Sepsis presents a critical challenge in Intensive Care Units (ICUs) due to its rapid onset and complex etiology, necessitating accurate and timely diagnosis to reduce mortality. However, existing methods often fail to capture the intricate interactions among physiological variables and lack mechanisms for discovering frequency-domain patterns, which are crucial for detecting subtle, clinically significant signs of sepsis. To address these limitations, we propose a novel sepsis prediction framework that integrates a Variable Interaction Graph Neural Network (VIGNN) with a Temporal Contrastive Loss (TCL). First, VIGNN models the intricate relationships among physiological variables. Second, a frequency-masking augmentation strategy selectively focuses on important frequency components, generating augmented views that emphasize critical frequency-domain features. Finally, TCL aligns the representations of the frequency-enhanced and original views of the same sample while distinguishing them from other samples at multiple temporal scales, forcing the model to uncover frequency-domain patterns that complement time-domain features for a richer, more robust representation. On the Beth Israel Deaconess Medical Center and Emory University Hospital datasets, the framework achieves AUROC scores of 81.17% and 84.48%, respectively — improvements of 2.49% and 2.45% over state-of-the-art methods — enabling clinicians to deliver more timely and targeted interventions. The code is publicly available at https://github.com/Hgnnhd/VIGNN-TCL-master.
Citations: 0
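The frequency-masking augmentation above can be sketched as: transform the signal to the frequency domain, zero out a random band of coefficients, and invert. The band width and placement below are illustrative assumptions, not the paper's schedule:

```python
import numpy as np

def frequency_mask(x, mask_frac=0.2, rng=None):
    """Zero a random contiguous band of FFT coefficients, then invert,
    producing an augmented view that perturbs frequency content while
    preserving the signal length."""
    if rng is None:
        rng = np.random.default_rng()
    X = np.fft.rfft(x)                       # real-input FFT coefficients
    n = len(X)
    w = max(1, int(n * mask_frac))           # width of the masked band
    start = rng.integers(0, n - w + 1)
    X[start:start + w] = 0                   # suppress that frequency band
    return np.fft.irfft(X, n=len(x))         # back to the time domain
```

Contrasting such a view against the original signal pushes the encoder to rely on frequency components outside the masked band, which is the mechanism TCL exploits.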
Human embryo stage classification using an enhanced R(2 + 1)D model and dynamic programming with optimized datasets
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-26 DOI: 10.1016/j.bspc.2025.107841
Abbas Barhoun , Mohammad Ali Balafar , Amin Golzari Oskouei , Leila Sadeghi
Abstract: Infertility affects millions of couples worldwide, and Assisted Reproductive Technology (ART), particularly In Vitro Fertilization (IVF), offers hope for many. The success of IVF depends critically on accurately assessing embryo quality, yet traditional assessment methods relying on subjective morphological criteria face significant limitations, underscoring the need for more reliable approaches. This study proposes an advanced embryo-evaluation model built on a three-dimensional deep learning framework with a refined ResNet-R(2 + 1)D architecture. The model incorporates Spatial-only Self-Attention (SSA) and Squeeze-and-Excitation (SE) blocks to enhance spatial and channel-wise feature extraction, with convolutional blocks (convB) integrated before and after the network to align feature representations. Dynamic programming with the Viterbi algorithm ensures biologically consistent predictions during post-processing. The model is trained on a balanced, meticulously pre-processed dataset of time-lapse microscopy images, addressing issues of data imbalance and quality. Experiments show an accuracy of 93.3% — a 13.1% improvement over the baseline R(2 + 1)D model trained on a balanced dataset. Compared with state-of-the-art methods, the model offers strong accuracy and scalability, handling the classification of 15 embryo developmental stages. These findings highlight the significant potential of advanced deep learning techniques for improving embryo selection and IVF success rates.
Citations: 0
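The Viterbi post-processing above can be sketched as a dynamic program, assuming the biological constraint is that an embryo's developmental stage never regresses over time (a natural reading of "biologically consistent"; the paper's exact transition model is not specified):

```python
import numpy as np

def monotone_viterbi(log_probs):
    """Most likely stage sequence under the constraint that the stage is
    non-decreasing over time. log_probs: (T, S) per-frame log-likelihoods."""
    T, S = log_probs.shape
    dp = np.full((T, S), -np.inf)      # dp[t, s]: best score ending at stage s
    back = np.zeros((T, S), dtype=int)
    dp[0] = log_probs[0]
    for t in range(1, T):
        best = np.maximum.accumulate(dp[t - 1])   # max over stages <= s
        arg = np.zeros(S, dtype=int)              # running argmax over <= s
        for s in range(1, S):
            arg[s] = s if dp[t - 1, s] >= dp[t - 1, arg[s - 1]] else arg[s - 1]
        dp[t] = best + log_probs[t]
        back[t] = arg
    path = np.empty(T, dtype=int)
    path[-1] = int(dp[-1].argmax())
    for t in range(T - 1, 0, -1):      # trace back through predecessors
        path[t - 1] = back[t, path[t]]
    return path
```

Given noisy per-frame classifier outputs, this decoding suppresses impossible stage regressions that a frame-by-frame argmax would produce.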
Mental stress detection and performance enhancement using fNIRS and wrist vibrator biofeedback
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-26 DOI: 10.1016/j.bspc.2025.107877
Anita Beigzadeh , Vahid Yazdnian , Seyed Kamaledin Setarehdan
Abstract: Daily life frequently exposes individuals to varying levels of mental stress, which can adversely affect performance, so effective strategies for stress management and performance improvement are essential. This paper presents a comprehensive, portable, real-time biofeedback system aimed at improving individuals' stress-management capabilities and, ultimately, mental-task performance. The system consists of a real-time brain-signal acquisition device, a wireless vibration biofeedback unit, and a software-based stress-level classifier, with all components integrated to minimize time delay. Various signal processing and feature extraction techniques combined with machine learning are employed for online stress detection. Experimental results show 83% accuracy and 92% recall in detecting true levels of mental stress in the classification module. In addition, the complete biofeedback system was tested on 20 participants in a controlled experimental setup, yielding a 55% reduction in stress levels and a 24.5% improvement in task accuracy. These findings support the effectiveness of the proposed system in stress management and performance improvement, validating its core premise of stress reduction through reward-based learning.
Citations: 0
Automatic colorectal cancer detection using machine learning and deep learning based on feature selection in histopathological images
IF 4.9 · CAS Region 2 · Medicine
Biomedical Signal Processing and Control Pub Date : 2025-03-26 DOI: 10.1016/j.bspc.2025.107866
Hawkar Haji Said Junaid , Fatemeh Daneshfar , Mahmud Abdulla Mohammad
Abstract: Colorectal cancer (CRC) accounts for 10% of global cancer cases and is the third most prevalent type, with a significant increase anticipated in the coming years. This trend underscores the need for precise diagnostics, as effective treatment depends on accurate histopathological analysis of hematoxylin and eosin (H&E) stained biopsies. However, manual evaluation of biopsies is labor-intensive and prone to errors from staining variations and inconsistencies, complicating the work of pathologists. To address these challenges, advanced automated image analysis incorporating deep learning (DL) and machine learning (ML) techniques has substantially improved computer-aided diagnosis systems. This paper proposes a composite DL–ML model that enhances the accuracy of CRC diagnosis while reducing computational complexity and preventing overfitting. It employs a cascaded design: feature extraction with MobileNetV2 and DenseNet121 using transfer learning (TL), dataset balancing via the Synthetic Minority Over-sampling Technique (SMOTE), key-feature selection through a Chi-square test, and classification by ML algorithms with hyperparameter tuning. The proposed model achieves high accuracy, precision, recall, F1-score, and area under the curve (AUC) on the Extended Bioimaging Histopathological Image Segmentation (EBHI-Seg) and multi-class datasets, outperforming existing methods.
Citations: 0
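The Chi-square feature-selection step above scores each nonnegative feature by how strongly its per-class observed totals deviate from the totals expected under class independence. A minimal numpy sketch of that statistic (mirroring the common formulation; not the paper's exact implementation):

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square statistic of each nonnegative feature against class labels;
    higher score = stronger class dependence, so keep the top-scoring features.
    X: (n_samples, n_features) nonnegative values; y: (n_samples,) labels."""
    classes = np.unique(y)
    # Observed per-class feature totals: shape (n_classes, n_features).
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    # Expected totals if features were independent of class.
    class_prob = np.array([(y == c).mean() for c in classes])[:, None]
    expected = class_prob * X.sum(axis=0)[None, :]
    return ((observed - expected) ** 2 / np.maximum(expected, 1e-12)).sum(axis=0)
```

A feature concentrated in one class scores high; a feature distributed in proportion to class sizes scores near zero and would be dropped before classification.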