{"title":"Automated detection of abnormal respiratory sound from electronic stethoscope and mobile phone using MobileNetV2","authors":"Ximing Liao , Yin Wu , Nana Jiang , Jiaxing Sun , Wujian Xu , Shaoyong Gao , Jun Wang , Ting Li , Kun Wang , Qiang Li","doi":"10.1016/j.bbe.2023.11.001","DOIUrl":"https://doi.org/10.1016/j.bbe.2023.11.001","url":null,"abstract":"<div><p>Auscultation, a traditional clinical examination method using a stethoscope to quickly assess airway abnormalities, remains valuable due to its real-time, non-invasive, and easy-to-perform nature. Recent advancements in computerized respiratory sound analysis (CRSA) have provided a quantifiable approach for recording, editing, and comparing respiratory sounds, also enabling the training of artificial intelligence models to fully excavate the potential of auscultation. However, existing sound analysis models often require complex computations, leading to prolonged processing times and high calculation and memory requirements. Moreover, the limited diversity and scope of available databases limits reproducibility and robustness, mainly relying on small sample datasets primarily collected from Caucasians. In order to overcome these limitations, we developed a new Chinese adult respiratory sound database, LD-DF RSdb, using an electronic stethoscope and mobile phone. By enrolling 145 participants, 9,584 high quality recordings were collected, containing 6,435 normal sounds, 2,782 crackles, 208 wheezes, and 159 combined sounds. Subsequently, we utilized a lightweight neural network architecture, MobileNetV2, for automated categorization of the four types of respiratory sounds, achieving an appreciable overall performance with an AUC of 0.8923. This study demonstrates the feasibility and potential of using mobile phones, electronic stethoscopes, and MobileNetV2 in CRSA. 
The proposed method offers a convenient and promising approach to enhance overall respiratory disease management and may help address healthcare resource disparities.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0208521623000608/pdfft?md5=eb2d1ad12271a18266dc09d4d5b9b3c9&pid=1-s2.0-S0208521623000608-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138448149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
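A minimal sketch of the front end such a CRSA pipeline needs: turning a recording into a log-spectrogram "image" that a 2D CNN such as MobileNetV2 can consume. The paper's exact preprocessing is not given above, so the FFT size, hop length, and sampling rate below are illustrative assumptions.

```python
import numpy as np

def log_spectrogram(signal, n_fft=256, hop=128):
    """Short-time Fourier power on a Hann window, log-compressed.
    Returns a (freq_bins, frames) array usable as a 2D CNN input."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)) ** 2)
    spec = np.array(frames).T          # (n_fft // 2 + 1, n_frames)
    return np.log(spec + 1e-10)        # log compression tames the dynamic range

# one second of a 440 Hz tone at 8 kHz as a stand-in for a lung-sound recording
sr = 8000
t = np.arange(sr) / sr
spec = log_spectrogram(np.sin(2 * np.pi * 440 * t))
```

In practice such a spectrogram would be resized and replicated to three channels to match MobileNetV2's expected input shape.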
{"title":"Multi-stage fully convolutional network for precise prostate segmentation in ultrasound images","authors":"Yujie Feng , Chukwuemeka Clinton Atabansi , Jing Nie , Haijun Liu , Hang Zhou , Huai Zhao , Ruixia Hong , Fang Li , Xichuan Zhou","doi":"10.1016/j.bbe.2023.08.002","DOIUrl":"10.1016/j.bbe.2023.08.002","url":null,"abstract":"<div><p><span><span>Prostate cancer is one of the most commonly diagnosed non-cutaneous malignant tumors and the sixth major cause of cancer-related death generally found in men globally. Automatic segmentation of prostate regions has a wide range of applications in prostate cancer diagnosis and treatment. It is challenging to extract powerful spatial features for precise prostate </span>segmentation methods due to the wide variation in prostate size, shape, and histopathologic heterogeneity among patients. Most of the existing CNN-based architectures often produce unsatisfactory results and inaccurate boundaries in prostate segmentation, which are caused by inadequate discriminative feature maps and the limited amount of spatial information. To address these issues, we propose a novel </span>deep learning<span> technique called Multi-Stage FCN architecture for 2D prostate segmentation that captures more precise spatial information and accurate prostate boundaries. In addition, a new prostate ultrasound image dataset known as CCH-TRUSPS was collected from Chongqing University Cancer Hospital, including prostate ultrasound images of various prostate cancer architectures. We evaluate our method on the CCH-TRUSPS dataset and the publicly available Multi-site T2-weighted MRI dataset using five commonly used metrics for medical image analysis. When compared to other CNN-based methods on the CCH-TRUSPS test set, our Multi-Stage FCN achieves the highest and best binary accuracy of 99.15%, the DSC score of 94.90%, the IoU score of 89.80%, the precision of 94.67%, and the recall of 96.49%. 
The statistical and visual results demonstrate that our approach outperforms previous CNN-based techniques in all respects and can be used for the clinical diagnosis of prostate cancer.</span></p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43556776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention-guided multiple instance learning for COPD identification: To combine the intensity and morphology","authors":"Yanan Wu , Shouliang Qi , Jie Feng , Runsheng Chang , Haowen Pang , Jie Hou , Mengqi Li , Yingxi Wang , Shuyue Xia , Wei Qian","doi":"10.1016/j.bbe.2023.06.004","DOIUrl":"10.1016/j.bbe.2023.06.004","url":null,"abstract":"<div><p><span>Chronic obstructive pulmonary disease<span> (COPD) is a complex and multi-component respiratory disease. Computed tomography (CT) images can characterize lesions in COPD patients, but the image intensity and morphology of lung components have not been fully exploited. Two datasets (Dataset 1 and 2) comprising a total of 561 subjects were obtained from two centers. A multiple instance learning (MIL) method is proposed for COPD identification. First, randomly selected slices (instances) from CT scans and multi-view 2D snapshots of the 3D </span></span>airway tree<span><span> and lung field extracted from CT images are acquired. Then, three attention-guided MIL models (slice-CT, snapshot-airway, and snapshot-lung-field models) are trained. In these models, a deep convolution<span> neural network (CNN) is utilized for feature extraction. Finally, the outputs of the above three MIL models are combined using </span></span>logistic regression to produce the final prediction. For Dataset 1, the accuracy of the slice-CT MIL model with 20 instances was 88.1%. The backbone of VGG-16 outperformed Alexnet, Resnet18, Resnet26, and Mobilenet_v2 in feature extraction. The snapshot-airway and snapshot-lung-field MIL models achieved accuracies of 89.4% and 90.0%, respectively. After the three models were combined, the accuracy reached 95.8%. The proposed model outperformed several state-of-the-art methods and afforded an accuracy of 83.1% for the external dataset (Dataset 2). The proposed weakly supervised MIL method is feasible for COPD identification. 
The effective CNN module and attention-guided MIL pooling module contribute to performance enhancement. The morphology information of the airway and lung field is beneficial for identifying COPD.</span></p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42298880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
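The attention-guided MIL pooling described above can be sketched in a few lines of numpy, following the common tanh-attention formulation (dimensions and parameters here are illustrative, not the paper's): each instance is scored, scores are softmax-normalized, and the bag embedding is the attention-weighted sum of instance features.

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling.
    H: (n_instances, d) instance features from the CNN backbone;
    V: (d, k) and w: (k,) attention parameters.
    Returns the attention-weighted bag embedding and the weights."""
    scores = np.tanh(H @ V) @ w              # one scalar score per instance
    scores -= scores.max()                   # numerical stability for softmax
    A = np.exp(scores) / np.exp(scores).sum()
    return A @ H, A                          # bag embedding (d,), weights (n,)

rng = np.random.default_rng(0)
H = rng.normal(size=(20, 8))                 # e.g. 20 CT slices as instances
z, A = attention_mil_pool(H, rng.normal(size=(8, 4)), rng.normal(size=4))
```

In the paper's setup, three such bags (CT slices, airway snapshots, lung-field snapshots) each yield a prediction, and logistic regression fuses the three outputs.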
{"title":"Efficient simultaneous segmentation and classification of brain tumors from MRI scans using deep learning","authors":"Akshya Kumar Sahoo , Priyadarsan Parida , K. Muralibabu , Sonali Dash","doi":"10.1016/j.bbe.2023.08.003","DOIUrl":"10.1016/j.bbe.2023.08.003","url":null,"abstract":"<div><p><span>Brain tumors can be difficult to diagnose, as they may have similar radiographic characteristics, and a thorough examination may take a considerable amount of time. To address these challenges, we propose an intelligent system for the automatic extraction and identification of brain tumors from 2D CE MRI images. Our approach comprises two stages. In the first stage, we use an encoder-decoder based U-net with residual network<span><span><span> as the backbone to detect different types of brain tumors, including glioma, meningioma, and </span>pituitary tumors. Our method achieved an accuracy of 99.60%, a sensitivity of 90.20%, a specificity of 99.80%, a </span>dice similarity coefficient of 90.11%, and a precision of 90.50% for tumor extraction. In the second stage, we employ a YOLO2 (you only look once) based </span></span>transfer learning<span> approach to classify the extracted tumors, achieving a classification accuracy of 97%. Our proposed approach outperforms state-of-the-art methods found in the literature. 
The results demonstrate the potential of our method to aid in the diagnosis and treatment of brain tumors.</span></p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45686531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
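The reported metrics (accuracy, sensitivity, specificity, Dice, precision) all derive from the confusion counts of a predicted mask against the ground truth; a generic sketch, not the authors' evaluation code:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Binary accuracy, Dice, sensitivity (recall), specificity, and
    precision from two same-shaped binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)       # predicted tumor, truly tumor
    fp = np.sum(pred & ~gt)      # predicted tumor, actually background
    fn = np.sum(~pred & gt)      # missed tumor
    tn = np.sum(~pred & ~gt)     # correctly rejected background
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }

m = seg_metrics(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0]))
```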
{"title":"BA-Net: Brightness prior guided attention network for colonic polyp segmentation","authors":"Haiying Xia , Yilin Qin , Yumei Tan , Shuxiang Song","doi":"10.1016/j.bbe.2023.08.001","DOIUrl":"10.1016/j.bbe.2023.08.001","url":null,"abstract":"<div><p>Automatic polyp segmentation at colonoscopy plays an important role in the early diagnosis and surgery of colorectal cancer. However, the diversity of polyps in different images greatly increases the difficulty of accurately segmenting polyps. Manual segmentation of polyps in colonoscopic images is time-consuming and the rate of polyps missed remains high. In this paper, we propose a brightness prior guided attention network (BA-Net) for automatic polyp segmentation. Specifically, we first aggregate the high-level features of the last three layers of the encoder with an enhanced receptive field (ERF) module, which further fed to the decoder to obtain the initial prediction maps. Then, we introduce a brightness prior fusion (BF) module that fuses the brightness prior information into the multi-scale side-out high-level semantic features. The BF module aims to induce the network to localize salient regions, which may be potential polyps, to obtain better segmentation results. Finally, we propose a global reverse attention (GRA) module to combine the output of the BF module and the initial prediction map for obtaining long-range dependence and reverse refinement prediction results. With iterative refinement from higher-level semantics to lower-level semantics, our BA-Net can achieve more refined and accurate segmentation. 
Extensive experiments show that our BA-Net outperforms the state-of-the-art methods on six common polyp datasets.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44506676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
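The reverse-attention idea behind modules like GRA — suppress what the coarse map already predicts confidently so refinement concentrates on residual, typically boundary, regions — reduces to weighting features by one minus the sigmoid of the coarse logits. A toy numpy sketch (shapes illustrative, not BA-Net's exact module):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features, coarse_logits):
    """Weight feature maps by (1 - sigmoid(coarse prediction)) so that
    confidently-foreground pixels are erased and the refinement branch
    attends to the remaining regions, e.g. object boundaries.
    features: (channels, H, W); coarse_logits: (H, W)."""
    rev = 1.0 - sigmoid(coarse_logits)     # ~1 where background is predicted
    return features * rev[None, :, :]      # broadcast over channels

feats = np.ones((3, 2, 2))
logits = np.array([[10.0, -10.0], [0.0, 10.0]])  # toy coarse prediction
out = reverse_attention(feats, logits)
```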
{"title":"Non-invasive waveform analysis for emergency triage via simulated hemorrhage: An experimental study using novel dynamic lower body negative pressure model","authors":"Naimahmed Nesaragi , Lars Øivind Høiseth , Hemin Ali Qadir , Leiv Arne Rosseland , Per Steinar Halvorsen , Ilangko Balasingham","doi":"10.1016/j.bbe.2023.06.002","DOIUrl":"https://doi.org/10.1016/j.bbe.2023.06.002","url":null,"abstract":"<div><p>The extent to which advanced waveform analysis of non-invasive physiological signals can diagnose levels of hypovolemia remains insufficiently explored. The present study explores the discriminative ability of a deep learning (DL) framework to classify levels of ongoing hypovolemia, simulated via novel dynamic lower body negative pressure (LBNP) model among healthy volunteers. We used a dynamic LBNP protocol as opposed to the traditional model, where LBNP is applied in a predictable step-wise, progressively descending manner. This dynamic LBNP version assists in circumventing the problem posed in terms of time dependency, as in real-life pre-hospital settings intravascular blood volume may fluctuate due to volume resuscitation. A supervised DL-based framework for ternary classification was realized by segmenting the underlying noninvasive signal and labeling segments with corresponding LBNP target levels. The proposed DL model with two inputs was trained with respective time–frequency representations extracted on waveform segments to classify each of them into blood volume loss: Class 1 (mild); Class 2 (moderate); or Class 3 (severe). At the outset, the latent space derived at the end of the DL model via late fusion among both inputs assists in enhanced classification performance. 
When evaluated in a 3-fold cross-validation setup with stratified subjects, the experimental findings demonstrated PPG to be a potential surrogate for variations in blood volume, with an average classification performance of AUROC: 0.8861, AUPRC: 0.8141, <span><math><mrow><mi>F</mi><mn>1</mn></mrow></math></span>-score: 72.16%, sensitivity: 79.06%, and specificity: 89.21%. Our proposed DL algorithm on the PPG signal demonstrates the possibility of capturing the complex interplay of physiological responses related to both bleeding and fluid resuscitation using this challenging LBNP setup.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49761293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
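AUROC, the headline figure above, is the probability that a randomly chosen positive segment outscores a randomly chosen negative one (the Mann-Whitney formulation); a small self-contained sketch with toy scores, not the study's data:

```python
import numpy as np

def auroc(scores, labels):
    """Mann-Whitney formulation of AUROC: the fraction of
    (positive, negative) pairs ranked correctly, ties counted as half."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    correct = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (correct + 0.5 * ties) / (len(pos) * len(neg))

auc = auroc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```

A multi-class study such as this one would typically macro-average such one-vs-rest AUROCs across the three classes.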
{"title":"Transformer-based cross-modal multi-contrast network for ophthalmic diseases diagnosis","authors":"Yang Yu, Hongqing Zhu","doi":"10.1016/j.bbe.2023.06.001","DOIUrl":"10.1016/j.bbe.2023.06.001","url":null,"abstract":"<div><p><span><span>Automatic diagnosis of various ophthalmic diseases from ocular medical images is vital to support clinical decisions. Most current methods employ a single </span>imaging modality<span>, especially 2D fundus images. Considering that the diagnosis of ophthalmic diseases can greatly benefit from multiple imaging modalities, this paper further improves the accuracy of diagnosis by effectively utilizing cross-modal data. In this paper, we propose Transformer-based cross-modal multi-contrast network for efficiently fusing color fundus photograph (CFP) and optical coherence tomography (OCT) modality to diagnose ophthalmic diseases. We design multi-contrast learning strategy to extract discriminate features from cross-modal data for diagnosis. Then channel fusion head captures the semantically shared information across different modalities and the similarity features between patients of the same category. Meanwhile, we use a class-balanced training strategy to cope with the situation that medical datasets are usually class-imbalanced. Our method is evaluated on public benchmark datasets for cross-modal ophthalmic disease diagnosis. The experimental results demonstrate that our method outperforms other approaches. 
The code and models are available at </span></span><span>https://github.com/ecustyy/tcmn</span>.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43259056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of various lung diseases including COVID-19 using extreme learning machine algorithm based on the features extracted from a lightweight CNN architecture","authors":"Md. Nahiduzzaman , Md Omaer Faruq Goni , Md. Robiul Islam , Abu Sayeed , Md. Shamim Anower , Mominul Ahsan , Julfikar Haider , Marcin Kowalski","doi":"10.1016/j.bbe.2023.06.003","DOIUrl":"10.1016/j.bbe.2023.06.003","url":null,"abstract":"<div><p>Around the world, several lung diseases such as pneumonia, cardiomegaly, and tuberculosis (TB) contribute to severe illness, hospitalization or even death, particularly for elderly and medically vulnerable patients. In the last few decades, several new types of lung-related diseases have taken the lives of millions of people, and COVID-19 has taken almost 6.27 million lives. To fight against lung diseases, timely and correct diagnosis with appropriate treatment is crucial in the current COVID-19 pandemic. In this study, an intelligent recognition system for seven lung diseases has been proposed based on machine learning (ML) techniques to aid the medical experts. Chest X-ray (CXR) images of lung diseases were collected from several publicly available databases. A lightweight convolutional neural network (CNN) has been used to extract characteristic features from the raw pixel values of the CXR images. The best feature subset has been identified using the Pearson Correlation Coefficient (PCC). Finally, the extreme learning machine (ELM) has been used to perform the classification task to assist faster learning and reduced computational complexity. The proposed CNN-PCC-ELM model achieved an accuracy of 96.22% with an Area Under Curve (AUC) of 99.48% for eight class classification. The outcomes from the proposed model demonstrated better performance than the existing state-of-the-art (SOTA) models in the case of COVID-19, pneumonia, and tuberculosis detection in both binary and multiclass classifications. 
For eight-class classification, the proposed model achieved a precision of 100%, recall of 99%, F1-score of 100%, and ROC-AUC of 99.99% for COVID-19 detection, demonstrating its robustness. Therefore, the proposed model outperforms the existing pioneering models in accurately differentiating COVID-19 from the other lung diseases, which can assist physicians in treating patients effectively.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42255709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
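The ELM head that follows the CNN-PCC feature extraction is, at its core, a fixed random hidden layer whose output weights are solved in closed form by least squares. A compact sketch on toy 2-D blobs (the CNN features and PCC selection are upstream and omitted; all sizes here are illustrative):

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random input weights are never trained;
    only the output weights are fit, via the Moore-Penrose pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                 # random hidden features
    T = np.eye(int(y.max()) + 1)[y]        # one-hot targets
    beta = np.linalg.pinv(H) @ T           # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(5, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
W, b, beta = train_elm(X, y)
acc = (predict_elm(X, W, b, beta) == y).mean()
```

Because only `beta` is fitted, and by a single pseudo-inverse, training is fast — which is the "reduced computational complexity" argument for ELM over backpropagation.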
{"title":"MDCF_Net: A Multi-dimensional hybrid network for liver and tumor segmentation from CT","authors":"Jian Jiang , Yanjun Peng , Qingfan Hou , Jiao Wang","doi":"10.1016/j.bbe.2023.04.004","DOIUrl":"10.1016/j.bbe.2023.04.004","url":null,"abstract":"<div><p><span><span>The segmentation of the liver and liver tumors is critical in the diagnosis of liver cancer, and the high mortality rate of liver cancer has made it one of the most popular areas for segmentation research. Some deep learning </span>segmentation methods outperformed traditional methods in terms of segmentation results. However, they are unable to obtain satisfactory segmentation results due to blurred original image boundaries, the presence of noise, very small lesion sites, and other factors. In this paper, we propose MDCF_Net, which has dual encoding branches composed of </span>CNN and CnnFormer and can fully utilize multi-dimensional image features. First, it extracts both intra-slice and inter-slice information and improves the accuracy of the network output by symmetrically using multi-dimensional fusion layers. In the meantime, we propose a novel feature map stacking approach that focuses on the correlation of adjacent channels of two feature maps, improving the network's ability to perceive 3D features. Furthermore, the two coding branches collaborate to obtain both texture and edge features, and the network segmentation performance is further improved. Extensive experiments were carried out on the public datasets LiTS to determine the optimal slice thickness for this task. 
The superiority of the segmentation performance of our proposed MDCF_Net was confirmed by comparison with other leading methods on two public datasets, the LiTS and the 3DIRCADb.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47898926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting muscle fatigue during dynamic contractions using wavelet analysis of surface electromyography signal","authors":"MohammadJavad Shariatzadeh , Ehsan Hadizadeh Hafshejani , Cameron J.Mitchell , Mu Chiao , Dana Grecov","doi":"10.1016/j.bbe.2023.04.002","DOIUrl":"10.1016/j.bbe.2023.04.002","url":null,"abstract":"<div><p>Muscle fatigue is defined as a reduction in the capability of muscle to exert force or power. Although surface electromyography<span> (sEMG) signals during exercise have been used to assess muscle fatigue, analyzing the sEMG signal during dynamic contractions is difficult because of the many signal distorting factors such as electrode movements, and variations in muscle tissue conductivity. Besides the non-deterministic and non-stationary nature of sEMG in dynamic contractions, no fatigue indicator is available to predict the ability of a muscle to apply force based on the sEMG signal properties.</span></p><p>In this study, we designed and manufactured a novel wearable sensor<span><span> system with both sEMG electrodes and motion tracking sensors to monitor the dynamic muscle movements of human subjects. We detected the state of muscle fatigue using a new </span>wavelet analysis method to predict the maximum isometric force the subject can apply during dynamic contraction.</span></p><p>Our method of signal processing consists of four main steps. 1- Segmenting sEMG signals using motion tracking signals. 2- Determine the most suitable mother wavelet for discrete wavelet transformation (DWT) based on cross-correlation between wavelets and signals. 3- Deoinsing the sEMG using the DWT method. 
4- Calculating the normalized energy at different decomposition levels<span> to predict the maximal voluntary isometric contraction force as an indicator of muscle fatigue.</span></p><p>The monitoring system was tested on healthy adults doing biceps curl exercises, and the results of the wavelet decomposition method were compared to well-known muscle fatigue indices in the literature.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":null,"pages":null},"PeriodicalIF":6.4,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45374300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
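Step 4 above — normalized energy per decomposition level — can be illustrated with a hand-rolled Haar DWT. The study selects its mother wavelet by cross-correlation with the signal, so the Haar choice here is purely an assumption for the sketch:

```python
import numpy as np

def haar_energy_distribution(x, levels=3):
    """Decompose x with a Haar DWT and return the fraction of total
    energy in each detail band (finest first) plus the final
    approximation band."""
    approx = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        if len(approx) % 2:                              # pad odd lengths
            approx = np.append(approx, approx[-1])
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # detail
        energies.append(np.sum(d ** 2))
        approx = a
    energies.append(np.sum(approx ** 2))                 # coarsest band
    total = sum(energies)
    return [e / total for e in energies]

dist = haar_energy_distribution(np.ones(8))              # constant "signal"
```

A drift of the energy fraction toward the coarser (lower-frequency) bands over successive contractions is the classic sEMG signature of fatigue, which is what such an indicator tracks.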