{"title":"Electroglottography-based speech content classification using stacked BiLSTM-FCN network for clinical applications","authors":"Srinidhi Kanagachalam, Deok-Hwan Kim","doi":"10.1016/j.csl.2025.101886","DOIUrl":null,"url":null,"abstract":"<div><div>In this study, we introduce a newer approach to classify the human speech contents based on Electroglottographic (EGG) signals. In general, identifying human speech using EGG signals is challenging and unaddressed, as human speech may contain pathology due to vocal cord damage. In this paper, we propose a deep learning-based approach called Stacked BiLSTM-FCN to identify the speech contents for both the healthy and pathological person. This deep learning-based technique integrates a recurrent neural network (RNN) that utilizes bidirectional long short-term memory (BiLSTM) with a convolutional network that uses a squeeze and excitation layer, learns features from the EGG signals and classifies them based on the learned features. Experiments on the existing Saarbruecken Voice Database (SVD) dataset containing healthy and pathological voices with different pitch levels showed an accuracy of 92.09% on the proposed model. 
Further evaluations prove the generalization performance and robustness of the proposed method for application in clinical laboratories to identify speech contents with different pathologies and varying accent types.</div></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":"96 ","pages":"Article 101886"},"PeriodicalIF":3.4000,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230825001111","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
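The squeeze-and-excitation layer mentioned in the abstract recalibrates convolutional feature channels: it "squeezes" each channel to a single summary statistic, passes the summaries through a small bottleneck network, and rescales the channels by the resulting weights. The following is a minimal NumPy sketch of that recalibration step only — not the authors' implementation; the reduction ratio `r`, the random weights, and the (channels, time) layout are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze_excite(x, w1, w2):
    """Squeeze-and-excitation over a (channels, time) feature map."""
    # Squeeze: global average pooling over the time axis -> one value per channel
    s = x.mean(axis=1)                            # shape (C,)
    # Excitation: bottleneck MLP yields per-channel weights in (0, 1)
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))     # shape (C,)
    # Recalibrate: rescale each channel by its weight
    return x * e[:, None]

rng = np.random.default_rng(0)
C, T, r = 8, 100, 2                               # channels, time steps, reduction ratio (assumed)
x = rng.standard_normal((C, T))                   # stand-in for conv features of an EGG signal
w1 = rng.standard_normal((C // r, C)) * 0.1       # squeeze -> bottleneck
w2 = rng.standard_normal((C, C // r)) * 0.1       # bottleneck -> channel weights
y = squeeze_excite(x, w1, w2)
print(y.shape)                                    # same (C, T) shape as the input
```

In the proposed architecture this recalibrated feature map would be pooled and concatenated with the BiLSTM branch's output before the final classification layer; here the weights are random purely to show the data flow.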
Journal overview:
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.