Augmentative and alternative speech communication (AASC) aid for people with dysarthria
Mariya Celin T.A., Vijayalakshmi P., Nagarajan T., Mrinalini K.
Computer Speech and Language, Volume 92, Article 101777 (published 2025-01-22). DOI: 10.1016/j.csl.2025.101777
Citation count: 0
Abstract
Speech assistive aids are designed to enhance the intelligibility of speech, particularly for individuals with speech impairments such as dysarthria, by utilizing speech recognition and speech synthesis systems. The development of these devices promotes independence and employability for dysarthric individuals and facilitates their natural communication. However, the availability of speech assistive aids is limited by several challenges, including the need to train a dysarthric speech recognition system tailored to the errors of dysarthric speakers, the portability required for use by any dysarthric individual with motor disorders, the need to sustain an adequate speech communication rate, and the cost of developing such aids. To address this, the current work develops a portable, affordable, and personalized augmentative and alternative speech communication aid tailored to each dysarthric speaker's needs. The dysarthric speech recognition system used in this aid is trained with a transfer learning approach, in which speech data from normal speakers form the source model and augmented dysarthric speech data form the target model. Data augmentation of the dysarthric speech is performed using a virtual microphone and multi-resolution-based feature extraction approach (VM-MRFE), previously proposed by the authors, to increase the quantity of the target speech data and improve recognition accuracy. The recognized text is synthesized into intelligible speech using a hidden Markov model (HMM)-based text-to-speech synthesis system. To enhance accessibility, the recognizer and synthesizer are ported onto the Raspberry Pi platform, along with a collar microphone and loudspeaker. The real-time performance of the aid with dysarthric users is examined; recognition is achieved in under 3 s and synthesis in 1.4 s, giving an overall speech delivery time of roughly 4.4 s.
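The abstract describes a recognize-then-resynthesize pipeline running on a Raspberry Pi with a collar microphone and loudspeaker. The sketch below is a minimal, hypothetical illustration of how such a loop could be wired together; the function names recognize_dysarthric_speech and synthesize_speech, the sample rate, and the capture window are assumptions standing in for the paper's transfer-learned recognizer and HMM-based synthesizer, not the authors' implementation. Only the audio I/O (sounddevice) is a real library call.

# Minimal sketch of a recognize-then-synthesize loop on a Raspberry Pi.
# The ASR and TTS back-ends are placeholders for the paper's personalized
# dysarthric recognizer and HMM-based text-to-speech system.
import numpy as np
import sounddevice as sd

FS = 16000           # sample rate in Hz (assumed) for the collar microphone
UTT_SECONDS = 3      # capture window per utterance (assumed)

def recognize_dysarthric_speech(audio: np.ndarray) -> str:
    """Placeholder for the transfer-learned dysarthric ASR system."""
    raise NotImplementedError("load the speaker-personalized acoustic model here")

def synthesize_speech(text: str) -> np.ndarray:
    """Placeholder for the HMM-based text-to-speech synthesizer."""
    raise NotImplementedError("call the HMM TTS engine here")

def communication_loop() -> None:
    while True:
        # 1. Capture one utterance from the collar microphone.
        audio = sd.rec(int(UTT_SECONDS * FS), samplerate=FS, channels=1)
        sd.wait()
        # 2. Decode the dysarthric utterance to text (under 3 s in the paper).
        text = recognize_dysarthric_speech(audio.squeeze())
        # 3. Resynthesize the text as intelligible speech (about 1.4 s in the paper).
        speech = synthesize_speech(text)
        # 4. Play the synthesized speech through the loudspeaker.
        sd.play(speech, FS)
        sd.wait()

if __name__ == "__main__":
    communication_loop()

The loop structure reflects the roughly 4.4 s end-to-end delivery time reported in the abstract: one capture, one decode, one synthesis, and one playback per communicative turn.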
Journal overview:
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.