{"title":"Deep convolutional neural network for detection of pathological speech","authors":"L. Vavrek, Matej Hires, D. Kumar, P. Drotár","doi":"10.1109/SAMI50585.2021.9378656","DOIUrl":null,"url":null,"abstract":"This paper describes the investigation of the use of the deep neural networks (DNN) for the detection of pathological speech. The state-of-the-art VGG16 convolutional neural network based transfer learning was the basis of this work and different approaches were trialed. We tested the different architectures using the Saarbrucken Voice database (SVD). To overcome limitations due to language and education, the SVD was limited to /a/, /i/ and /u/ vowel subsets with sustained natural pitch. The scope of this study was only diseases that classify as organic dysphonia. We utilized multiple simple networks trained separately on different vowel subsets and combined them as a single model ensemble. It was found that model ensemble achieved an accuracy on pathological speech detection of 82 %. Thus, our results show that pre-trained convolutional neural networks can be used for transfer learning when input is the spectrogram representation of the voice signal. This is significant because it overcomes the need for very large data size that is required to train DNN, and is suitable for computerized analysis of the speech without limitation of the language skills of the patients.","PeriodicalId":402414,"journal":{"name":"2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SAMI50585.2021.9378656","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper describes an investigation of the use of deep neural networks (DNNs) for the detection of pathological speech. Transfer learning based on the state-of-the-art VGG16 convolutional neural network formed the basis of this work, and different approaches were trialed. We tested the different architectures using the Saarbrücken Voice Database (SVD). To overcome limitations due to language and education, the SVD was restricted to the /a/, /i/ and /u/ vowel subsets with sustained natural pitch. The scope of this study was limited to diseases classified as organic dysphonia. We utilized multiple simple networks trained separately on the different vowel subsets and combined them into a single model ensemble. The model ensemble achieved an accuracy of 82% on pathological speech detection. Thus, our results show that pre-trained convolutional neural networks can be used for transfer learning when the input is a spectrogram representation of the voice signal. This is significant because it overcomes the need for the very large datasets required to train DNNs and is suitable for computerized analysis of speech without being limited by the language skills of the patients.
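
To illustrate the approach outlined in the abstract, below is a minimal sketch of VGG16-based transfer learning on spectrogram inputs with a per-vowel model ensemble, written with TensorFlow/Keras. It is not the authors' exact pipeline: the input shape, classifier head, training setup, and probability-averaging rule are illustrative assumptions rather than values reported in the paper.

```python
# Sketch (assumptions, not the paper's exact architecture): VGG16 transfer
# learning on spectrogram "images", one binary classifier per vowel subset
# (/a/, /i/, /u/), combined by averaging predicted probabilities.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_vowel_classifier(input_shape=(224, 224, 3)):
    """Binary healthy/pathological classifier on spectrogram inputs."""
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # keep pre-trained convolutional features frozen
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),   # illustrative head size
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # P(pathological)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# One model per sustained-vowel subset, trained separately on that subset.
vowels = ["a", "i", "u"]
classifiers = {v: build_vowel_classifier() for v in vowels}

def ensemble_predict(spectrograms_by_vowel):
    """Average the per-vowel probabilities and threshold at 0.5."""
    probs = [classifiers[v].predict(x, verbose=0)
             for v, x in spectrograms_by_vowel.items()]
    mean_prob = np.mean(np.stack(probs, axis=0), axis=0)
    return (mean_prob > 0.5).astype(int)
```

In this sketch the pre-trained convolutional base is frozen and only the small classification head is trained, which is what makes the approach feasible on a modest-sized voice database such as the SVD.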