{"title":"促进语音识别的神经进化技术","authors":"Mohshin Uddin Anwar, Md Liakot Ali","doi":"10.1109/ECACE.2019.8679206","DOIUrl":null,"url":null,"abstract":"For long many years, various speech signal processing techniques have been experimented and optimized using expectation maximization, gradient descent optimization or their variations across end-to-end speech feature extraction and recognition scheme, but the result was below the satisfactory limit despite multitude of time, cost and effort have been invested. Very recently, huge improvement of computing power of devices made it possible to use complex multi-layered neural network technologies (i.e. deep learning or deep neural network) such as convolutional net, long short term memory, bidirectional recurrent neural network as well as complex statistical or evolutionary strategies and its variations to optimize further the results to reduce the error rates. This paper emphasizes on how to devise an efficient technique that would reduce the time, cost and complexity over the deep learning methods with the guidance of genetic algorithm (GA) through intelligently choosing hyper-parameters of the networks. 
It has been identified that series of iterations to estimate, tune and re-estimate the hyper-parameters can lead to substantial improvement even with the least computing power, compared to one-go implementation of genetic algorithms done earlier.","PeriodicalId":226060,"journal":{"name":"2019 International Conference on Electrical, Computer and Communication Engineering (ECCE)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Boosting Neuro Evolutionary Techniques for Speech Recognition\",\"authors\":\"Mohshin Uddin Anwar, Md Liakot Ali\",\"doi\":\"10.1109/ECACE.2019.8679206\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For long many years, various speech signal processing techniques have been experimented and optimized using expectation maximization, gradient descent optimization or their variations across end-to-end speech feature extraction and recognition scheme, but the result was below the satisfactory limit despite multitude of time, cost and effort have been invested. Very recently, huge improvement of computing power of devices made it possible to use complex multi-layered neural network technologies (i.e. deep learning or deep neural network) such as convolutional net, long short term memory, bidirectional recurrent neural network as well as complex statistical or evolutionary strategies and its variations to optimize further the results to reduce the error rates. This paper emphasizes on how to devise an efficient technique that would reduce the time, cost and complexity over the deep learning methods with the guidance of genetic algorithm (GA) through intelligently choosing hyper-parameters of the networks. 
It has been identified that series of iterations to estimate, tune and re-estimate the hyper-parameters can lead to substantial improvement even with the least computing power, compared to one-go implementation of genetic algorithms done earlier.\",\"PeriodicalId\":226060,\"journal\":{\"name\":\"2019 International Conference on Electrical, Computer and Communication Engineering (ECCE)\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on Electrical, Computer and Communication Engineering (ECCE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ECACE.2019.8679206\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Electrical, Computer and Communication Engineering (ECCE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ECACE.2019.8679206","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Boosting Neuro Evolutionary Techniques for Speech Recognition
For many years, speech signal processing techniques have been developed and optimized using expectation maximization, gradient descent, or their variants across end-to-end speech feature extraction and recognition schemes, but the results remained unsatisfactory despite the considerable time, cost, and effort invested. Recently, large increases in device computing power have made it possible to use complex multi-layered neural network technologies (i.e., deep learning) such as convolutional networks, long short-term memory, and bidirectional recurrent neural networks, as well as sophisticated statistical or evolutionary strategies and their variants, to further optimize results and reduce error rates. This paper focuses on devising an efficient technique that reduces the time, cost, and complexity of deep learning methods by using a genetic algorithm (GA) to intelligently choose the networks' hyper-parameters. It is shown that a series of iterations to estimate, tune, and re-estimate the hyper-parameters can lead to substantial improvement even with minimal computing power, compared to earlier one-shot implementations of genetic algorithms.
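The iterative estimate-tune-re-estimate idea described in the abstract can be illustrated with a minimal genetic algorithm that evolves a population of hyper-parameter settings over several generations instead of evaluating them in one pass. This is a hedged sketch, not the authors' implementation: the search space, the fitness surrogate (standing in for a word-error-rate obtained by actually training a network), and all names below are hypothetical.

```python
import random

# Hypothetical hyper-parameter search space for an acoustic model.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2, 1e-1],
    "hidden_units": [64, 128, 256, 512],
    "num_layers": [1, 2, 3, 4],
}

def random_individual(rng):
    """Sample one hyper-parameter configuration at random."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    # Surrogate for (negated) word error rate; in a real run this would
    # train and evaluate a network. Peaks at learning_rate=1e-2,
    # hidden_units=256, num_layers=2, where it equals 0.
    return -(abs(ind["learning_rate"] - 1e-2)
             + abs(ind["hidden_units"] - 256) / 256
             + abs(ind["num_layers"] - 2))

def crossover(a, b, rng):
    """Uniform crossover: each gene comes from either parent."""
    return {k: rng.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rng, rate=0.2):
    """With probability `rate`, resample a gene from the search space."""
    return {k: (rng.choice(SEARCH_SPACE[k]) if rng.random() < rate else v)
            for k, v in ind.items()}

def evolve(generations=20, pop_size=12, seed=0):
    """Iteratively estimate, tune, and re-estimate the population."""
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection (elitist)
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the top half of each generation is carried over unchanged, the best configuration found so far is never lost, which is what lets repeated small generations improve steadily even on modest hardware.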