A Chinese Speech Recognition System Based on Binary Neural Network and Pre-processing

Lunyi Guo, Yiming Deng, Liang Tang, Ronggeng Fan, Bo Yan, Zhuoling Xiao
DOI: 10.1109/WCCCT56755.2023.10052123 (https://doi.org/10.1109/WCCCT56755.2023.10052123)
Published in: 2023 6th World Conference on Computing and Communication Technologies (WCCCT)
Publication date: 2023-01-06
Citations: 0

Abstract

Neural networks have made excellent progress in speech recognition. However, more research is needed for scenarios where computational resources are limited or where real-time operation and low power consumption are required. In this paper, we propose a lightweight speech recognition model based on pre-processing and a binary neural network, which significantly reduces the number of weight parameters while keeping the error rate acceptable. The pre-processing stage converts the 1D speech signal into a 2D Mel spectrum and uses Voice Activity Detection (VAD) to make the Mel-spectrum input variable-length. The speech dataset is also expanded using data augmentation methods. For the convolutional layers, the weights are binarized to reduce the number of model parameters and to improve computational and storage efficiency. After quantization, the model has only 6.94% of the parameters of the full-precision model, while the error rate on the ST CMD speech dataset increases by only 2.07%. Finally, a circuit structure for convolutional computation with binary weights is designed, in which a single multiplication can be implemented using only seven look-up tables (LUTs).
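To illustrate the core idea, here is a minimal sketch of 1-bit weight binarization: each real-valued weight is replaced by its sign, with a per-filter scaling factor equal to the mean absolute weight (a common choice in the binary-network literature; the paper's exact binarization scheme may differ). Because the weights are ±1, each "multiplication" in the dot product reduces to a conditional negation, which is why a single multiply fits in a handful of FPGA LUTs. The function names below are illustrative, not from the paper.

```python
def binarize(weights):
    """Binarize real-valued weights to {-1, +1} plus one scaling factor.

    alpha = mean(|w|) preserves the overall magnitude of the filter,
    so only 1 bit per weight plus one float per filter must be stored.
    """
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]
    return signs, alpha

def binary_dot(signs, alpha, activations):
    """Dot product with binary weights.

    No real multiplications are needed: each term is either the
    activation or its negation, selected by the weight's sign bit.
    """
    acc = 0.0
    for s, a in zip(signs, activations):
        acc += a if s > 0 else -a  # conditional negate instead of multiply
    return alpha * acc

# Example: binarize a tiny 4-weight "filter" and apply it.
w = [0.5, -0.3, 0.8, -0.1]
x = [1.0, 2.0, 3.0, 4.0]
signs, alpha = binarize(w)
print(signs)                              # [1, -1, 1, -1]
print(binary_dot(signs, alpha, x))        # 0.425 * (1 - 2 + 3 - 4) = -0.85
```

The storage saving follows directly: each 32-bit float weight shrinks to 1 bit, which is the mechanism behind the paper's reported 6.94% parameter-size figure (the remainder coming from layers and scale factors kept at full precision).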