{"title":"文本规范化的分类方法","authors":"Guozhang Zhao, Chenkai Ma, Wenxian Feng, Rui Zhang","doi":"10.1109/AEMCSE50948.2020.00125","DOIUrl":null,"url":null,"abstract":"We propose a new model for text normalization: GRFE (Gated Recurrent Feature Extractor). With neural network GRU, it classifies the token into predefined types such as date, time, digit. and then normalized the tokens according to domain knowledge. GRFE can avoid many \"silly errors\" such as it won't normalize '17' as 'eighteen' or blending British English and American English in Date, and enhance the robustness and extendibility of the network. Experiments show that compared with the previous models, GRFE exploits less parameters and fewer layers. The number of parameters of GRFE is 30.69% of LSTM and 34.96% of CFE (Causal Feature Extractor). It takes less training time to achieve a better accuracy (92.77%).","PeriodicalId":246841,"journal":{"name":"2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A Classification Approach to Text Normalization\",\"authors\":\"Guozhang Zhao, Chenkai Ma, Wenxian Feng, Rui Zhang\",\"doi\":\"10.1109/AEMCSE50948.2020.00125\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a new model for text normalization: GRFE (Gated Recurrent Feature Extractor). With neural network GRU, it classifies the token into predefined types such as date, time, digit. and then normalized the tokens according to domain knowledge. GRFE can avoid many \\\"silly errors\\\" such as it won't normalize '17' as 'eighteen' or blending British English and American English in Date, and enhance the robustness and extendibility of the network. Experiments show that compared with the previous models, GRFE exploits less parameters and fewer layers. The number of parameters of GRFE is 30.69% of LSTM and 34.96% of CFE (Causal Feature Extractor). It takes less training time to achieve a better accuracy (92.77%).\",\"PeriodicalId\":246841,\"journal\":{\"name\":\"2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AEMCSE50948.2020.00125\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AEMCSE50948.2020.00125","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
We propose a new model for text normalization: GRFE (Gated Recurrent Feature Extractor). Using a GRU neural network, it classifies each token into a predefined type such as date, time, or digit, and then normalizes the token according to domain knowledge. GRFE avoids many "silly errors": for example, it will not normalize '17' as 'eighteen', nor will it mix British and American English date formats. This improves the robustness and extensibility of the network. Experiments show that, compared with previous models, GRFE uses fewer parameters and fewer layers: its parameter count is 30.69% of that of an LSTM and 34.96% of that of the CFE (Causal Feature Extractor). It also requires less training time to reach a higher accuracy (92.77%).
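To make the two-stage idea in the abstract concrete, the following is a minimal sketch (not the authors' code) of a classify-then-normalize pipeline: a GRU-based classifier assigns each token a semantic class, and a rule-based verbalizer then normalizes the token using domain knowledge for that class. The class set, character-level encoding, layer sizes, and normalization rules are all illustrative assumptions, not details taken from the paper.

```python
# Sketch of a classify-then-normalize text-normalization pipeline.
# All names, sizes, and rules below are illustrative assumptions.
import torch
import torch.nn as nn

CLASSES = ["PLAIN", "DATE", "TIME", "DIGIT"]

class TokenClassifier(nn.Module):
    """Character-level GRU that predicts the semantic class of a token."""
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, len(CLASSES))

    def forward(self, char_ids):          # char_ids: (batch, seq_len)
        emb = self.embed(char_ids)
        _, h_n = self.gru(emb)            # h_n: (1, batch, hidden_dim)
        return self.head(h_n.squeeze(0))  # class logits per token

def normalize(token: str, token_class: str) -> str:
    """Rule-based verbalization using domain knowledge (toy rules only)."""
    digits = ["zero", "one", "two", "three", "four",
              "five", "six", "seven", "eight", "nine"]
    if token_class == "DIGIT" and token.isdigit():
        # Spell the digit sequence out character by character; the rule,
        # not a generative decoder, fixes the output, so '17' can never
        # come out as 'eighteen'.
        return " ".join(digits[int(c)] for c in token)
    if token_class == "DATE":
        # A real system would apply one locale's date convention
        # consistently instead of mixing British and American formats.
        return token  # placeholder: delegate to a date verbalizer
    return token

# Usage: encode a token's characters, classify it, then verbalize it.
model = TokenClassifier()
token = "17"
char_ids = torch.tensor([[ord(c) for c in token]])
predicted = CLASSES[model(char_ids).argmax(dim=-1).item()]  # arbitrary until trained
print(normalize(token, "DIGIT"))  # "one seven" under these toy rules; use `predicted` once trained
```

Because the neural component only picks a class and the surface form is produced by deterministic rules, classification mistakes degrade gracefully, and new token types can be added by extending the class set and rule table rather than retraining a full sequence-to-sequence model.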