Authors: Vinh-Loi Ly, T. Doan, N. Ly
DOI: 10.1109/NICS51282.2020.9335877
Published in: 2020 7th NAFOSTED Conference on Information and Computer Science (NICS), 2020-11-26
Transformer-based model for Vietnamese Handwritten Word Image Recognition
Handwritten text recognition plays an important role in transforming handwritten documents into digital data, a prerequisite for intelligent social management and production processes in the fourth industrial revolution. To address this challenge, several recent studies have assumed that each character appearing in the image is independent, so that predictions can be made solely from visual features. However, this approach ignores linguistic characteristics, since the occurrence of a character depends on the characters preceding it. Consequently, approaches that use an attention mechanism between the text and the image to predict characters sequentially have outperformed the independence-based method at the word level, because they can exploit the context of the word being predicted. In this paper, inspired by the Transformer architecture used in Neural Machine Translation tasks, we propose a model that exploits the dependencies between the character being predicted and the previously predicted characters, based on the attention mechanism, to translate a word image into word text. Our method achieves state-of-the-art results, with 2.48% CER and 7.70% WER on the VNOnDB-word data set, compared to similar works on the same data set.
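The decoding idea described above can be sketched in miniature. This is not the authors' implementation: it is a hedged toy illustration of autoregressive character decoding, where each step attends over the image's visual features and conditions on the characters already emitted. The vector sizes, the `<sos>`/`<eos>` tokens, and the `score_char` callback are hypothetical stand-ins for the paper's learned decoder components.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

def greedy_decode(image_features, char_embeddings, score_char,
                  max_len=10, eos="<eos>"):
    """Greedily emit characters; each step conditions on the previous ones.

    `score_char(context, previous_chars)` stands in for the learned output
    layer that maps an attended context to the next character.
    """
    out = []
    query = char_embeddings["<sos>"]
    for _ in range(max_len):
        context = attend(query, image_features, image_features)
        ch = score_char(context, out)
        if ch == eos:
            break
        out.append(ch)
        # Mix the new character's embedding into the query, a crude stand-in
        # for the decoder's self-attention over its own past outputs.
        emb = char_embeddings[ch]
        query = [(q + e) / 2 for q, e in zip(query, emb)]
    return "".join(out)
```

In the real model, the visual features would come from a convolutional encoder over the word image, and `score_char` would be a softmax over the Vietnamese character vocabulary.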
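The reported metrics, CER (character error rate) and WER (word error rate), are conventionally defined as the Levenshtein edit distance between the predicted and reference strings, divided by the reference length, at the character and word level respectively. A minimal sketch of how such a score is computed (assuming this standard definition, which the abstract does not spell out):

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(predicted, reference):
    """Character error rate: edit distance normalised by reference length."""
    return edit_distance(predicted, reference) / len(reference)

def wer(predicted, reference):
    """Word error rate: the same computation over word sequences."""
    ref_words = reference.split()
    return edit_distance(predicted.split(), ref_words) / len(ref_words)
```

For Vietnamese, diacritic confusions count as character substitutions, e.g. predicting "hoc" for "học" gives one substitution out of three reference characters, a CER of 1/3 on that word.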