Title: Improvement of video text recognition by character selection
Authors: T. Mita, O. Hori
Published in: Proceedings of the Sixth International Conference on Document Analysis and Recognition (ICDAR), 2001-09-10
DOI: 10.1109/ICDAR.2001.953954 (https://doi.org/10.1109/ICDAR.2001.953954)
Citations: 21
Abstract
This paper proposes a new method for improving the recognition accuracy of video text by exploiting the temporal redundancy of video. The proposed method divides the video into short segments and obtains a recognition result from each segment. Because the background image changes over time due to camera work or object motion, the segments present the same text against various backgrounds. The recognition results obtained from these diverse backgrounds are then integrated into a single text string by selecting the best recognition result for each individual character. The proposed method was tested on a large set of news video sequences. Experimental results show that it increased the number of correctly recognized characters by 3.1% and the number of strings containing no recognition errors by 8.1%.
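The per-character selection idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: it assumes each video segment yields one recognition result for the same caption, with a hypothetical per-character confidence score from the recognizer, and that all segments produce strings of equal length.

```python
def select_best_characters(segment_results):
    """Merge per-segment OCR results into one string.

    segment_results: list of (text, confidences) pairs, one per video
    segment, where confidences[i] is the recognizer's score for text[i].
    Assumes every segment produced a string of the same length.
    (Illustrative sketch only; the paper's actual selection criterion
    may differ.)
    """
    length = len(segment_results[0][0])
    merged = []
    for i in range(length):
        # For position i, keep the character from whichever segment's
        # recognition was most confident there.
        best_char, _ = max(
            ((text[i], conf[i]) for text, conf in segment_results),
            key=lambda pair: pair[1],
        )
        merged.append(best_char)
    return "".join(merged)

# Example: three segments with differing backgrounds disagree on one
# character; the high-confidence reading wins at that position.
results = [
    ("NEWS", [0.9, 0.8, 0.4, 0.9]),
    ("NEVS", [0.8, 0.9, 0.3, 0.8]),
    ("NEWS", [0.7, 0.7, 0.95, 0.9]),
]
print(select_best_characters(results))  # → NEWS
```

This captures why diverse backgrounds help: a background that corrupts one character in one segment is unlikely to corrupt the same character in every segment, so position-wise selection recovers the full string.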