Persian/Arabic Scene Text Recognition With Convolutional Recurrent Neural Network
Alireza Akoushideh, Atefeh Ranjkesh Rashtehroudi, Asadollah Shahbahrami
IET Smart Cities, vol. 7, no. 1 (JCR Q3, Computer Science, Information Systems; IF 2.1)
Published: 2025-03-26 (Journal Article)
DOI: 10.1049/smc2.70001
Article: https://onlinelibrary.wiley.com/doi/10.1049/smc2.70001
PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/smc2.70001
Citations: 0
Abstract
With advancements in technology, natural scene text recognition (STR) has become a critical yet challenging field owing to variations in fonts, colours, textures, illumination, and complex backgrounds. This study focuses on optical character recognition (OCR), with a case study on Iranian signposts, traffic signs, and licence plates, to convert text in images into editable form. The proposed method combines a preprocessing stage with a deep-learning pipeline: resizing, noise reduction, adaptive thresholding, and colour inversion first enhance image quality and facilitate accurate recognition, after which the CRAFT model detects text regions, compensating for the limited representation of the Persian/Arabic alphabet in existing datasets, and a CRNN recognises the detected text. Together, the preprocessing techniques and the CRAFT component yield notable performance improvements, achieving 98.6% accuracy, with the training error rate reduced from 13.90% to 1.40% after 20 epochs. The system's effectiveness is further validated against Persian/Arabic-specific OCR criteria at both the character and word levels. The results indicate that integrating preprocessing with deep learning improves reliability, paving the way for applications in intelligent transportation systems and other domains requiring robust STR solutions. The study also demonstrates the potential for further enhancement of OCR systems, particularly for complex, script-based languages.
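The abstract names four preprocessing steps (resizing, noise reduction, adaptive thresholding, colour inversion) without giving code. A minimal NumPy-only sketch of those steps might look like the following; the function names, kernel sizes, and target size are illustrative assumptions, not the authors' implementation (which would more likely use a library such as OpenCV):

```python
import numpy as np

def box_blur(img, k=3):
    # Simple noise reduction: k x k mean filter over a grayscale image
    # (a stand-in for a stronger denoiser; k is an assumed parameter).
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

def adaptive_threshold(img, block=15, c=5):
    # A pixel becomes white if it is brighter than its local mean minus
    # a small constant c (mean-based adaptive thresholding).
    local_mean = box_blur(img, block)
    binary = np.where(img.astype(np.int32) > local_mean.astype(np.int32) - c, 255, 0)
    return binary.astype(np.uint8)

def preprocess(img, size=(128, 32)):
    # Resize (nearest-neighbour) -> denoise -> threshold -> invert colours,
    # so dark text on a bright sign becomes bright strokes on black.
    h, w = img.shape
    ys = np.arange(size[1]) * h // size[1]
    xs = np.arange(size[0]) * w // size[0]
    resized = img[np.ix_(ys, xs)]
    binary = adaptive_threshold(box_blur(resized))
    return 255 - binary
```

A synthetic grayscale crop run through `preprocess` comes out as a fixed-size binary image, which is the kind of normalised input a recognition network expects.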
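The abstract does not say how the CRNN's per-timestep outputs become a final string; CRNN-based recognisers are typically trained with a CTC loss and decoded greedily (argmax per timestep, merge repeats, drop blanks). A minimal sketch of that standard decoding step, under an assumed toy alphabet (the paper's system would use the full Persian/Arabic character set):

```python
import numpy as np

# Hypothetical mini-alphabet; index 0 is the CTC blank symbol.
ALPHABET = ["<blank>", "ا", "ب", "پ", "ت"]

def ctc_greedy_decode(logits, alphabet=ALPHABET, blank=0):
    """Collapse a (timesteps x classes) score matrix into a string:
    take the argmax at each timestep, merge consecutive repeats,
    and drop blank symbols."""
    best = np.argmax(logits, axis=1)
    chars = []
    prev = None
    for idx in best:
        if idx != prev and idx != blank:
            chars.append(alphabet[idx])
        prev = idx
    return "".join(chars)
```

For example, a score matrix whose per-timestep argmaxes are `[1, 1, 0, 2, 2]` decodes to the two-character string "اب": the repeated 1s merge, the blank separates them from the 2s, and the repeated 2s merge.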