Persian/Arabic Scene Text Recognition With Convolutional Recurrent Neural Network

IF 2.1 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS
Alireza Akoushideh, Atefeh Ranjkesh Rashtehroudi, Asadollah Shahbahrami
{"title":"Persian/Arabic Scene Text Recognition With Convolutional Recurrent Neural Network","authors":"Alireza Akoushideh,&nbsp;Atefeh Ranjkesh Rashtehroudi,&nbsp;Asadollah Shahbahrami","doi":"10.1049/smc2.70001","DOIUrl":null,"url":null,"abstract":"<p>With advancements in technology, natural scene text recognition (STR) has become a critical yet challenging field due to variations in fonts, colours, textures, illumination, and complex backgrounds. This research study focuses on optical character recognition (OCR) with a case study on Iranian signposts, traffic signs, and licence plates to convert text from images into editable formats. The proposed method combines a preprocessing stage, leveraging resizing, noise reduction, adaptive thresholding, and colour inversion, which significantly enhances image quality and facilitates accurate text recognition, with a deep-learning pipeline. The process begins with the CRAFT model for text detection, addressing limitations in Persian/Arabic alphabet representation in datasets, followed by CRNN for text recognition. These preprocessing techniques and the CRAFT component result in notable performance improvements, achieving 98.6% accuracy with training error rates reduced from 13.90% to 1.40% after 20 epochs. Additionally, the system's effectiveness is validated through Persian/Arabic-specific OCR criteria at both the character and word levels. Results indicate that preprocessing and deep learning integration improve reliability, paving the way for future applications in intelligent transportation systems and other domains requiring robust STR solutions. This study demonstrates the potential for further enhancements in OCR systems, particularly for complex, script-based languages.</p>","PeriodicalId":34740,"journal":{"name":"IET Smart Cities","volume":"7 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/smc2.70001","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Smart Cities","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/smc2.70001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

With advancements in technology, natural scene text recognition (STR) has become a critical yet challenging field due to variations in fonts, colours, textures, illumination, and complex backgrounds. This study focuses on optical character recognition (OCR), with a case study on Iranian signposts, traffic signs, and licence plates, to convert text from images into editable formats. The proposed method combines a deep-learning pipeline with a preprocessing stage (resizing, noise reduction, adaptive thresholding, and colour inversion) that significantly enhances image quality and facilitates accurate text recognition. The pipeline begins with the CRAFT model for text detection, addressing the limited representation of the Persian/Arabic alphabet in existing datasets, followed by a CRNN for text recognition. Together, the preprocessing techniques and the CRAFT component yield notable performance improvements: 98.6% accuracy, with the training error rate reduced from 13.90% to 1.40% after 20 epochs. Additionally, the system's effectiveness is validated against Persian/Arabic-specific OCR criteria at both the character and word levels. The results indicate that integrating preprocessing with deep learning improves reliability, paving the way for applications in intelligent transportation systems and other domains requiring robust STR solutions. This study demonstrates the potential for further enhancements in OCR systems, particularly for complex, script-based languages.
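The paper's own code is not reproduced on this page, but the four preprocessing steps named in the abstract map directly onto standard OpenCV operations. Below is a minimal sketch under that assumption; every parameter value (target height, denoising strength, threshold block size) is an illustrative choice, not the authors' setting.

```python
import cv2
import numpy as np

def preprocess(image_path: str, target_height: int = 64) -> np.ndarray:
    """Resize, denoise, adaptively threshold, and colour-invert a scene-text crop."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Resize to a fixed height, preserving aspect ratio.
    scale = target_height / img.shape[0]
    img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    # Noise reduction via non-local means (strength h is illustrative).
    img = cv2.fastNlMeansDenoising(img, h=10)
    # Adaptive thresholding copes with uneven illumination on signs and plates.
    img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, 31, 10)
    # Colour inversion: white text on black, the polarity many recognisers expect.
    return cv2.bitwise_not(img)
```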

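The abstract also names the recognition model, a CRNN, without architectural details. The PyTorch sketch below shows the generic CRNN layout such systems build on: a convolutional feature extractor, a bidirectional LSTM over image columns, and per-timestep class logits trained with CTC loss. All layer sizes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Generic CRNN: CNN feature extractor -> bidirectional LSTM -> per-column logits."""

    def __init__(self, num_classes: int, img_height: int = 32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),    # H/2, W/2
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),  # H/4, W/4
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),                                     # H/8, width kept
        )
        feat_h = img_height // 8
        self.rnn = nn.LSTM(256 * feat_h, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 256, num_classes)  # num_classes includes the CTC blank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) greyscale text crops
        f = self.cnn(x)                                  # (batch, 256, H/8, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # one feature vector per image column
        seq, _ = self.rnn(f)                             # (batch, W/4, 512)
        return self.fc(seq)                              # logits per timestep for CTC
```

For training, the logits would typically be permuted to (time, batch, classes), passed through log_softmax, and fed to torch.nn.CTCLoss with the Persian/Arabic label sequences; at inference, greedy or beam-search CTC decoding collapses repeated symbols and blanks into the output string.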

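Finally, the abstract reports validation with Persian/Arabic-specific OCR criteria at the character and word levels. Those exact criteria are not spelled out on this page; the standard instances are character error rate (CER) and word error rate (WER) built on edit distance, which the hypothetical sketch below implements. For Persian/Arabic text, Unicode normalisation of presentation forms and diacritics before comparison would also matter and is omitted here.

```python
from typing import Sequence

def levenshtein(a: Sequence, b: Sequence) -> int:
    """Edit distance (insertions, deletions, substitutions) between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete x
                            curr[j - 1] + 1,             # insert y
                            prev[j - 1] + (x != y)))     # substitute or match
        prev = curr
    return prev[-1]

def char_error_rate(reference: str, hypothesis: str) -> float:
    """Character-level criterion: edit distance normalised by reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level criterion: the same distance computed over word tokens."""
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return levenshtein(ref_words, hyp_words) / max(len(ref_words), 1)
```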
Source Journal
IET Smart Cities (Social Sciences-Urban Studies)
CiteScore: 7.70
Self-citation rate: 3.20%
Annual articles: 25
Review time: 21 weeks