Persian/Arabic Scene Text Recognition With Convolutional Recurrent Neural Network

IF 2.1 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS
Alireza Akoushideh, Atefeh Ranjkesh Rashtehroudi, Asadollah Shahbahrami
{"title":"Persian/Arabic Scene Text Recognition With Convolutional Recurrent Neural Network","authors":"Alireza Akoushideh,&nbsp;Atefeh Ranjkesh Rashtehroudi,&nbsp;Asadollah Shahbahrami","doi":"10.1049/smc2.70001","DOIUrl":null,"url":null,"abstract":"<p>With advancements in technology, natural scene text recognition (STR) has become a critical yet challenging field due to variations in fonts, colours, textures, illumination, and complex backgrounds. This research study focuses on optical character recognition (OCR) with a case study on Iranian signposts, traffic signs, and licence plates to convert text from images into editable formats. The proposed method combines a preprocessing stage, leveraging resizing, noise reduction, adaptive thresholding, and colour inversion, which significantly enhances image quality and facilitates accurate text recognition, with a deep-learning pipeline. The process begins with the CRAFT model for text detection, addressing limitations in Persian/Arabic alphabet representation in datasets, followed by CRNN for text recognition. These preprocessing techniques and the CRAFT component result in notable performance improvements, achieving 98.6% accuracy with training error rates reduced from 13.90% to 1.40% after 20 epochs. Additionally, the system's effectiveness is validated through Persian/Arabic-specific OCR criteria at both the character and word levels. Results indicate that preprocessing and deep learning integration improve reliability, paving the way for future applications in intelligent transportation systems and other domains requiring robust STR solutions. This study demonstrates the potential for further enhancements in OCR systems, particularly for complex, script-based languages.</p>","PeriodicalId":34740,"journal":{"name":"IET Smart Cities","volume":"7 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/smc2.70001","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Smart Cities","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/smc2.70001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Natural scene text recognition (STR) has become a critical yet challenging field due to variations in fonts, colours, textures, illumination, and complex backgrounds. This study focuses on optical character recognition (OCR), with a case study on Iranian signposts, traffic signs, and licence plates, to convert text in images into editable formats. The proposed method combines a deep-learning pipeline with a preprocessing stage (resizing, noise reduction, adaptive thresholding, and colour inversion) that significantly enhances image quality and facilitates accurate text recognition. The pipeline begins with the CRAFT model for text detection, addressing the limited representation of the Persian/Arabic alphabet in existing datasets, followed by a CRNN for text recognition. These preprocessing techniques and the CRAFT component yield notable performance improvements, achieving 98.6% accuracy, with the training error rate reduced from 13.90% to 1.40% after 20 epochs. Additionally, the system's effectiveness is validated against Persian/Arabic-specific OCR criteria at both the character and word levels. The results indicate that integrating preprocessing with deep learning improves reliability, paving the way for applications in intelligent transportation systems and other domains requiring robust STR solutions. This work demonstrates the potential for further enhancements in OCR systems, particularly for complex, script-based languages.
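The preprocessing stage described above (resizing, noise reduction, adaptive thresholding, and colour inversion) can be illustrated with a short OpenCV sketch. The function name, target size, and parameter values below are illustrative assumptions, not the authors' exact settings.

```python
import cv2
import numpy as np

def preprocess_plate(image_bgr: np.ndarray, target_size=(256, 64)) -> np.ndarray:
    """Resize, denoise, binarise, and invert a scene-text crop (sketch only)."""
    # Resize to a fixed input size (hypothetical value, not from the paper).
    resized = cv2.resize(image_bgr, target_size, interpolation=cv2.INTER_LINEAR)
    # Work in grayscale for denoising and thresholding.
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    # Noise reduction; non-local means is one common choice.
    denoised = cv2.fastNlMeansDenoising(gray, None, 10, 7, 21)
    # Adaptive thresholding copes with uneven illumination in scene images.
    binary = cv2.adaptiveThreshold(denoised, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    # Colour inversion so characters appear as white strokes on a dark background.
    return cv2.bitwise_not(binary)

# Example usage (hypothetical file name):
# preprocessed = preprocess_plate(cv2.imread("plate.jpg"))
```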

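For the recognition stage, a minimal CRNN sketch (convolutional feature extractor, bidirectional LSTM, and a per-time-step classifier trained with CTC loss) is shown below in PyTorch. Layer sizes, the input height, and the number of output classes are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, num_classes: int, img_height: int = 32):
        super().__init__()
        # Convolutional backbone: collapses height, keeps width as time steps.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),  # halve height only
        )
        feat_height = img_height // 8
        # Bidirectional LSTM over the width (sequence) dimension.
        self.rnn = nn.LSTM(256 * feat_height, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        # Per-time-step classifier; index 0 is conventionally the CTC blank.
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):                     # x: (batch, 1, H, W)
        f = self.cnn(x)                       # (batch, C, H', W')
        b, c, h, w = f.size()
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # (batch, W', C*H')
        out, _ = self.rnn(f)
        return self.fc(out)                   # (batch, W', num_classes)

# Training pairs the per-step outputs with nn.CTCLoss, so word-level
# transcriptions need no character-level alignment.
```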

Source journal: IET Smart Cities (Social Sciences, Urban Studies)
CiteScore: 7.70
Self-citation rate: 3.20%
Articles per year: 25
Review time: 21 weeks