Detection of Touchscreen-Based Urdu Braille Characters Using Machine Learning Techniques

S. Shokat, R. Riaz, S. S. Rizvi, Inayat Khan, Anand Paul
{"title":"Detection of Touchscreen-Based Urdu Braille Characters Using Machine Learning Techniques","authors":"S. Shokat, R. Riaz, S. S. Rizvi, Inayat Khan, Anand Paul","doi":"10.1155/2021/7211419","DOIUrl":null,"url":null,"abstract":"Revolution in technology is changing the way visually impaired people read and write Braille easily. Learning Braille in its native language can be more convenient for its users. This study proposes an improved backend processing algorithm for an earlier developed touchscreen-based Braille text entry application. This application is used to collect Urdu Braille data, which is then converted to Urdu text. Braille to text conversion has been done on Hindi, Arabic, Bangla, Chinese, English, and other languages. For this study, Urdu Braille Grade 1 data were collected with multiclass (39 characters of Urdu represented by class 1, Alif (ﺍ), to class 39, Bri Yay (ے). Total (N = 144) cases for each class were collected. The dataset was collected from visually impaired students from The National Special Education School. Visually impaired users entered the Urdu Braille alphabets using touchscreen devices. The final dataset contained (N = 5638) cases. Reconstruction Independent Component Analysis (RICA)-based feature extraction model is created for Braille to Urdu text classification. The multiclass was categorized into three groups (13 each), i.e., category-1 (1–13), Alif-Zaal (ﺫ - ﺍ), category-2 (14–26), Ray-Fay (ﻒ - ﺮ), and category-3 (27–39), Kaaf-Bri Yay (ے - ﻕ), to give better vision and understanding. The performance was evaluated in terms of true positive rate, true negative rate, positive predictive value, negative predictive value, false positive rate, total accuracy, and area under the receiver operating curve. Among all the classifiers, support vector machine has achieved the highest performance with a 99.73% accuracy. For comparisons, robust machine learning techniques, such as support vector machine, decision tree, and K-nearest neighbors were used. Currently, this work has been done on only Grade 1 Urdu Braille. In the future, we plan to enhance this work using Grade 2 Urdu Braille with text and speech feedback on touchscreen-based android phones.","PeriodicalId":18790,"journal":{"name":"Mob. Inf. Syst.","volume":"8 1","pages":"7211419:1-7211419:16"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mob. Inf. Syst.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1155/2021/7211419","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Advances in technology are changing the way visually impaired people read and write Braille. Learning Braille in one's native language can be more convenient for its users. This study proposes an improved backend processing algorithm for a previously developed touchscreen-based Braille text entry application. The application is used to collect Urdu Braille data, which are then converted to Urdu text. Braille-to-text conversion has previously been performed for Hindi, Arabic, Bangla, Chinese, English, and other languages. For this study, Grade 1 Urdu Braille data were collected for a multiclass problem covering the 39 characters of Urdu, from class 1, Alif (ﺍ), to class 39, Bri Yay (ے), with N = 144 cases collected for each class. The dataset was collected from visually impaired students at The National Special Education School, who entered the Urdu Braille alphabet using touchscreen devices; the final dataset contained N = 5638 cases. A Reconstruction Independent Component Analysis (RICA)-based feature extraction model was created for Braille-to-Urdu-text classification. To aid presentation and understanding, the 39 classes were grouped into three categories of 13 each: category 1 (classes 1–13), Alif-Zaal (ﺫ - ﺍ); category 2 (classes 14–26), Ray-Fay (ﻒ - ﺮ); and category 3 (classes 27–39), Kaaf-Bri Yay (ے - ﻕ). Performance was evaluated in terms of true positive rate, true negative rate, positive predictive value, negative predictive value, false positive rate, total accuracy, and area under the receiver operating characteristic curve. For comparison, robust machine learning techniques, namely support vector machine, decision tree, and K-nearest neighbors, were applied; among these classifiers, the support vector machine achieved the highest performance, with 99.73% accuracy. Currently, this work covers only Grade 1 Urdu Braille. In the future, we plan to extend it to Grade 2 Urdu Braille with text and speech feedback on touchscreen-based Android phones.
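As a rough illustration of the pipeline the abstract describes, the sketch below chains RICA feature extraction with the three compared classifiers and the per-class metrics. This is not the authors' implementation: scikit-learn ships no RICA, so a minimal version is written against the Le et al. (2011) objective (reconstruction error plus a smoothed L1 sparsity penalty), and the hyperparameters (`n_components`, `lam`, `eps`) and the synthetic `X`/`y` are placeholders standing in for the touchscreen gesture features and the 39 Urdu character labels.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier


def rica_fit(X, n_components=16, lam=0.5, eps=1e-2, max_iter=300, seed=0):
    """Fit a RICA filter bank W by minimizing
    lam/m * ||X W^T W - X||_F^2 + 1/m * sum(sqrt((X W^T)^2 + eps)),
    i.e. a reconstruction term plus a smooth L1 sparsity penalty."""
    m, d = X.shape
    rng = np.random.default_rng(seed)
    w0 = 0.01 * rng.standard_normal(n_components * d)

    def fun_and_grad(w):
        W = w.reshape(n_components, d)
        Z = X @ W.T                        # latent activations, (m, k)
        R = Z @ W - X                      # reconstruction residual, (m, d)
        soft = np.sqrt(Z * Z + eps)        # smooth surrogate for |Z|
        cost = lam / m * np.sum(R * R) + np.sum(soft) / m
        grad = (2 * lam / m) * (W @ R.T @ X + Z.T @ R) + (Z / soft).T @ X / m
        return cost, grad.ravel()

    res = minimize(fun_and_grad, w0, jac=True, method="L-BFGS-B",
                   options={"maxiter": max_iter})
    return res.x.reshape(n_components, d)


# Synthetic stand-ins for the real data: X would hold flattened touch-gesture
# features and y the 39 Urdu Braille character classes (1 = Alif ... 39 = Bri Yay).
rng = np.random.default_rng(1)
X = rng.standard_normal((780, 24))
y = rng.integers(0, 39, size=780)

F = X @ rica_fit(X).T                      # RICA feature matrix
Xtr, Xte, ytr, yte = train_test_split(F, y, test_size=0.2, random_state=1)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Decision tree", DecisionTreeClassifier()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    yhat = clf.fit(Xtr, ytr).predict(Xte)
    cm = confusion_matrix(yte, yhat, labels=np.arange(39)).astype(float)
    tp = np.diag(cm)                       # one-vs-rest counts per class
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - tp - fp - fn
    with np.errstate(divide="ignore", invalid="ignore"):
        tpr, tnr = tp / (tp + fn), tn / (tn + fp)
        ppv, npv = tp / (tp + fp), tn / (tn + fn)
        fpr = fp / (fp + tn)
    acc = (yhat == yte).mean()
    print(f"{name}: acc={acc:.3f}  macro TPR={np.nanmean(tpr):.3f}  "
          f"TNR={np.nanmean(tnr):.3f}  PPV={np.nanmean(ppv):.3f}  "
          f"NPV={np.nanmean(npv):.3f}  FPR={np.nanmean(fpr):.3f}")
```

On real, structured data the same one-vs-rest counts feed the metrics the paper reports; computing AUC would additionally require per-class scores, e.g. `SVC(probability=True)` in a one-vs-rest setup.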