Mobile Deep Classification of UAE Banknotes for the Visually Challenged

A. Khalil, Maha Yaghi, Tasnim Basmaji, Mohamed Faizal, Z. Farhan, Ali Ali, Mohammed Ghazal
{"title":"Mobile Deep Classification of UAE Banknotes for the Visually Challenged","authors":"A. Khalil, Maha Yaghi, Tasnim Basmaji, Mohamed Faizal, Z. Farhan, Ali Ali, Mohammed Ghazal","doi":"10.1109/FiCloud57274.2022.00053","DOIUrl":null,"url":null,"abstract":"This paper proposes an artificial intelligence-powered mobile application for currency recognition to assist sufferers of visual disabilities. The proposed application uses RCNN, a pre-trained MobileNet V2 convolutional neural network, transfer learning, hough transform, and text-to-speech reader service to detect and classify captured currency and generate an auditory signal. To train our AI model, we collect 700 ultra-high definition images from the United Arab Emirates banknotes. We include the front and back faces of each banknote from various distances, angles, and lighting conditions to avoid overfitting. When triggered, our mobile application initiates a capture of an image using the mobile camera. The image is then pre-processed and input to our on-device currency detector and classifier. We finally use text-to-speech to change the textual class into an audio signal played on the user’s Bluetooth earpiece. Our results show that our system can be an effective tool in helping the visually challenged identify and differentiate banknotes using increasingly available smartphones. Our banknote classification model was validated using test-set and 5-fold cross-validation methods and achieved an average accuracy of 70% and 88%, respectively.","PeriodicalId":349690,"journal":{"name":"2022 9th International Conference on Future Internet of Things and Cloud (FiCloud)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 9th International Conference on Future Internet of Things and Cloud (FiCloud)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FiCloud57274.2022.00053","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper proposes an artificial intelligence-powered mobile application for currency recognition to assist people with visual disabilities. The proposed application uses an R-CNN, a pre-trained MobileNet V2 convolutional neural network, transfer learning, the Hough transform, and a text-to-speech reader service to detect and classify captured currency and generate an auditory signal. To train our AI model, we collect 700 ultra-high-definition images of United Arab Emirates banknotes. We include the front and back faces of each banknote, captured from various distances, angles, and lighting conditions, to avoid overfitting. When triggered, our mobile application captures an image using the mobile camera. The image is then pre-processed and passed to our on-device currency detector and classifier. Finally, we use text-to-speech to convert the predicted class label into an audio signal played on the user's Bluetooth earpiece. Our results show that our system can be an effective tool in helping the visually challenged identify and differentiate banknotes using increasingly available smartphones. Our banknote classification model was validated using test-set and 5-fold cross-validation methods, achieving average accuracies of 70% and 88%, respectively.
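To make the classification step concrete, the sketch below illustrates transfer learning with a frozen, ImageNet-pre-trained MobileNet V2 backbone of the kind the abstract describes, followed by conversion to TensorFlow Lite for on-device inference. This is a minimal illustration and not the authors' implementation: the dataset paths, image size, class count, and training settings are assumptions for demonstration only.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
NUM_CLASSES = 6         # assumed number of UAE banknote denominations

# Frozen MobileNet V2 feature extractor pre-trained on ImageNet (transfer learning).
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    # MobileNet V2 expects inputs scaled to [-1, 1].
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1, input_shape=IMG_SIZE + (3,)),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical dataset layout: one sub-folder per denomination (front and back faces).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "banknotes/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "banknotes/val", image_size=IMG_SIZE, batch_size=32)

model.fit(train_ds, validation_data=val_ds, epochs=10)

# Convert the trained classifier to TensorFlow Lite for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("banknote_classifier.tflite", "wb") as f:
    f.write(converter.convert())
```

In a deployment like the one described, the resulting .tflite model would run inside the mobile application, and the predicted denomination label would then be handed to the platform's text-to-speech service for audio output.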