Mobile Deep Classification of UAE Banknotes for the Visually Challenged

A. Khalil, Maha Yaghi, Tasnim Basmaji, Mohamed Faizal, Z. Farhan, Ali Ali, Mohammed Ghazal

2022 9th International Conference on Future Internet of Things and Cloud (FiCloud), August 2022. DOI: 10.1109/FiCloud57274.2022.00053
Citations: 0
Abstract
This paper proposes an artificial intelligence-powered mobile application for currency recognition to assist people with visual impairments. The application combines an RCNN, a pre-trained MobileNet V2 convolutional neural network, transfer learning, the Hough transform, and a text-to-speech service to detect and classify captured currency and generate an auditory cue. To train our AI model, we collected 700 ultra-high-definition images of United Arab Emirates banknotes, covering the front and back of each banknote at various distances, angles, and lighting conditions to avoid overfitting. When triggered, our mobile application captures an image with the phone camera. The image is then pre-processed and fed to our on-device currency detector and classifier. Finally, we use text-to-speech to convert the predicted class label into an audio signal played on the user’s Bluetooth earpiece. Our results show that our system can be an effective tool for helping the visually challenged identify and differentiate banknotes using increasingly available smartphones. Our banknote classification model was validated with a held-out test set and 5-fold cross-validation, achieving average accuracies of 70% and 88%, respectively.
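The 5-fold cross-validation procedure reported above can be illustrated with a minimal sketch. This is not the authors' pipeline: the toy data and the majority-vote placeholder classifier below are assumptions standing in for banknote feature vectors and the trained MobileNet V2 model; only the fold-splitting and accuracy-averaging logic is the point.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(X, y, train_fn, k=5):
    """Hold out each fold once for testing; return per-fold accuracies."""
    folds = kfold_indices(len(X), k)
    accs = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = train_fn(X[train_idx], y[train_idx])
        accs.append(float(np.mean(predict(X[test_idx]) == y[test_idx])))
    return accs

def majority_classifier(X_train, y_train):
    """Placeholder model: always predicts the training fold's majority class."""
    vals, counts = np.unique(y_train, return_counts=True)
    majority = vals[np.argmax(counts)]
    return lambda X: np.full(len(X), majority)

# Toy data standing in for banknote images and denomination labels.
X = np.arange(100).reshape(100, 1)
y = np.array([0] * 60 + [1] * 40)
accs = cross_validate(X, y, majority_classifier, k=5)
print(f"mean accuracy over 5 folds: {np.mean(accs):.2f}")
```

Because every sample appears in exactly one test fold, the mean of the per-fold accuracies summarizes performance over the whole dataset, which is why the paper can quote a single cross-validated figure alongside the test-set one.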