Alliah May Eugenio, Marian Jowie Patulot, Lykha Jane Seguiro, Angelica Nicole Tuazon, Shekinah Lor B. Huyo-a, Mideth B. Abisado, G. Sampedro
Title: EyeRis: Visual Image Recognition using Machine Learning for the Visually-Impaired
Published in: 2023 International Conference on Electronics, Information, and Communication (ICEIC)
Publication date: 2023-02-05
DOI: 10.1109/ICEIC57457.2023.10049927
Citations: 0
Abstract
Visually impaired people struggle daily to recognize and distinguish the objects around them, and therefore largely depend on assistance from other people. Since smartphones have become a necessity in the modern world, the researchers developed a machine learning-based mobile application for object recognition to help the visually impaired. Thanks to machine learning techniques and algorithms, software applications can now deliver accurate results in image classification and processing tasks. In this study, the researchers use a convolutional neural network (CNN)-based system on TensorFlow Lite to create a mobile visual information system built on a machine learning strategy and a deep learning framework. The main objectives of the smartphone application, EyeRis, are to recognize and categorize items in real time and to separate photographs from user-selected scenarios. Results were analyzed and compared based on the recognition accuracy data obtained by the app, demonstrating the utility of CNNs as models for image recognition.
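The abstract does not reproduce the EyeRis model itself. As a minimal sketch of the convolution, ReLU, and pooling stages at the core of any CNN image classifier of the kind described, the following toy example runs one such stage in NumPy; all array sizes and filter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear unit."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Illustrative input: a 6x6 "image" passed through one conv + ReLU + 2x2 pool stage.
image = np.arange(36, dtype=float).reshape(6, 6)
diff_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # toy horizontal-difference filter
features = max_pool(relu(conv2d(image, diff_kernel)))
print(features.shape)  # (2, 2)
```

A real deployment such as the one the paper describes would instead train many such filter stages and export the network to a TensorFlow Lite model for on-device inference; this snippet only shows the arithmetic one stage performs.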