Siwar Rekik, Lamya Alaqeel, Roaa Kordi, Sara Al-Rashoud
Title: Visually Impaired Assistance with Arabic Speech Recognition on GPS
DOI: 10.1109/ICCIS49240.2020.9257692
Published in: 2020 2nd International Conference on Computer and Information Sciences (ICCIS)
Publication date: 2020-10-13
Citations: 1
Abstract
People with impaired vision regularly need a guide to help them avoid obstacles, and several electronic devices currently provide guidance to remote locations. Automatic Speech Recognition (ASR) is one of the latest trends in technology and has become a primary communication tool for people with special needs, including blind and visually impaired users. Today, such users in Saudi Arabia struggle to find public places that offer accessibility services such as Braille menus and flyers, audio facilities, ease of movement, and health care. Our application, Ayn, provides a database of locations together with all the information these users need. This project investigated the suitability of a user-centered, client-server approach for developing a talking GPS intended to fill a niche in outdoor wayfinding, and it highlights the importance of having more public places that serve blind people, such as restaurants, shopping centers, hospitals, and parks. The developed application uses a speech-recognition and speech-synthesis interface, and the prototype incorporates a custom web application that accesses the Google Maps API. The system is designed to be scalable and extensible with additional features. The quality of Arabic speech recognition is improved over the Google Speech Recognition API for Arabic using a machine learning algorithm: an Artificial Neural Network (ANN).
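The abstract reports that an Artificial Neural Network improves Arabic recognition quality over the Google Speech Recognition API, but it does not specify the network's architecture or training setup. As a purely illustrative sketch, the snippet below trains a minimal feedforward network (one hidden layer, sigmoid activations, plain gradient descent) on toy feature vectors standing in for acoustic features; every size, name, and the toy dataset are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical stand-in for the paper's ANN component: a tiny 2-8-1
# feedforward network trained with gradient descent on toy binary-labeled
# "feature vectors" (an XOR pattern). The real system would operate on
# acoustic features extracted from Arabic speech.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: four 2-dimensional feature vectors with binary labels.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Randomly initialized weights for a 2-8-1 network.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))

def forward(X):
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # output probability
    return h, out

def mse(out):
    return float(np.mean((out - y) ** 2))  # mean squared error

_, out0 = forward(X)
initial_loss = mse(out0)

lr = 1.0
for _ in range(3000):
    h, out = forward(X)
    # Backpropagation for MSE loss with sigmoid activations.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)

_, out_final = forward(X)
final_loss = mse(out_final)
print(f"loss: {initial_loss:.4f} -> {final_loss:.4f}")
```

This only demonstrates the generic supervised-training loop behind any ANN; the paper's actual contribution — applying such a model to improve Arabic ASR — would additionally require acoustic feature extraction and an Arabic speech corpus, neither of which is detailed in the abstract.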