{"title":"基于深度学习的视障人士智能辅助框架","authors":"Y. Muhammad, M. Jan, Spyridon Mastorakis, B. Zada","doi":"10.1109/COINS54846.2022.9854984","DOIUrl":null,"url":null,"abstract":"According to the World Health Organization (WHO), there are millions of visually impaired people in the world who face a lot of difficulties in moving independently. They always need help from people with normal sight. The capability to find their way to their intended destination in an unseen place is a major challenge for visually impaired people. This paper aimed to assist these individuals in resolving their problems with moving to any place on their own. To this end, we developed an intelligent system for visually impaired people using a deep learning (DL) algorithm, i.e., convolutional neural network (CNN) architecture, AlexNet, to recognize the situation and scene objects automatically in real-time. The proposed system consists of a Raspberry Pi, ultrasonic sensors, a camera, breadboards, jumper wires, a buzzer, and headphones. Breadboards are used to connect the sensors with the help of a Raspberry Pi and jumper wires. The sensors are used for the detection of obstacles and potholes, while the camera performs as a virtual eye for the visually impaired people by recognizing these obstacles in any direction (front, left, and right). The proposed system provides information about objects to a blind person. The system automatically calculates the distance between the blind person and the obstacle that how far he/she is from the obstacle. Furthermore, a voice message alerts the blind person about the obstacle and directs him/her via earphones. 
The obtained experimental results show that the utilized CNN architecture AlexNet yielded an impressive result of 99.56% validation accuracy and has a validation loss of 0.0201%.","PeriodicalId":187055,"journal":{"name":"2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A Deep Learning-Based Smart Assistive Framework for Visually Impaired People\",\"authors\":\"Y. Muhammad, M. Jan, Spyridon Mastorakis, B. Zada\",\"doi\":\"10.1109/COINS54846.2022.9854984\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"According to the World Health Organization (WHO), there are millions of visually impaired people in the world who face a lot of difficulties in moving independently. They always need help from people with normal sight. The capability to find their way to their intended destination in an unseen place is a major challenge for visually impaired people. This paper aimed to assist these individuals in resolving their problems with moving to any place on their own. To this end, we developed an intelligent system for visually impaired people using a deep learning (DL) algorithm, i.e., convolutional neural network (CNN) architecture, AlexNet, to recognize the situation and scene objects automatically in real-time. The proposed system consists of a Raspberry Pi, ultrasonic sensors, a camera, breadboards, jumper wires, a buzzer, and headphones. Breadboards are used to connect the sensors with the help of a Raspberry Pi and jumper wires. The sensors are used for the detection of obstacles and potholes, while the camera performs as a virtual eye for the visually impaired people by recognizing these obstacles in any direction (front, left, and right). The proposed system provides information about objects to a blind person. 
The system automatically calculates the distance between the blind person and the obstacle that how far he/she is from the obstacle. Furthermore, a voice message alerts the blind person about the obstacle and directs him/her via earphones. The obtained experimental results show that the utilized CNN architecture AlexNet yielded an impressive result of 99.56% validation accuracy and has a validation loss of 0.0201%.\",\"PeriodicalId\":187055,\"journal\":{\"name\":\"2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS)\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/COINS54846.2022.9854984\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COINS54846.2022.9854984","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Deep Learning-Based Smart Assistive Framework for Visually Impaired People
According to the World Health Organization (WHO), millions of visually impaired people worldwide face significant difficulties in moving independently and often depend on the assistance of sighted people. Finding the way to an intended destination in an unfamiliar place is a major challenge for visually impaired people. This paper aims to help these individuals move to any place on their own. To this end, we developed an intelligent system for visually impaired people that uses a deep learning (DL) algorithm, namely the convolutional neural network (CNN) architecture AlexNet, to recognize the situation and scene objects automatically in real time. The proposed system consists of a Raspberry Pi, ultrasonic sensors, a camera, breadboards, jumper wires, a buzzer, and headphones. The breadboards and jumper wires connect the sensors to the Raspberry Pi. The sensors detect obstacles and potholes, while the camera acts as a virtual eye for the visually impaired person by recognizing these obstacles in any direction (front, left, and right). The proposed system provides information about surrounding objects to a blind person and automatically calculates how far he/she is from an obstacle. Furthermore, a voice message delivered via earphones alerts the blind person to the obstacle and directs him/her. The experimental results show that the AlexNet CNN architecture achieved an impressive validation accuracy of 99.56% with a validation loss of 0.0201%.
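The sensing-and-alert pipeline the abstract describes (measure the ultrasonic echo round-trip time, convert it to a distance, then pick the direction of the nearest obstacle for the voice alert) can be sketched in plain Python. This is a minimal illustration, not the authors' code: the 1 m alert threshold, the function names, and the example echo durations are assumptions, and the GPIO code that would time the actual echo pulse on a Raspberry Pi is omitted.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at ~20 °C


def echo_to_distance_m(pulse_duration_s: float) -> float:
    """Convert an ultrasonic echo round-trip time to a one-way distance.

    The pulse travels to the obstacle and back, so the one-way distance
    is half of (duration * speed of sound).
    """
    return pulse_duration_s * SPEED_OF_SOUND_M_S / 2.0


def nearest_obstacle(front_m: float, left_m: float, right_m: float,
                     threshold_m: float = 1.0):
    """Return the direction of the closest obstacle inside the alert
    threshold, or None if all three directions are clear."""
    readings = {"front": front_m, "left": left_m, "right": right_m}
    within = {d: r for d, r in readings.items() if r < threshold_m}
    if not within:
        return None
    return min(within, key=within.get)


if __name__ == "__main__":
    # Three hypothetical echo durations, one per sensor direction.
    front = echo_to_distance_m(0.0040)  # ~0.69 m
    left = echo_to_distance_m(0.0120)   # ~2.06 m
    right = echo_to_distance_m(0.0058)  # ~0.99 m
    direction = nearest_obstacle(front, left, right)
    if direction is not None:
        print(f"Obstacle {direction}, about {min(front, left, right):.2f} m away")
```

In the full system, the resulting direction string would be passed to a text-to-speech engine and played through the earphones, with the buzzer sounding whenever an obstacle crosses the threshold.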