{"title":"利用深度学习、计算机视觉和自然语言处理技术,为视障人士提供辅助技术","authors":"Rashi Agarwal, Vineet Jaruhar","doi":"10.51767/jc1311","DOIUrl":null,"url":null,"abstract":"Visual impairment can cause difficulties in performing daily activities. Unfamiliarity of places, hindrance in known routes, water puddles, pot holes, stray animals, sudden road incidents etc. decreases the confidence and interaction of the Visually Impaired People (VIP) leading to human assistance. We are presenting a human-like autonomous assistant which is in the form of a stick made with technologies such as Computer Vision, NLP, GPS, Deep Learning and Embedded Systems. VIP can communicate with the stick through Chatbot. He can get answers to general questions like weather status, time of the day, what is my location, where is point A, etc. and specific questions like what is in front of me, in which room I am, provide me the navigation from point A to point B etc.. In situations like an obstacle in the path, pot holes, ascending/descending stairs, approaching people etc., our stick will notify the VIP and will give him the clear path also. The stick will come along with a smart phone based application which will help in keeping a track of the location which can be shared with anyone. We have used CNN to make an object detection model which can classify 20 different objects. A day and night classifier is also included to correctly figure out the time of the day. The stick is equipped with a camera, ultrasonic and wet sensors, GPS module with controller. Real time video through camera will be captured and processed by the controller. Then this will evaluate the feed through object detection model. The outcome can be notified to the VIP through Chatbot. This will enable safety, security, control and more independence to the VIPs.","PeriodicalId":408370,"journal":{"name":"BSSS Journal of Computer","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"IMPLEMENTING ASSISTIVE TECHNOLOGY FOR THE VISUALLY IMPAIRED USING DEEP LEARNING, COMPUTER VISION AND NLP\",\"authors\":\"Rashi Agarwal, Vineet Jaruhar\",\"doi\":\"10.51767/jc1311\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visual impairment can cause difficulties in performing daily activities. Unfamiliarity of places, hindrance in known routes, water puddles, pot holes, stray animals, sudden road incidents etc. decreases the confidence and interaction of the Visually Impaired People (VIP) leading to human assistance. We are presenting a human-like autonomous assistant which is in the form of a stick made with technologies such as Computer Vision, NLP, GPS, Deep Learning and Embedded Systems. VIP can communicate with the stick through Chatbot. He can get answers to general questions like weather status, time of the day, what is my location, where is point A, etc. and specific questions like what is in front of me, in which room I am, provide me the navigation from point A to point B etc.. In situations like an obstacle in the path, pot holes, ascending/descending stairs, approaching people etc., our stick will notify the VIP and will give him the clear path also. The stick will come along with a smart phone based application which will help in keeping a track of the location which can be shared with anyone. We have used CNN to make an object detection model which can classify 20 different objects. 
A day and night classifier is also included to correctly figure out the time of the day. The stick is equipped with a camera, ultrasonic and wet sensors, GPS module with controller. Real time video through camera will be captured and processed by the controller. Then this will evaluate the feed through object detection model. The outcome can be notified to the VIP through Chatbot. This will enable safety, security, control and more independence to the VIPs.\",\"PeriodicalId\":408370,\"journal\":{\"name\":\"BSSS Journal of Computer\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BSSS Journal of Computer\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.51767/jc1311\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BSSS Journal of Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.51767/jc1311","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
IMPLEMENTING ASSISTIVE TECHNOLOGY FOR THE VISUALLY IMPAIRED USING DEEP LEARNING, COMPUTER VISION AND NLP
Visual impairment can make daily activities difficult. Unfamiliar places, obstructions on known routes, water puddles, potholes, stray animals, sudden road incidents and similar hazards reduce the confidence and independence of Visually Impaired People (VIP), forcing them to rely on human assistance. We present a human-like autonomous assistant in the form of a walking stick built with Computer Vision, NLP, GPS, Deep Learning and Embedded Systems. The user communicates with the stick through a chatbot and can ask general questions, such as the weather, the time of day, "What is my location?" or "Where is point A?", as well as specific ones, such as "What is in front of me?", "Which room am I in?" or "Navigate me from point A to point B." When the stick detects an obstacle in the path, a pothole, ascending or descending stairs, or an approaching person, it notifies the user and also suggests a clear path. The stick is accompanied by a smartphone application that keeps track of the user's location, which can be shared with anyone. We have used a CNN to build an object detection model that can classify 20 different objects, and a day/night classifier is included to correctly determine the time of day. The stick is equipped with a camera, ultrasonic and wetness sensors, and a GPS module connected to a controller. The controller captures and processes real-time video from the camera, evaluates the feed with the object detection model, and relays the outcome to the user through the chatbot. This provides safety, security, control and greater independence to visually impaired users.
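The abstract describes a capture, detect and notify pipeline (camera feed, CNN object detection, chatbot alert) plus a day/night check. The paper itself publishes no source code, so the following Python sketch is only an illustration of that loop under stated assumptions: ObjectDetector and Chatbot are hypothetical placeholders for the authors' 20-class CNN and NLP chatbot, the day/night check is a naive brightness threshold rather than the paper's classifier, and only the OpenCV capture calls are real library APIs.

# Minimal sketch of the capture -> detect -> notify loop described in the abstract.
# ObjectDetector and Chatbot are hypothetical stand-ins for the authors' components;
# only the OpenCV calls (VideoCapture, read, cvtColor) are real APIs.
import cv2


class ObjectDetector:
    """Stand-in for the 20-class CNN object detection model."""

    def predict(self, frame):
        # The real model would return (label, confidence, box) detections for the
        # current frame; this placeholder detects nothing.
        return []


class Chatbot:
    """Stand-in for the NLP chatbot that relays results to the user."""

    def say(self, message):
        # On the actual device this would be routed to text-to-speech.
        print(message)


def is_daytime(frame, brightness_threshold=80):
    """Naive day/night check via mean brightness (not the paper's classifier)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return gray.mean() > brightness_threshold


def run_assistant(camera_index=0, min_confidence=0.5):
    detector, bot = ObjectDetector(), Chatbot()
    cap = cv2.VideoCapture(camera_index)  # camera mounted on the stick
    announced_time_of_day = False
    try:
        while True:
            ok, frame = cap.read()  # one frame of the real-time feed
            if not ok:
                break
            if not announced_time_of_day:
                bot.say("It is daytime" if is_daytime(frame) else "It is night time")
                announced_time_of_day = True
            for label, confidence, _box in detector.predict(frame):
                if confidence >= min_confidence:
                    bot.say(f"{label} ahead")  # warn the user of the obstacle
    finally:
        cap.release()


if __name__ == "__main__":
    run_assistant()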