Mobility Assistance for Visually Impaired Using LiDAR
Vikrant Gurav, Abhinav Parameshwaran, Kevin Sherla
2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), 2021-12-08
DOI: 10.1109/CSDE53843.2021.9744605
Abstract
In this paper, we propose a mobile application intended primarily as a substitute for the walking canes used by the visually impaired. Using LiDAR (Light Detection and Ranging), a 3D model of the scanned environment is constructed in real time. Through haptic feedback, the user is made aware of obstacles in their view, with the feedback frequency inversely proportional to the distance to the obstacle. Furthermore, a CNN model that takes the spatial and depth features of the environment as input identifies the type of obstacle in the user's view, and the result is announced to the user through synthesized speech.
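To make the distance-to-feedback relation concrete, the sketch below maps obstacle distance to a haptic pulse rate that is inversely proportional to distance. This is an illustrative assumption, not code from the paper: the function name `haptic_pulse_rate`, the gain `k`, and the clamping range are all hypothetical parameters chosen for readability.

```python
# Illustrative sketch (not from the paper): convert an obstacle distance into
# a haptic pulse rate, with rate inversely proportional to distance and
# clamped to a usable range. All constants are assumed, not reported values.

def haptic_pulse_rate(distance_m: float,
                      k: float = 2.0,
                      min_rate_hz: float = 0.5,
                      max_rate_hz: float = 10.0) -> float:
    """Return haptic pulses per second for an obstacle distance in metres.

    rate = k / distance, clamped to [min_rate_hz, max_rate_hz].
    """
    if distance_m <= 0:
        return max_rate_hz  # obstacle effectively at the sensor
    return max(min_rate_hz, min(max_rate_hz, k / distance_m))


if __name__ == "__main__":
    # Nearer obstacles produce faster vibration, farther ones slower.
    for d in (0.25, 0.5, 1.0, 2.0, 4.0):
        print(f"{d:.2f} m -> {haptic_pulse_rate(d):.2f} Hz")
```

In practice the returned rate would drive the phone's vibration motor; the clamping simply keeps the feedback perceptible at long range and non-saturating at very short range.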