{"title":"A deep learning model to assist visually impaired in pothole detection using computer vision","authors":"Arjun Paramarthalingam , Jegan Sivaraman , Prasannavenkatesan Theerthagiri , Balaji Vijayakumar , Vignesh Baskaran","doi":"10.1016/j.dajour.2024.100507","DOIUrl":null,"url":null,"abstract":"<div><p>Visually impaired individuals encounter numerous impediments when traveling, such as navigating unfamiliar routes, accessing information, and transportation, which can limit their mobility and restrict their access to opportunities. However, assistive technologies and infrastructure solutions such as tactile paving, audio cues, voice announcements, and smartphone applications have been developed to mitigate these challenges. Visually impaired individuals also face difficulties when encountering potholes while traveling. Potholes can pose a significant safety hazard, as they can cause individuals to trip and fall, potentially leading to injury. For visually impaired individuals, identifying and avoiding potholes can be particularly challenging. The solutions ensure that all individuals can travel safely and independently, regardless of their visual abilities. An innovative approach that leverages the You Only Look Once (YOLO) algorithm to detect potholes and provide auditory or haptic feedback to visually impaired individuals has been proposed in this paper. The dataset of pothole images was trained and integrated into an application for detecting potholes in real-time image data using a camera. The app provides feedback to the user, allowing them to navigate potholes and increasing their mobility and safety. This approach highlights the potential of YOLO for pothole detection and provides a valuable tool for visually impaired individuals. According to the testing, the model achieved 82.7% image accuracy and 30 Frames Per Second (FPS) accuracy in live video. The model is trained to detect potholes close to the user, but it may be hard to detect potholes far away from the user. The current model is only trained to detect potholes, but visually impaired people face other challenges. The proposed technology is a portable option for visually impaired people.</p></div>","PeriodicalId":100357,"journal":{"name":"Decision Analytics Journal","volume":"12 ","pages":"Article 100507"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772662224001115/pdfft?md5=6ba37c16b7b3913b63959c8ebb277ada&pid=1-s2.0-S2772662224001115-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Analytics Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772662224001115","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Visually impaired individuals encounter numerous impediments when traveling, such as navigating unfamiliar routes, accessing information, and using transportation, which can limit their mobility and restrict their access to opportunities. However, assistive technologies and infrastructure solutions such as tactile paving, audio cues, voice announcements, and smartphone applications have been developed to mitigate these challenges. Visually impaired individuals also face difficulties when encountering potholes while traveling. Potholes pose a significant safety hazard, as they can cause individuals to trip and fall, potentially leading to injury, and identifying and avoiding them is particularly challenging without sight. Effective solutions must ensure that all individuals can travel safely and independently, regardless of their visual abilities. This paper proposes an approach that leverages the You Only Look Once (YOLO) algorithm to detect potholes and provide auditory or haptic feedback to visually impaired individuals. A model was trained on a dataset of pothole images and integrated into an application that detects potholes in real-time camera imagery. The app provides feedback to the user, allowing them to navigate around potholes and increasing their mobility and safety. This approach highlights the potential of YOLO for pothole detection and provides a valuable tool for visually impaired individuals. In testing, the model achieved 82.7% detection accuracy on images and processed live video at 30 frames per second (FPS). The model is trained to detect potholes close to the user and may struggle to detect potholes farther away. The current model is trained only to detect potholes, whereas visually impaired people face other obstacles as well. The proposed technology offers a portable option for visually impaired people.
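The abstract describes a pipeline of YOLO inference on live camera frames followed by auditory or haptic alerts. The sketch below illustrates that loop under stated assumptions: it uses an Ultralytics-style YOLO API, a hypothetical weights file `pothole_yolo.pt`, and a placeholder `alert_user` function standing in for the paper's feedback channel; it is not the authors' implementation.

```python
# Minimal sketch of real-time pothole detection with alert feedback.
# Assumptions: Ultralytics-style YOLO API, hypothetical "pothole_yolo.pt" weights,
# and a placeholder alert function in place of the paper's audio/haptic output.
import cv2
from ultralytics import YOLO

model = YOLO("pothole_yolo.pt")   # hypothetical model trained on pothole images
cap = cv2.VideoCapture(0)         # live camera feed


def alert_user(num_potholes: int) -> None:
    """Placeholder for the auditory/haptic feedback described in the paper."""
    print(f"Warning: {num_potholes} pothole(s) detected ahead")


while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.5, verbose=False)  # detect potholes in this frame
    boxes = results[0].boxes
    if boxes is not None and len(boxes) > 0:
        alert_user(len(boxes))

cap.release()
```

In a deployed assistive app, the `print` call would be replaced by text-to-speech or device vibration, and the confidence threshold would be tuned to balance missed potholes against false alarms.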