Semantic perception of curbs beyond traversability for real-world navigation assistance systems
Kailun Yang, L. Bergasa, Eduardo Romera, Dongming Sun, Kaiwei Wang, R. Barea
2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES), September 2018
DOI: 10.1109/ICVES.2018.8519526
Citations: 9
Abstract
Intelligent Vehicles (IV) and navigational assistance for the Visually Impaired (VI) are becoming highly coupled: both fulfill safety-critical tasks, working towards a utopia of safety for all traffic participants. In this paper, the main purpose is to leverage recently emerged methods from self-driving technology and transfer them to augment perception and aid navigation in ambient assisted living. More precisely, we propose to exploit pixel-wise semantic segmentation to support curb negotiation and traversability awareness along the pathway of visually impaired individuals. At the crux of our unified perception framework is an effort to attain efficient understanding by proposing a deep architecture built on residual factorized convolutions and pyramidal representations. A comprehensive set of experiments demonstrates accurate scene parsing results at real-time inference speed. Crucially, real-world performance over state-of-the-art approaches qualifies the proposed framework for assistance when deployed on two wearable navigation systems: a pair of commercial smart glasses and a prototype of a customized device.
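The abstract's "residual factorized convolution" refers to replacing a full 2-D kernel with a pair of 1-D kernels, which cuts weights and multiply-adds while preserving the result exactly for rank-1 (separable) kernels. The paper does not spell out the layers, so the following is only a minimal NumPy sketch of that core idea (the function name `conv2d_valid` and the 3x3 size are illustrative, not taken from the paper):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D cross-correlation, 'valid' mode (no padding)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
v = rng.standard_normal((3, 1))   # 3x1 vertical filter
h = rng.standard_normal((1, 3))   # 1x3 horizontal filter

# One full 3x3 kernel (9 weights) vs. the factorized pair (3 + 3 = 6 weights).
full = conv2d_valid(img, v @ h)
factored = conv2d_valid(conv2d_valid(img, v), h)
assert np.allclose(full, factored)
```

For a separable kernel the two paths are mathematically identical, so the saving (9 vs. 6 weights per 3x3 filter, and proportionally more for larger kernels) comes at no loss of expressiveness for that kernel; in a residual block the factorized convolutions are additionally wrapped with a skip connection, which the sketch omits.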