Indoor Navigation Assistance System for Visually Impaired with Semantic Segmentation using EdgeTPU

Victor Tran, K. Sood, Kayhan Bakian, Aneesh Reddy Sannapu
DOI: 10.1109/SmartNets58706.2023.10215643 (https://doi.org/10.1109/SmartNets58706.2023.10215643)
Published in: 2023 International Conference on Smart Applications, Communications and Networking (SmartNets), 2023-07-25
Citations: 0

Abstract

Recognizing the limited supply and high cost of alternative solutions for visual assistance, such as human guides, it is critical to create an affordable and accessible means of recognizing walkable paths for visually impaired individuals. In this paper, we assess deep learning and traditional machine learning models for image segmentation and object detection that identify walkable indoor paths in real time from low-resolution camera images. Specifically, we leverage the processing capabilities of Google's EdgeTPU chip, which accelerates inference for lightweight TensorFlow models deployed on embedded devices. We retrain the MobileNet computer vision model on the ADE MIT Scene Parsing Benchmark Dataset and improve accuracy by consolidating its 150 categories into just two. The segmentation model is trained with quantization awareness and co-compiled with a lightweight object detection model. The resulting model performs simultaneous semantic segmentation and object detection with inference times of 65 milliseconds. Our approach lays the foundation for transforming the level of assistance available to the visually impaired, letting them sense the world through a wearable device assistance system for indoor navigation.
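The category consolidation the abstract describes — collapsing the dataset's 150 scene-parsing classes into a binary walkable/non-walkable labeling — can be sketched as a simple label remapping. This is a minimal illustration, not the paper's actual mapping: the class indices listed as walkable below are hypothetical placeholders (the paper does not specify which of the 150 categories it merged into each of its two classes).

```python
import numpy as np

# Hypothetical ADE20K class indices treated as "walkable" (e.g. floor,
# road, carpet). Illustrative only; the paper's exact mapping is not given.
WALKABLE_IDS = {3, 6, 9, 11, 28}

def consolidate_labels(seg_map: np.ndarray) -> np.ndarray:
    """Collapse a multi-class segmentation map into a binary mask:
    1 where the pixel's class is walkable, 0 elsewhere."""
    return np.isin(seg_map, list(WALKABLE_IDS)).astype(np.uint8)

# Tiny 2x3 example segmentation map with per-pixel class indices.
seg = np.array([[3, 50, 6],
                [120, 9, 1]])
mask = consolidate_labels(seg)
# mask == [[1, 0, 1], [0, 1, 0]]
```

Applying such a remapping to the ground-truth masks before retraining turns the 150-way segmentation task into a far easier binary one, which is consistent with the accuracy improvement the abstract reports.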