Training Indoor and Scene-Specific Semantic Segmentation Models to Assist Blind and Low Vision Users in Activities of Daily Living

Authors: Ruijie Sun, Giles Hamilton-Fletcher, Sahil Faizal, Chen Feng, Todd E. Hudson, John-Ross Rizzo, Kevin C. Chan
Journal: IEEE Open Journal of Engineering in Medicine and Biology, vol. 6, pp. 533–539
DOI: 10.1109/OJEMB.2025.3607816
Published: 2025-09-09 · Impact Factor: 2.9 · JCR Q3 (Engineering, Biomedical)
URL: https://ieeexplore.ieee.org/document/11153825/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11153825
Citations: 0

Abstract

Goal: Persons with blindness or low vision (pBLV) face challenges in completing activities of daily living (ADLs/IADLs). Semantic segmentation techniques on smartphones, like DeepLabV3+, can quickly assist in identifying key objects, but their performance across different indoor settings and lighting conditions remains unclear. Methods: Using the MIT ADE20K SceneParse150 dataset, we trained and evaluated AI models for specific indoor scenes (kitchen, bedroom, bathroom, living room) and compared them with a generic indoor model. Performance was assessed using mean accuracy and intersection-over-union metrics. Results: Scene-specific models outperformed the generic model, particularly in identifying ADL/IADL objects. Models focusing on rooms with more unique objects showed the greatest improvements (bedroom, bathroom). Scene-specific models were also more resilient to low-light conditions. Conclusions: These findings highlight how using scene-specific models can boost key performance indicators for assisting pBLV across different functional environments. We suggest that a dynamic selection of the best-performing models on mobile technologies may better facilitate ADLs/IADLs for pBLV.
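The abstract states that performance was assessed with mean accuracy and intersection-over-union metrics. As a rough illustration of how these are typically computed from predicted and ground-truth class-label masks, here is a minimal NumPy sketch; `segmentation_metrics` is a hypothetical helper written for this page, not the authors' evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    """Compute mean IoU and mean per-class pixel accuracy.

    pred, target: 2-D integer arrays of class indices, same shape.
    Classes absent from the ground truth are skipped.
    """
    ious, accs = [], []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        if not target_c.any():  # class not present in ground truth
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(intersection / union)
        accs.append(intersection / target_c.sum())
    return float(np.mean(ious)), float(np.mean(accs))

# Toy example on a 2x2 mask with two classes
pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
miou, macc = segmentation_metrics(pred, target, num_classes=2)
```

Note that this averages per image; benchmark evaluations on datasets such as ADE20K SceneParse150 usually accumulate the intersection and union counts over the entire test set before taking the per-class ratio.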
Source journal metrics
CiteScore: 9.50
Self-citation rate: 3.40%
Articles per year: 20
Review time: 10 weeks
Journal description: The IEEE Open Journal of Engineering in Medicine and Biology (IEEE OJEMB) is dedicated to serving the community of innovators in medicine, technology, and the sciences, with the core goal of advancing the highest-quality interdisciplinary research between these disciplines. The journal firmly believes that the future of medicine depends on close collaboration between biology and technology, and that fostering interaction between these fields is an important way to advance key discoveries that can improve clinical care. IEEE OJEMB is a gold open access journal in which the authors retain the copyright to their papers and readers have free access to the full text and PDFs on the IEEE Xplore® Digital Library. However, authors are required to pay an article processing fee at the time their paper is accepted for publication, to cover the cost of publication.