Fixed DO Solfège based object detection and positional analysis for the visually impaired

Siddharth Kalra, Sarika Jain, Amit Agarwal
{"title":"Fixed DO Solfège based object detection and positional analysis for the visually impaired","authors":"Siddharth Kalra, Sarika Jain, Amit Agarwal","doi":"10.1109/ICRITO.2017.8342497","DOIUrl":null,"url":null,"abstract":"This paper proposes a novel approach towards object detection and its subsequent positional analysis for a visually impaired subject, using Fixed DO Solfège, basically sounds that we have heard multiple times, and we are aware of them and also their sequence is thus naturally perceived by us viz. DO-RE-MI-FA-SO-LA-TI This approach utilizes a concept of virtual zones superimposed on the viewport of a small wearable CMOS camera module mounted on the eyeglasses of the subject, communicating to a computing device with attached earplugs for sonic feedbacks in the Fixed DO Solfège notation. By implantation of a HAAR cascade based classifier trained system, several need to know objects are trained and fed into the recognition pool, which are thus detected in the viewport overlaid by the virtual harmonic zones further linked to the Fixed DO Solfège notations, and are mapped throughout the viewport in an incremental/sequential manner. As the subject moves his hand through the viewport, and his hands overlap the recognized objects, an audible beep basis on the sound of the zone is played on the earplugs. This not only enables the subject to know that in which direction a particular object is situated, but also, because of the sound, he can also know as to how many objects lie before/after the current object, this gives a sense of relative recognition and positional cognition of the objects.","PeriodicalId":357118,"journal":{"name":"2017 6th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 6th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRITO.2017.8342497","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

This paper proposes a novel approach to object detection and subsequent positional analysis for a visually impaired subject, using Fixed DO Solfège: the familiar sequence of syllables DO-RE-MI-FA-SO-LA-TI, whose order is naturally perceived because we have heard it many times. The approach superimposes virtual zones on the viewport of a small wearable CMOS camera module mounted on the subject's eyeglasses; the module communicates with a computing device and attached earplugs that provide sonic feedback in the Fixed DO Solfège notation. Using a HAAR-cascade-based classifier, several need-to-know objects are trained and fed into the recognition pool. These objects are then detected in the viewport, which is overlaid with the virtual harmonic zones linked to the Fixed DO Solfège notes and mapped across the viewport in an incremental, sequential manner. As the subject moves a hand through the viewport and the hand overlaps a recognized object, an audible beep corresponding to the sound of that zone is played through the earplugs. This not only lets the subject know in which direction a particular object is situated but, because of the pitch of the sound, also how many objects lie before or after the current one, giving a sense of relative recognition and positional cognition of the objects.
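The abstract describes a pipeline of Haar cascade detection over a camera viewport that is divided into virtual zones, each zone mapped to a Fixed DO Solfège syllable that is sounded when a detection falls inside it. The following Python/OpenCV sketch only illustrates that idea and is not the authors' implementation: the cascade file name, the choice of seven vertical zones, and the play_note stub are assumptions for demonstration.

# Illustrative sketch (assumed details): split the viewport into seven
# vertical zones, map each zone to a Fixed DO Solfege syllable, and
# report the syllable of the zone containing each Haar-cascade detection.
import cv2

SOLFEGE = ["DO", "RE", "MI", "FA", "SO", "LA", "TI"]

def zone_note(x_center, frame_width, zones=7):
    """Return the solfege syllable for the zone containing x_center."""
    idx = min(int(x_center / frame_width * zones), zones - 1)
    return SOLFEGE[idx]

def play_note(note):
    # Placeholder: a real system would synthesize or play the note
    # through the earplugs; here we only print it.
    print("zone note:", note)

def main():
    # "object_cascade.xml" stands in for a trained Haar cascade for a
    # need-to-know object class; the paper trains several such cascades.
    cascade = cv2.CascadeClassifier("object_cascade.xml")
    cap = cv2.VideoCapture(0)  # wearable camera feed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in detections:
            # Use the horizontal center of the detection to pick the zone.
            play_note(zone_note(x + w / 2.0, frame.shape[1]))
        cv2.imshow("viewport", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()

In the paper's setup the feedback is triggered when the subject's hand overlaps a recognized object; the sketch omits hand tracking and simply sounds the zone of every detection to keep the example short.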