Cross-modal codification of images with auditory stimuli: a language for the visually impaired

Takahisa Kishino, Sun Zhe, Roberto Marchisio, R. Micheletto
Journal: arXiv: Neurons and Cognition
DOI: 10.1167/17.10.1356
Published: 2017-05-15
Citations: 1

Abstract

In this study we describe a methodology for realizing visual image cognition in a broader sense, via cross-modal stimulation through the auditory channel. An original algorithm for converting two-dimensional images into sounds was established and tested on several subjects. Our results show that subjects were able to discriminate, with 95% precision, the different sounds corresponding to different test geometric shapes. Moreover, after brief learning sessions on simple images, subjects were able to recognize a single target among a group of 16 complex, never-trained images by hearing its acoustic counterpart. The recognition rate was found to depend on image characteristics; in 90% of cases, subjects did better than choosing at random. This study contributes to the understanding of cross-modal perception and supports the realization of systems that use acoustic signals to help visually impaired persons recognize objects and improve navigation.
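The abstract does not detail the authors' image-to-sound algorithm, but a common family of sonification schemes scans an image column by column over time, mapping row position to pitch and pixel brightness to loudness. The sketch below is a minimal illustration of that generic approach, not the paper's actual method; the function name, frequency range, and scan parameters are all assumptions for demonstration.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sample_rate=8000,
                   f_min=200.0, f_max=2000.0):
    """Sonify a 2-D grayscale image (values in [0, 1]).

    Generic column-scan scheme (NOT the paper's algorithm):
    columns are played left to right over `duration` seconds;
    each row gets a sine frequency (top = high, bottom = low),
    and pixel brightness sets that sine's amplitude.
    """
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    freqs = np.linspace(f_max, f_min, rows)  # top row -> highest pitch
    chunks = []
    t0 = 0
    for c in range(cols):
        t = (t0 + np.arange(samples_per_col)) / sample_rate
        # Sum of sinusoids weighted by this column's brightness values.
        col = image[:, c][:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        chunks.append(col.sum(axis=0))
        t0 += samples_per_col
    signal = np.concatenate(chunks)
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal

# Example: a diagonal line yields a frequency sweep over time.
img = np.eye(8)  # hypothetical 8x8 test shape
audio = image_to_sound(img, duration=0.5, sample_rate=8000)
```

Under such a mapping, distinct geometric shapes produce audibly distinct pitch-time patterns, which is the property the discrimination experiments in the abstract rely on.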