Virtually zooming-in with sensory substitution for blind users
Galit Buchs, S. Maidenbaum, A. Amedi, S. Levy-Tzedek
2015 International Conference on Virtual Rehabilitation (ICVR), June 9, 2015. DOI: 10.1109/ICVR.2015.7358613
When perceiving a scene visually, we constantly move our eyes and focus on particular details, which we integrate into a coherent percept. Can blind individuals integrate visual information in this way? Can they even conceptualize zooming in on sub-parts of visual images? We explore this question virtually using the EyeMusic Sensory Substitution Device (SSD). SSDs transfer information usually received by one sense via another, here `seeing' with sound. This question is especially important for SSD users, since SSDs typically down-sample visual stimuli into low-resolution images, in which zooming in on sub-parts could significantly improve users' perception. Five blind participants used the EyeMusic with a zoom mechanism in a virtual environment to identify cartoon figures. Using a touchscreen, they could zoom into different parts of the image, identify individual facial features, and integrate them into a full facial representation. These findings show that such integration of visual information is indeed possible, even for users who are blind from birth, and demonstrate the approach's potential for practical visual rehabilitation.
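To make the down-sampling and zoom mechanism described in the abstract concrete, the sketch below shows one plausible pipeline: crop a sub-region of the image around a selected point, block-average it to a coarse grid, and scan the grid column by column, mapping higher rows to higher pitches. This is an illustrative assumption, not the EyeMusic's actual encoding; the grid resolution, pitch mapping, and all function names (zoom_crop, downsample, column_pitches) are hypothetical.

```python
"""Illustrative zoom-then-sonify pipeline (NOT the EyeMusic implementation).

All parameters and names here are assumptions chosen for clarity only.
"""
import numpy as np


def zoom_crop(image: np.ndarray, cx: float, cy: float, zoom: float) -> np.ndarray:
    """Crop a window centered at (cx, cy), given as fractions of width/height.

    zoom=1.0 returns the full image; zoom=2.0 returns a window half the
    size in each dimension, i.e. 2x magnification after down-sampling.
    """
    h, w = image.shape[:2]
    win_h, win_w = max(1, int(h / zoom)), max(1, int(w / zoom))
    top = int(np.clip(cy * h - win_h / 2, 0, h - win_h))
    left = int(np.clip(cx * w - win_w / 2, 0, w - win_w))
    return image[top:top + win_h, left:left + win_w]


def downsample(image: np.ndarray, rows: int = 24, cols: int = 40) -> np.ndarray:
    """Block-average the image down to a coarse rows x cols grid
    (the low-resolution representation an SSD would actually encode)."""
    h, w = image.shape[:2]
    r_idx = np.arange(rows + 1) * h // rows
    c_idx = np.arange(cols + 1) * w // cols
    out = np.empty((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            out[i, j] = image[r_idx[i]:r_idx[i + 1], c_idx[j]:c_idx[j + 1]].mean()
    return out


def column_pitches(grid: np.ndarray, base_hz: float = 220.0,
                   semitone: float = 1.059463) -> list:
    """Scan the grid left to right; for each column, list frequencies of the
    'on' cells, with rows nearer the top mapped to higher pitches. This is a
    generic image-to-sound convention used purely for illustration."""
    rows, cols = grid.shape
    threshold = grid.mean()
    sequence = []
    for j in range(cols):
        on_rows = np.where(grid[:, j] > threshold)[0]
        # Row 0 is the top of the image -> highest pitch in the column.
        sequence.append([base_hz * semitone ** (rows - 1 - r) for r in on_rows])
    return sequence


if __name__ == "__main__":
    # Toy 200x300 grayscale image with a bright diagonal "feature".
    img = np.zeros((200, 300))
    for k in range(200):
        img[k, min(299, k + 50)] = 1.0

    # Zoom into the upper-left quadrant at 2x, then down-sample and encode.
    zoomed = zoom_crop(img, cx=0.25, cy=0.25, zoom=2.0)
    grid = downsample(zoomed)
    print(f"{len(column_pitches(grid))} sound columns generated")
```

In such a pipeline, zooming in before down-sampling means the same coarse grid is spent on a smaller region, so a facial feature that occupied only a few cells in the full-image view can fill the grid, which is the perceptual benefit the study set out to test.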