{"title":"我们如何从稀疏的动觉接触中编码空间布局?","authors":"R. Klatzky, S. Lederman","doi":"10.1109/HAPTIC.2003.1191269","DOIUrl":null,"url":null,"abstract":"We investigated people's ability to report the shape and scale of a spatial layout after sparse contact, without vision. We propose that the initial representation of sparsely contacted layout is kinesthetic. From this can be computed a configural representation that supports reports of shape and scale, but at the cost of increased error. In four experiments, participants' fingers were guided to a two-point layout, after which they returned to the points or reported distance and/or angle, subject to a change in location and sometimes a rotation as well. Errors in reproducing inter-point distance, i.e., the scale of the layout, were smallest for the task of returning to the touched points and nearly twice as great when distance was reported at a new location. Errors in reproducing inter-point angle, i.e., the shape of the layout, were smallest for the task of returning to the touched points and nearly twice as great when angle was reported subject to rotation. The data highlight limitations on reporting the shape and scale of a haptically rendered layout after sparse contact.","PeriodicalId":177962,"journal":{"name":"11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2003. HAPTICS 2003. Proceedings.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"How well can we encode spatial layout from sparse kinesthetic contact?\",\"authors\":\"R. Klatzky, S. Lederman\",\"doi\":\"10.1109/HAPTIC.2003.1191269\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We investigated people's ability to report the shape and scale of a spatial layout after sparse contact, without vision. We propose that the initial representation of sparsely contacted layout is kinesthetic. From this can be computed a configural representation that supports reports of shape and scale, but at the cost of increased error. In four experiments, participants' fingers were guided to a two-point layout, after which they returned to the points or reported distance and/or angle, subject to a change in location and sometimes a rotation as well. Errors in reproducing inter-point distance, i.e., the scale of the layout, were smallest for the task of returning to the touched points and nearly twice as great when distance was reported at a new location. Errors in reproducing inter-point angle, i.e., the shape of the layout, were smallest for the task of returning to the touched points and nearly twice as great when angle was reported subject to rotation. The data highlight limitations on reporting the shape and scale of a haptically rendered layout after sparse contact.\",\"PeriodicalId\":177962,\"journal\":{\"name\":\"11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2003. HAPTICS 2003. Proceedings.\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2003-03-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2003. HAPTICS 2003. 
Proceedings.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HAPTIC.2003.1191269\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2003. HAPTICS 2003. Proceedings.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HAPTIC.2003.1191269","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
How well can we encode spatial layout from sparse kinesthetic contact?
We investigated people's ability to report the shape and scale of a spatial layout after sparse contact, without vision. We propose that the initial representation of a sparsely contacted layout is kinesthetic. From this, a configural representation can be computed that supports reports of shape and scale, but at the cost of increased error. In four experiments, participants' fingers were guided to a two-point layout, after which they either returned to the points or reported the inter-point distance and/or angle, subject to a change in location and sometimes a rotation as well. Errors in reproducing inter-point distance, i.e., the scale of the layout, were smallest for the task of returning to the touched points and nearly twice as great when distance was reported at a new location. Errors in reproducing inter-point angle, i.e., the shape of the layout, were smallest for the task of returning to the touched points and nearly twice as great when the angle was reported subject to a rotation. The data highlight limitations on reporting the shape and scale of a haptically rendered layout after sparse contact.
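To make the two error measures concrete, the following is a minimal sketch (not the authors' analysis code) of how the inter-point distance (scale) and inter-point angle (shape) of a two-point layout, and the corresponding reproduction errors, could be computed from 2-D finger-contact coordinates. The function names and the example values are illustrative assumptions.

```python
import math

def layout_scale_and_shape(p1, p2):
    """Inter-point distance (scale) and inter-point angle (shape) of a
    two-point layout, with each point given as (x, y) in a common frame."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    distance = math.hypot(dx, dy)              # scale of the layout
    angle = math.degrees(math.atan2(dy, dx))   # orientation of the inter-point axis
    return distance, angle

def reproduction_errors(actual, reproduced):
    """Signed distance error and angular error between an actual two-point
    layout and a participant's reproduction of it."""
    d_a, a_a = layout_scale_and_shape(*actual)
    d_r, a_r = layout_scale_and_shape(*reproduced)
    distance_error = d_r - d_a
    angle_error = (a_r - a_a + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return distance_error, angle_error

# Hypothetical example: a 10 cm layout oriented at 30 degrees,
# reproduced 1 cm too long and rotated by 5 degrees.
actual = ((0.0, 0.0),
          (10.0 * math.cos(math.radians(30)), 10.0 * math.sin(math.radians(30))))
reproduced = ((0.0, 0.0),
              (11.0 * math.cos(math.radians(35)), 11.0 * math.sin(math.radians(35))))
print(reproduction_errors(actual, reproduced))   # approximately (1.0, 5.0)
```

Under this reading, the paper's "scale" errors correspond to the distance term and its "shape" errors to the angular term, with larger errors expected when the report is made at a new location or under rotation than when the fingers simply return to the touched points.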