Spatial codes for movement coordination do not depend on developmental vision

T. Heed, B. Roeder

Seeing and Perceiving, vol. 25, no. 1, p. 51, 2012-01-01
DOI: 10.1163/187847612X646721 (https://doi.org/10.1163/187847612X646721)
Abstract
When people make oscillating right–left movements with their two index fingers while holding their hands palms down, they find it easier to move the fingers symmetrically (i.e., both fingers towards the middle, then both fingers to the outside) than in parallel (i.e., both fingers towards the left, then both fingers towards the right). It was originally proposed that this effect is due to concurrent activation of homologous muscles in the two hands. However, symmetric movements are also easier when one of the hands is turned palm up, thus requiring concurrent use of opposing rather than homologous muscles. This was interpreted to indicate that movement coordination relies on perceptual rather than muscle-based information (Mechsner et al., 2001). The current experiment tested whether the spatial code used in this task depends on vision. Participants made either symmetrical or parallel right–left movements with their two index fingers while their palms were either both facing down, both facing up, or one facing up and one down. Neither in sighted nor in congenitally blind participants did movement execution depend on hand posture. Rather, both groups were always more efficient when making symmetrical rather than parallel movements with respect to external space. We conclude that the spatial code used for movement coordination does not crucially depend on vision. Furthermore, whereas congenitally blind people predominantly use body-based (somatotopic) spatial coding in perceptual tasks (Roder et al., 2007), they use external spatial codes in movement tasks, with performance indistinguishable from the sighted.