{"title":"Investigating task and modality switching costs using bimodal stimuli","authors":"Rajwant Sandhu, B. Dyson","doi":"10.1163/187847612X646451","DOIUrl":"https://doi.org/10.1163/187847612X646451","url":null,"abstract":"Investigations of concurrent task and modality switching effects have to date been studied under conditions of uni-modal stimulus presentation. As such, it is difficult to directly compare resultant task and modality switching effects, as the stimuli afford both tasks on each trial, but only one modality. The current study investigated task and modality switching using bi-modal stimulus presentation under various cue conditions: task and modality (double cue), either task or modality (single cue) or no cue. Participants responded to either the identity or the position of an audio–visual stimulus. Switching effects were defined as staying within a modality/task (repetition) or switching into a modality/task (change) from trial n − 1 to trial n, with analysis performed on trial n data. While task and modality switching costs were sub-additive across all conditions replicating previous data, modality switching effects were dependent on the modality being attended, and task switching effects were dependent on the task being performed. Specifically, visual responding and position responding revealed significant costs associated with modality and task switching, while auditory responding and identity responding revealed significant gains associated with modality and task switching. The effects interacted further, revealing that costs and gains associated with task and modality switching varying with the specific combination of modality and task type. The current study reconciles previous data by suggesting that efficiently processed modality/task information benefits from repetition while less efficiently processed information benefits from change due to less interference of preferred processing across consecutive trials.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"22-22"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646451","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64426709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"4 year olds localize tactile stimuli using an external frame of reference","authors":"Jannath Begum, A. Bremner, Dorothy Cowie","doi":"10.1163/187847612X646631","DOIUrl":"https://doi.org/10.1163/187847612X646631","url":null,"abstract":"Adults show a deficit in their ability to localize tactile stimuli to their hands when their arms are in the less familiar, crossed posture (e.g., Overvliet et al., 2011; Shore et al., 2002). It is thought that this ‘crossed-hands effect’ arises due to conflict (when the hands are crossed) between the anatomical and external frames of reference within which touches can be perceived. Pagel et al. (2009) studied this effect in young children and observed that the crossed-hands effect first emerges after 5.5-years. In their task, children were asked to judge the temporal order of stimuli presented across their hands in quick succession. Here, we present the findings of a simpler task in which children were asked to localize a single vibrotactile stimulus presented to either hand. We also compared the effect of posture under conditions in which children either did, or did not, have visual information about current hand posture. With this method, we observed a crossed-hands effect in the youngest age-group testable; 4-year-olds. We conclude that young children localize tactile stimuli with respect to an external frame of reference from early in childhood or before (cf. Bremner et al., 2008). Additionally, when visual information about posture was made available, 4- to 5-year-olds’ tactile localization accuracy in the uncrossed-hands posture deteriorated and the crossed-hands effect disappeared. We discuss these findings with respect to visual–tactile-proprioceptive integration abilities of young children and examine potential sources of the discrepancies between our findings and those of Pagel et al. (2009).","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"41-41"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646631","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64426730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The spatial distribution of auditory attention in early blindness","authors":"Elodie Lerens, L. Renier, A. Volder","doi":"10.1163/187847612X646767","DOIUrl":"https://doi.org/10.1163/187847612X646767","url":null,"abstract":"Early blind people compensate for their lack of vision by developing superior abilities in the remaining senses such as audition (Collignon et al., 2006; Gougoux et al., 2004; Wan et al., 2010). Previous studies reported supra-normal abilities in auditory spatial attention, particularly for the localization of peripheral stimuli in comparison with frontal stimuli (Lessard et al., 1998; Roder et al., 1999). However, it is unknown whether this specific supra-normal ability extends to the non-spatial attention domain. Here we compared the performance of early blind subjects and sighted controls, who were blindfolded, during an auditory non-spatial attention task: target detection among distractors according to tone frequency. We paid a special attention to the potential effect of the sound source location, comparing the accuracy and speed in target detection in the peripheral and frontal space. Blind subjects displayed shorter reaction times than sighted controls for both peripheral and frontal stimuli. Moreover, in the two groups of subjects, we observed an interaction effect between the target location and the distractors location: the target was detected faster when its location was different from the location of the distractors. However, this effect was attenuated in early blind subjects and even cancelled in the condition with frontal targets and peripheral distractors. We conclude that early blind people compensate for the lack of vision by enhancing their ability to process auditory information but also by changing the spatial distribution of their auditory attention resources.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"41 1","pages":"55-55"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646767","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Heterogeneous auditory–visual integration: Effects of pitch, band-width and visual eccentricity","authors":"A. Thelen, M. Murray","doi":"10.1163/187847612X647081","DOIUrl":"https://doi.org/10.1163/187847612X647081","url":null,"abstract":"The identification of monosynaptic connections between primary cortices in non-human primates has recently been complemented by observations of early-latency and low-level non-linear interactions in brain responses in humans as well as observations of facilitative effects of multisensory stimuli on behavior/performance in both humans and monkeys. While there is some evidence in favor of causal links between early–latency interactions within low-level cortices and behavioral facilitation, it remains unknown if such effects are subserved by direct anatomical connections between primary cortices. In non-human primates, the above monosynaptic projections from primary auditory cortex terminate within peripheral visual field representations within primary visual cortex, suggestive of there being a potential bias for the integration of eccentric visual stimuli and pure tone (vs. broad-band) sounds. To date, behavioral effects in humans (and monkeys) have been observed after presenting (para)foveal stimuli with any of a range of auditory stimuli from pure tones to noise bursts. The present study aimed to identify any heterogeneity in the integration of auditory–visual stimuli. To this end, we employed a 3 × 3 within subject design that varied the visual eccentricity of an annulus (2.5°, 5.7°, 8.9°) and auditory pitch (250, 1000, 4000 Hz) of multisensory stimuli while subjects completed a simple detection task. We also varied the auditory bandwidth (pure tone vs. pink noise) across blocks of trials that a subject completed. To ensure attention to both modalities, multisensory stimuli were equi-probable with both unisensory visual and unisensory auditory trials that themselves varied along the abovementioned dimensions. Median reaction times for each stimulus condition as well as the percentage gain/loss of each multisensory condition vs. the best constituent unisensory condition were measured. The preliminary results reveal that multisensory interactions (as measured from simple reaction times) are indeed heterogeneous across the tested dimensions and may provide a means for delimiting the anatomo-functional substrates of behaviorally-relevant early–latency neural response interactions. Interestingly, preliminary results suggest selective interactions for visual stimuli when presented with broadband stimuli but not when presented with pure tones. More precisely, centrally presented visual stimuli show the greatest index of multisensory facilitation when coupled to a high pitch tone embedded in pink noise, while visual stimuli presented at approximately 5.7° of visual angle show the greatest slowing of reaction times.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"89-89"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647081","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluative similarity hypothesis of crossmodal correspondences: A developmental view","authors":"D. Janković","doi":"10.1163/187847612X647603","DOIUrl":"https://doi.org/10.1163/187847612X647603","url":null,"abstract":"Crossmodal correspondences have been widely demonstrated, although mechanisms that stand behind the phenomenon have not been fully established yet. According to the Evaluative similarity hypothesis crossmodal correspondences are influenced by evaluative (affective) similarity of stimuli from different sensory modalities (Jankovic, 2010, Journal of Vision 10(7), 859). From this view, detection of similar evaluative information in stimulation from different sensory modalities facilitates crossmodal correspondences and multisensory integration. The aim of this study was to explore the evaluative similarity hypothesis of crossmodal correspondences in children. In Experiment 1 two groups of participants (nine- and thirteen-year-olds) were asked to make explicit matches between presented auditory stimuli (1 s long sound clips) and abstract visual patterns. In Experiment 2 the same participants judged abstract visual patterns and auditory stimuli on the set of evaluative attributes measuring affective valence and arousal. The results showed that crossmodal correspondences are mostly influenced by evaluative similarity of visual and auditory stimuli in both age groups. The most frequently matched were visual and auditory stimuli congruent in both valence and arousal, followed by stimuli congruent in valence, and finally stimuli congruent in arousal. Evaluatively incongruent stimuli demonstrated low crossmodal associations especially in older group.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"13 1","pages":"127-127"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647603","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial codes for movement coordination do not depend on developmental vision","authors":"T. Heed, B. Roeder","doi":"10.1163/187847612X646721","DOIUrl":"https://doi.org/10.1163/187847612X646721","url":null,"abstract":"When people make oscillating right–left movements with their two index fingers while holding their hands palms down, they find it easier to move the fingers symmetrically (i.e., both fingers towards the middle, then both fingers to the outside) than parallel (i.e., both fingers towards the left, then both fingers towards the right). It was originally proposed that this effect is due to concurrent activation of homologous muscles in the two hands. However, symmetric movements are also easier when one of the hands is turned palm up, thus requiring concurrent use of opposing rather than homologous muscles. This was interpreted to indicate that movement coordination relies on perceptual rather than muscle-based information (Mechsner et al., 2001). The current experiment tested whether the spatial code used in this task depends on vision. Participants made either symmetrical or parallel right–left movements with their two index fingers while their palms were either both facing down, both facing up, or one facing up and one down. Neither in sighted nor in congenitally blind participants did movement execution depend on hand posture. Rather, both groups were always more efficient when making symmetrical rather than parallel movements with respect to external space. We conclude that the spatial code used for movement coordination does not crucially depend on vision. Furthermore, whereas congenitally blind people predominately use body-based (somatotopic) spatial coding in perceptual tasks (Roder et al., 2007), they use external spatial codes in movement tasks, with performance indistinguishable from the sighted.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"51-51"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646721","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Age-related changes in temporal processing of vestibular stimuli","authors":"Alex K. Malone, N. N. Chang, T. Hullar","doi":"10.1163/187847612X647847","DOIUrl":"https://doi.org/10.1163/187847612X647847","url":null,"abstract":"Falls are one of the leading causes of disability in the elderly. Previous research has shown that falls may be related to changes in the temporal integration of multisensory stimuli. This study compared the temporal integration and processing of a vestibular and auditory stimulus in younger and older subjects. The vestibular stimulus consisted of a continuous sinusoidal rotational velocity delivered using a rotational chair and the auditory stimulus consisted of 5 ms of white noise presented dichotically through headphones (both at 0.5 Hz). Simultaneity was defined as perceiving the chair being at its furthest rightward or leftward trajectory at the same moment as the auditory stimulus was perceived in the contralateral ear. The temporal offset of the auditory stimulus was adjusted using a method of constant stimuli so that the auditory stimulus either led or lagged true simultaneity. 15 younger (ages 21–27) and 12 older (ages 63–89) healthy subjects were tested using a two alternative forced choice task to determine at what times they perceived the two stimuli as simultaneous. Younger subjects had a mean temporal binding window of 334 ± 37 ms (mean ± SEM) and a mean point of subjective simultaneity of 83 ± 15 ms. Older subjects had a mean TBW of 556 ± 36 ms and a mean point of subjective simultaneity of 158 ± 27. Both differences were significant indicating that older subjects have a wider temporal range over which they integrate vestibular and auditory stimuli than younger subjects. These findings were consistent upon retesting and were not due to differences in vestibular perception thresholds.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"153-153"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647847","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recovery periods of event-related potentials indicating crossmodal interactions between the visual, auditory and tactile system","authors":"Marlene Hense, Boukje Habets, B. Roeder","doi":"10.1163/187847612X647478","DOIUrl":"https://doi.org/10.1163/187847612X647478","url":null,"abstract":"In sequential unimodal stimulus designs the time it takes for an event-related potential (ERP)-amplitude to recover is often interpreted as a transient decrement in responsiveness of the generating cortical circuits. This effect has been called neural refractoriness, which is the larger the more similar the repeated stimuli are and thus indicates the degree of overlap between the neural generator systems activated by two sequential stimuli. We hypothesize that crossmodal refractoriness-effects in a crossmodal sequential design might be a good parameter to assess the ‘modality overlap’ in the involved neural generators and the degree of crossmodal interaction. In order to investigate crossmodal ERP refractory period effects we presented visual and auditory (Experiment 1) and visual and tactile stimuli (Experiment 2) with inter stimulus intervals of 1 and 2 s to adult participants. Participants had to detect rare auditory and visual stimuli. Both, intra- and crossmodal ISI effects for all modalities were found for three investigated ERP-deflections (P1, N1, P2). The topography of the crossmodal refractory period effect of the N1- and P2-deflections in Experiment 1 and of P1 and N1 in Experiment 2 of both modalities was similar to the corresponding intramodal refractory effect, yet more confined and crossmodal effects were generally weaker. The crossmodal refractory effect for the visual P1, however, had a distinct, less circumscribed topography with respect to the intramodal effect. These results suggest that ERP refractory effects might be a promising indicator of the neural correlates of crossmodal interactions.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"9 1","pages":"114-114"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647478","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multisensory processes in the synaesthetic brain — An event-related potential study in multisensory competition situations","authors":"J. Neufeld, C. Sinke, Daniel Wiswede, H. Emrich, S. Bleich, G. Szycik","doi":"10.1163/187847612X647333","DOIUrl":"https://doi.org/10.1163/187847612X647333","url":null,"abstract":"In synaesthesia certain external stimuli (e.g., music) trigger automatically internally generated sensations (e.g., colour). Results of behavioural investigations indicate that multisensory processing works differently in synaesthetes. However, the reasons for these differences and the underlying neural correlates remain unclear. The aim of the current study was to investigate if synaesthetes show differences in electrophysiological components of multimodal processing. Further we wanted to test synaesthetes for an enhanced distractor filtering ability in multimodal situations. Therefore, line drawings of animals and objects were presented to participants, either with congruent (typical sound for presented picture, e.g., picture of bird together with chirp), incongruent (picture of bird together with gun shot) or without simultaneous auditory stimulation. 14 synaesthetes (auditory–visual and grapheme-colour synaesthetes) and 13 controls participated in the study. We found differences in the event-related potentials between synaesthetes and controls, indicating an altered multisensory processing of bimodal stimuli in synaesthetes in competition situations. These differences were especially found over frontal brain sites. An interaction effect between group (synaesthetes vs. controls) and stimulation (unimodal visual vs. congruent multimodal) could not be detected. Therefore we conclude that multisensory processing works in general similar in synaesthetes and controls and that only specifically integration processes in multisensory competition situations are altered in synaesthetes.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"101-101"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647333","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An invisible speaker can facilitate auditory speech perception","authors":"M. Grabowecky, Emmanuel Guzman-Martinez, L. Ortega, Satoru Suzuki","doi":"10.1163/187847612X647801","DOIUrl":"https://doi.org/10.1163/187847612X647801","url":null,"abstract":"Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether or not each word named a tool. While participants listened to the words they watched a visual display that presented a video clip of the speaker synchronously speaking the auditorily presented words, or the same speaker articulating different words. Critically, the speaker’s face was either visible (the aware trials), or suppressed from awareness using continuous flash suppression. Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials responses to the tool targets were no faster with the synchronous than asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. However, on the suppressed trials responses to the tool targets were significantly faster with the synchronous than asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"148-148"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647801","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}