Multisensory Integration of Native and Nonnative Speech in Bilingual and Monolingual Adults.
Riham Hafez Mohamed, Niloufar Ansari, Bahaa Abdeljawad, Celina Valdivia, Abigail Edwards, Kaitlyn M A Parks, Yassaman Rafat, Ryan A Stevenson
Multisensory Research, published 2024-10-08. DOI: 10.1163/22134808-bja10132

Face-to-face speech communication is an audiovisual process during which interlocutors use both auditory speech signals and visual oral articulations to understand one another. These sensory inputs are merged into a single, unified percept through a process known as multisensory integration. Audiovisual speech integration is known to be influenced by many factors, including listener experience. In this study, we investigated the roles of bilingualism and language experience in integration. We used a McGurk paradigm in which participants were presented with incongruent auditory and visual speech: an auditory utterance of 'ba' paired with visual articulations of 'ga', which often induces the perception of 'da' or 'tha' (a fusion effect that is strong evidence of integration), and an auditory utterance of 'ga' paired with visual articulations of 'ba', which often induces the perception of 'bga' (a combination effect that is weaker evidence of integration). We compared fusion and combination effects across three groups (N = 20 each): English monolinguals, Spanish-English bilinguals, and Arabic-English bilinguals, with stimuli presented in all three languages. Monolinguals exhibited significantly stronger multisensory integration than bilinguals in fusion effects, regardless of the stimulus language. Bilinguals exhibited a nonsignificant trend by which greater language experience led to increased integration as measured by fusion. These results held regardless of whether the McGurk stimuli were presented as stand-alone syllables or in the context of real words.
The Impact of Viewing Distance and Proprioceptive Manipulations on a Virtual Reality Based Balance Test.
Max Teaford, Zachary J Mularczyk, Alannah Gernon, Daniel M Merfeld
Multisensory Research, published 2024-08-29. DOI: 10.1163/22134808-bja10131

Our ability to maintain balance plays a pivotal role in day-to-day activities and is believed to result from interactions between several sensory modalities, including vision and proprioception. Past research has revealed that different aspects of vision have an impact on balance, including relative visual motion (i.e., sensed motion of the visual field due to head motion), which can be manipulated by changing the viewing distance between the individual and the predominant visual cues. However, only a small number of studies have examined this in the context of virtual reality, and none examined the impact of proprioceptive manipulations at viewing distances greater than 3.5 m. To address this, we conducted an experiment in which 25 healthy adults viewed a dartboard in a virtual gymnasium while standing in a narrow stance on firm and compliant surfaces. The dartboard was presented at three viewing distances (1.5 m, 6 m, and 24 m), and a blacked-out condition was also included. Our results indicate that decreases in relative visual motion, due to an increased viewing distance, yield decreased postural stability, but only with simultaneous proprioceptive disruptions.
What is the Relation between Chemosensory Perception and Chemosensory Mental Imagery?
Charles Spence
Multisensory Research, published 2024-08-27. DOI: 10.1163/22134808-bja10130

The study of chemosensory mental imagery is undoubtedly made more difficult by the profound individual differences that have been reported in the vividness of, for example, olfactory mental imagery. At the same time, the majority of those researchers who have attempted to study people's mental imagery abilities for taste (gustation) have actually mostly been studying flavour mental imagery. Nevertheless, there exists a body of human psychophysical research showing that chemosensory mental imagery exhibits a number of similarities with chemosensory perception, and the two systems have frequently been shown to interact with one another. The similarities and differences between chemosensory perception and chemosensory mental imagery at the introspective, behavioural, psychophysical, and cognitive-neuroscience levels in humans are considered in this narrative historical review. The latest neuroimaging evidence shows that many of the same brain areas are engaged by chemosensory mental imagery as have previously been documented to be involved in chemosensory perception. That said, the pattern of neural connectivity is reversed between the 'top-down' control of chemosensory mental imagery and the 'bottom-up' control seen in the case of chemosensory perception. At the same time, however, there remain a number of intriguing questions: whether it is even possible to distinguish between orthonasal and retronasal olfactory mental imagery, and the extent to which mental imagery for flavour, which most people not only describe as, but also perceive to be, the 'taste' of food and drink, is capable of reactivating the entire flavour network in the human brain.
Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS to the Left pSTS.
EunSeon Ahn, Areti Majumdar, Taraz G Lee, David Brang
Multisensory Research, published 2024-08-16. DOI: 10.1163/22134808-bja10129. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11388023/pdf/

Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept that differs from the auditory and visual components, known as the McGurk effect. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect rely on largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily disrupt processing within this region while subjects were presented with either congruent or incongruent (McGurk) audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation had no effect on the positive benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.
Audiovisual Speech Perception Benefits are Stable from Preschool through Adolescence.
Liesbeth Gijbels, Jason D Yeatman, Kaylah Lalonde, Piper Doering, Adrian K C Lee
Multisensory Research, published 2024-07-03. DOI: 10.1163/22134808-bja10128

The ability to leverage visual cues in speech perception, especially against noisy backgrounds, is well established from infancy to adulthood. Yet the developmental trajectory of audiovisual benefits remains a topic of debate. The inconsistency in findings can be attributed to relatively small sample sizes or tasks that are not appropriate for the age groups tested. We designed an audiovisual speech perception task that was cognitively and linguistically age-appropriate from preschool to adolescence and recruited a large sample (N = 161) of children aged 4-15. We found that even the youngest children show reliable speech perception benefits when provided with visual cues, and that these benefits are consistent throughout development when auditory and visual signals match. Individual variability is explained by how children experience their own speech-in-noise performance rather than by the quality of the signal itself. This underscores the importance of visual speech for young children, who are regularly in noisy environments such as classrooms and playgrounds.
Can Multisensory Olfactory Training Improve Olfactory Dysfunction Caused by COVID-19?
Gözde Filiz, Simon Bérubé, Claudia Demers, Frank Cloutier, Angela Chen, Valérie Pek, Émilie Hudon, Josiane Bolduc-Bégin, Johannes Frasnelli
Multisensory Research, published 2024-07-03. DOI: 10.1163/22134808-bja10127

Approximately 30-60% of people suffer from olfactory dysfunction (OD), such as hyposmia or anosmia, after being diagnosed with COVID-19; 15-20% of these cases last beyond resolution of the acute phase. Previous studies have shown that olfactory training can benefit patients with OD caused by viral infections of the upper respiratory tract. The aim of this study was to evaluate whether multisensory olfactory training, involving simultaneously tasting and seeing congruent stimuli, is more effective than classical olfactory training. We recruited 68 participants with OD persisting for two months or more after COVID-19 infection and divided them into three groups. The first group received olfactory training that involved smelling four odorants (strawberry, cheese, coffee, lemon; classical olfactory training). The second group received the same olfactory stimuli presented retronasally (i.e., as droplets on the tongue) while simultaneous, congruent gustatory (i.e., sweet, salty, bitter, sour) and visual (corresponding images) stimuli were presented (multisensory olfactory training). The third group received odorless propylene glycol in four bottles (control group). Training was carried out twice daily for 12 weeks. We assessed olfactory function and olfaction-specific quality of life before and after the intervention. Both intervention groups showed a similar, significant improvement in olfactory function, although there was no difference in the quality-of-life assessment. Both multisensory and classical training can be beneficial for OD following a viral infection; however, only classical olfactory training led to an improvement significantly stronger than that of the control group.
Glassware Influences the Perception of Orange Juice in Simulated Naturalistic versus Urban Conditions.
Chunmao Wu, Pei Li, Charles Spence
Multisensory Research, published 2024-06-18. DOI: 10.1163/22134808-bja10126

The latest research demonstrates that people's perception of orange juice can be influenced by the shape/type of receptacle in which it happens to be served. Two studies are reported that were designed to investigate the impact, if any, that the shape/type of glass might exert on the perception of the contents, the emotions induced on tasting the juice, and the consumer's intention to purchase orange juice. The same quantity of orange juice (100 ml) was presented and evaluated in three different glasses: a straight-sided, a curved, and a tapered glass. Questionnaires were used to assess taste (aroma, flavour intensity, sweetness, freshness, and fruitiness), pleasantness, and intention to buy. Study 2 assessed the impact of the same three glasses in two digitally rendered atmospheric conditions (nature vs urban). In Study 1, the perceived sweetness and pleasantness of the orange juice were significantly influenced by the shape/type of the glass in which it was presented. Study 2 revealed significant interactions between condition (nature vs urban) and glass shape (tapered, straight-sided, and curved). Perceived aroma, flavour intensity, and pleasantness were all significantly affected by the simulated audiovisual context or atmosphere. Compared to the urban condition, perceived aroma, freshness, fruitiness, and pleasantness were rated significantly higher in the nature condition; conversely, flavour intensity and sweetness were rated significantly higher in the urban condition. These results are likely to be relevant to those providing food services, and to company managers offering beverages to their customers.
Perceptual Adaptation to Noise-Vocoded Speech by Lip-Read Information: No Difference between Dyslexic and Typical Readers.
Faezeh Pourhashemi, Martijn Baart, Jean Vroomen
Multisensory Research, published 2024-05-23. DOI: 10.1163/22134808-bja10125

Auditory speech can be difficult to understand, but seeing the articulatory movements of a speaker can drastically improve spoken-word recognition and, in the longer term, helps listeners adapt to acoustically distorted speech. Given that individuals with developmental dyslexia (DD) have sometimes been reported to rely less on lip-read speech than typical readers, we examined lip-read-driven adaptation to distorted speech in a group of adults with DD (N = 29) and a comparison group of typical readers (N = 29). Participants were presented with acoustically distorted Dutch words (six-channel noise-vocoded speech, NVS) in audiovisual training blocks (where the speaker could be seen) interspersed with audio-only test blocks. Results showed that words were more accurately recognized if the speaker could be seen (a lip-read advantage), and that performance steadily improved across subsequent audio-only test blocks (adaptation). There were no group differences, suggesting that perceptual adaptation to disrupted spoken words is comparable for dyslexic and typical readers. These data open up a research avenue for investigating the degree to which lip-read-driven speech adaptation generalizes across different types of auditory degradation, and across dyslexic readers with decoding versus comprehension difficulties.
Is Front associated with Above and Back with Below? Association between Allocentric Representations of Spatial Dimensions.
Lari Vainio, Martti Vainio
Multisensory Research, published 2024-05-17. DOI: 10.1163/22134808-bja10124

Previous research has revealed congruency effects between different spatial dimensions, such as right and up. In the audiovisual context, high-pitched sounds are associated with the spatial dimensions of up/above and front, while low-pitched sounds are associated with the spatial dimensions of down/below and back. This raises the question of whether there could also be a spatial association between above and front and/or between below and back. Participants were presented with a high- or low-pitch stimulus at the onset of a visual stimulus. In one block, participants responded according to the above/below location of the visual target stimulus if the target appeared in front of the reference object; in the other block, they performed these above/below responses if the target appeared behind the reference. In general, reaction times revealed an advantage in processing the target location in the front-above and back-below locations. The front-above/back-below effect was more robust for the back-below component, and was significantly larger in reaction times that were slower, rather than faster, than a participant's median value. However, pitch did not robustly influence responses to front/back or above/below locations. We propose that this effect might be based on a conceptual association between different spatial dimensions.
Revisiting the Deviation Effects of Irrelevant Sound on Serial and Nonserial Tasks.
Yu Nakajima, Hiroshi Ashida
Multisensory Research, published 2024-05-10. DOI: 10.1163/22134808-bja10123

Two types of disruptive effects of irrelevant sound on visual tasks have been reported: the changing-state effect and the deviation effect. The idea that the deviation effect, which arises from attentional capture, is independent of task requirements, whereas the changing-state effect is specific to tasks that require serial processing, has been examined by comparing tasks that do or do not require serial-order processing. While many previous studies used the missing-item task as the nonserial task, it is unclear whether other cognitive tasks yield similar results regarding the differing task specificity of the two effects. Kattner et al. (Memory and Cognition, 2023) used the mental-arithmetic task as the nonserial task and failed to demonstrate the deviation effect. However, several procedural factors could account for the absence of the deviation effect, such as differences in design and procedure (e.g., the study was conducted online, with intermixed sound conditions). In the present study, we aimed to investigate whether the deviation effect could be observed in both the serial-recall and mental-arithmetic tasks when these procedural factors were modified. We found strong evidence of the deviation effect in both tasks when stimulus presentation and experimental design were aligned with previous studies that demonstrated the deviation effect (e.g., conducted in person, with blockwise presentation of sound). The results support the idea that the deviation effect is not task-specific.