The Combined Effect of Visual Stimulus Complexity and Semantic Content on Audiovisual Associative Equivalence Learning
Kálmán Tót, Noémi Harcsa-Pintér, Adél Papp, Balázs Bodosi, Attila Nagy, Gabriella Eördegh
Brain and Behavior, 15(9), 2025. DOI: [10.1002/brb3.70902](https://onlinelibrary.wiley.com/doi/10.1002/brb3.70902)
Citations: 0
Abstract
Background
The Rutgers Acquired Equivalence Test (RAET) is an associative learning task that requires participants to learn pairs of visual stimuli and then recall and generalize these associations. To further explore this cognitive task, we developed three audiovisual learning tests with the same structure as the original RAET.
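For readers unfamiliar with acquired equivalence designs, the sketch below outlines how such a task is commonly structured, assuming the usual antecedent/consequent layout of this paradigm. The labels (A1, X1, etc.) and the phase contents are illustrative assumptions, not the authors' actual stimuli or trial lists.

```python
# Minimal sketch of a feedback-based acquired-equivalence design of the kind
# the RAET uses. Antecedents that share a consequent during acquisition become
# "equivalent"; generalization then probes whether an association learned for
# one antecedent transfers to its untrained partner. Labels are hypothetical.
acquisition_pairs = {
    ("A1", "X1"): True,  # True = correct (rewarded) pairing during training
    ("A2", "X1"): True,  # A1 and A2 become equivalent via the shared consequent X1
    ("B1", "Y1"): True,
    ("B2", "Y1"): True,
    ("A1", "X2"): True,  # a new consequent is introduced for A1 only
    ("B1", "Y2"): True,
}

# Retrieval: previously trained pairs are presented again, without feedback.
retrieval_probes = list(acquisition_pairs)

# Generalization: untrained pairs whose correct answer must be inferred from
# the acquired equivalence (e.g., A2 should also go with X2).
generalization_probes = [("A2", "X2"), ("B2", "Y2")]
```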
Methods
Each audiovisual test applied the same four distinct auditory antecedents but differed in the complexity and semantic content of its visual consequents: cartoon faces (SoundFace), colored fish (SoundFish), or geometric shapes (SoundPolygon). The present study investigated the effect of these different visual stimuli on performance in audiovisual associative equivalence learning. Learning performance was assessed across three phases: acquisition, retrieval, and generalization. A total of 52 participants (25 females, 27 males, mean age = 25.88 ± 10.28 years) completed the tasks. Statistical analyses, including Friedman's ANOVA and Wilcoxon matched-pairs tests with Bonferroni correction, were applied to evaluate differences in performance across the tests.
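The following sketch shows, under assumptions, what the reported analysis pipeline could look like in Python: a Friedman test across the three audiovisual tests, followed by pairwise Wilcoxon matched-pairs tests with Bonferroni correction. The simulated scores and variable names are hypothetical; only the sample size and the choice of tests come from the abstract.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
n_participants = 52  # sample size reported in the abstract

# Hypothetical per-participant performance scores (e.g., proportion correct);
# these are placeholders, not the authors' data.
soundface = rng.uniform(0.7, 1.0, n_participants)
soundfish = rng.uniform(0.5, 0.9, n_participants)
soundpolygon = rng.uniform(0.5, 0.9, n_participants)

# Omnibus comparison across the three repeated measures
chi2, p_friedman = friedmanchisquare(soundface, soundfish, soundpolygon)
print(f"Friedman: chi2 = {chi2:.2f}, p = {p_friedman:.4f}")

# Pairwise follow-up tests, Bonferroni-corrected for 3 comparisons
pairs = {
    "SoundFace vs SoundFish": (soundface, soundfish),
    "SoundFace vs SoundPolygon": (soundface, soundpolygon),
    "SoundFish vs SoundPolygon": (soundfish, soundpolygon),
}
alpha_corrected = 0.05 / len(pairs)
for label, (a, b) in pairs.items():
    stat, p = wilcoxon(a, b)
    print(f"{label}: W = {stat:.1f}, p = {p:.4f}, "
          f"significant at corrected alpha = {alpha_corrected:.3f}: {p < alpha_corrected}")
```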
Results
Participants consistently performed significantly (p < 0.01) better and responded faster in the learning, retrieval, and generalization phases of the SoundFace test than in the SoundFish and SoundPolygon tests, which did not differ significantly from each other. Additionally, a semantic association task confirmed that face and fish stimuli were significantly (p < 0.01) richer in semantic content than polygons, yet only face stimuli significantly facilitated audiovisual learning outcomes.
Conclusion
These results suggest that the semantic content of visual stimuli—which could influence their verbalizability—is not sufficient on its own to enhance performance in audiovisual associative learning. Additionally, the number and variety of different features in visual stimulus sets (such as faces, fish, or polygons) may also significantly influence performance in audiovisual equivalence learning.
Journal Introduction:
Brain and Behavior is supported by other journals published by Wiley, including a number of society-owned journals. The journals listed below support Brain and Behavior and participate in the Manuscript Transfer Program by referring articles of suitable quality and offering authors the option to have their paper, with any peer review reports, automatically transferred to Brain and Behavior.
* [Acta Psychiatrica Scandinavica](https://publons.com/journal/1366/acta-psychiatrica-scandinavica)
* [Addiction Biology](https://publons.com/journal/1523/addiction-biology)
* [Aggressive Behavior](https://publons.com/journal/3611/aggressive-behavior)
* [Brain Pathology](https://publons.com/journal/1787/brain-pathology)
* [Child: Care, Health and Development](https://publons.com/journal/6111/child-care-health-and-development)
* [Criminal Behaviour and Mental Health](https://publons.com/journal/3839/criminal-behaviour-and-mental-health)
* [Depression and Anxiety](https://publons.com/journal/1528/depression-and-anxiety)
* Developmental Neurobiology
* [Developmental Science](https://publons.com/journal/1069/developmental-science)
* [European Journal of Neuroscience](https://publons.com/journal/1441/european-journal-of-neuroscience)
* [Genes, Brain and Behavior](https://publons.com/journal/1635/genes-brain-and-behavior)
* [GLIA](https://publons.com/journal/1287/glia)
* [Hippocampus](https://publons.com/journal/1056/hippocampus)
* [Human Brain Mapping](https://publons.com/journal/500/human-brain-mapping)
* [Journal for the Theory of Social Behaviour](https://publons.com/journal/7330/journal-for-the-theory-of-social-behaviour)
* [Journal of Comparative Neurology](https://publons.com/journal/1306/journal-of-comparative-neurology)
* [Journal of Neuroimaging](https://publons.com/journal/6379/journal-of-neuroimaging)
* [Journal of Neuroscience Research](https://publons.com/journal/2778/journal-of-neuroscience-research)
* [Journal of Organizational Behavior](https://publons.com/journal/1123/journal-of-organizational-behavior)
* [Journal of the Peripheral Nervous System](https://publons.com/journal/3929/journal-of-the-peripheral-nervous-system)
* [Muscle & Nerve](https://publons.com/journal/4448/muscle-and-nerve)
* [Neuropathology and Applied Neurobiology](https://publons.com/journal/2401/neuropathology-and-applied-neurobiology)