{"title":"Noisy-channel language comprehension in aphasia: A Bayesian mixture modeling approach.","authors":"Rachel Ryskin, Edward Gibson, Swathi Kiran","doi":"10.3758/s13423-025-02639-z","DOIUrl":"https://doi.org/10.3758/s13423-025-02639-z","url":null,"abstract":"<p><p>Individuals with \"agrammatic\" receptive aphasia have long been known to rely on semantic plausibility rather than syntactic cues when interpreting sentences. In contrast to early interpretations of this pattern as indicative of a deficit in syntactic knowledge, a recent proposal views agrammatic comprehension as a case of \"noisy-channel\" language processing with an increased expectation of noise in the input relative to healthy adults. Here, we investigate the nature of the noise model in aphasia and whether it is adapted to the statistics of the environment. We first replicate findings that a) healthy adults (N = 40) make inferences about the intended meaning of a sentence by weighing the prior probability of an intended sentence against the likelihood of a noise corruption and b) their estimate of the probability of noise increases when there are more errors in the input (manipulated via exposure sentences). We then extend prior findings that adults with chronic post-stroke aphasia (N = 28) and healthy age-matched adults (N = 19) similarly engage in noisy-channel inference during comprehension. We use a hierarchical latent mixture modeling approach to account for the fact that rates of guessing are likely to differ between healthy controls and individuals with aphasia and capture individual differences in the tendency to make inferences. We show that individuals with aphasia are more likely than healthy controls to draw noisy-channel inferences when interpreting semantically implausible sentences, even when group differences in the tendency to guess are accounted for. 
While healthy adults rapidly adapt their inference rates to an increase in noise in their input, whether individuals with aphasia do the same remains equivocal. Further investigation of comprehension through a noisy-channel lens holds promise for a parsimonious understanding of language processing in aphasia and may suggest potential avenues for treatment.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143060498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Taking time: Auditory statistical learning benefits from distributed exposure.","authors":"Jasper de Waard, Jan Theeuwes, Louisa Bogaerts","doi":"10.3758/s13423-024-02634-w","DOIUrl":"https://doi.org/10.3758/s13423-024-02634-w","url":null,"abstract":"<p><p>In an auditory statistical learning paradigm, listeners learn to partition a continuous stream of syllables by discovering the repeating syllable patterns that constitute the speech stream. Here, we ask whether auditory statistical learning benefits from spaced exposure compared with massed exposure. In a longitudinal online study on Prolific, we exposed 100 participants to the regularities in a spaced way (i.e., with exposure blocks spread out over 3 days) and another 100 in a massed way (i.e., with all exposure blocks lumped together on a single day). In the exposure phase, participants listened to streams composed of pairs while responding to a target syllable. The spaced and massed groups exhibited equal learning during exposure, as indicated by a comparable response-time advantage for predictable target syllables. However, in terms of resulting long-term knowledge, we observed a benefit from spaced exposure. Following a 2-week delay period, we tested participants' knowledge of the pairs in a forced-choice test. While both groups performed above chance, the spaced group had higher accuracy. 
Our findings speak to the importance of the timing of exposure to structured input, carry implications for statistical learning outside the laboratory (e.g., in language development), and imply that current investigations of auditory statistical learning likely underestimate human statistical learning abilities.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143010497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The impact of relative word-length on effects of non-adjacent word transpositions.","authors":"Yun Wen, Jonathan Grainger","doi":"10.3758/s13423-024-02637-7","DOIUrl":"https://doi.org/10.3758/s13423-024-02637-7","url":null,"abstract":"<p><p>A recent study (Wen et al., Journal of Experimental Psychology: Human Perception and Performance, 50: 934-941, 2024) found no influence of relative word-length on transposed-word effects. However, following the tradition of prior research on effects of transposed words during sentence reading, the transposed words in that study were adjacent words (words at positions 2 and 3 or 3 and 4 in five-word sequences). We surmised that the absence of an influence of relative word-length might be due to word identification being too precise when the two words are located close to the eye-fixation location, hence cancelling the impact of more approximate indices of word identity such as word length. We therefore hypothesized that relative word-length might impact transposed-word effects when the transposition involves non-adjacent words. The present study put this hypothesis to the test and found that relative word-length does modify the size of transposed-word effects with non-adjacent transpositions. Transposed-word effects are greater when the transposed words have the same length. 
Furthermore, a cross-study analysis confirmed that transposed-word effects are greater for adjacent than for non-adjacent transpositions.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143010500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Age-related differences in information, but not task control in the color-word Stroop task.","authors":"Eldad Keha, Daniela Aisenberg-Shafran, Shachar Hochman, Eyal Kalanthroff","doi":"10.3758/s13423-024-02631-z","DOIUrl":"https://doi.org/10.3758/s13423-024-02631-z","url":null,"abstract":"<p><p>Older adults have been found to struggle with tasks that require cognitive control. One task that measures the ability to exert cognitive control is the color-word Stroop task. Almost all studies that tested cognitive control in older adults using the Stroop task have focused on one type of control - Information control. In the present work, we ask whether older adults also show a deficit in another type of cognitive control - Task control. To that end, we tested older and younger adults by isolating and measuring two types of conflict - information conflict and task conflict. Information conflict was measured by the difference between color identification of incongruent color words and color identification of neutral words, while task conflict was measured by the difference between color identification of neutral words and color identification of neutral symbols and by the reverse facilitation effect. We tested how the behavioral markers of these two types of conflict are affected under low task control conditions, which is essential for measuring task conflict behaviorally. Older adults demonstrated a deficit in information control by showing a larger information conflict marker, but not in task control markers, as no differences in task conflict were found between younger and older adults. These findings support previous studies arguing against theories that link the larger Stroop interference in older adults to a generic slowdown or a generic inhibitory failure. 
We discuss the relevance of the results and future research directions in line with other Stroop studies that tested age-related differences in different control mechanisms.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143010493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distinct detection and discrimination sensitivities in visual processing of real versus unreal optic flow.","authors":"Li Li, Xuechun Shen, Shuguang Kuai","doi":"10.3758/s13423-024-02616-y","DOIUrl":"https://doi.org/10.3758/s13423-024-02616-y","url":null,"abstract":"<p><p>We examined the intricate mechanisms underlying visual processing of complex motion stimuli by measuring the detection sensitivity to contraction and expansion patterns and the discrimination sensitivity to the location of the center of motion (CoM) in various real and unreal optic flow stimuli. We conducted two experiments (N = 20 each) and compared responses to both \"real\" optic flow stimuli containing information about self-movement in a three-dimensional scene and \"unreal\" optic flow stimuli lacking such information. We found that detection sensitivity to contraction surpassed that to expansion patterns for unreal optic flow stimuli, whereas this trend was reversed for real optic flow stimuli. Furthermore, while discrimination sensitivity to the CoM location was not affected by stimulus duration for unreal optic flow stimuli, it showed a significant improvement when stimulus duration increased from 100 to 400 ms for real optic flow stimuli. These findings provide compelling evidence that the visual system employs distinct processing approaches for real versus unreal optic flow even when they are perfectly matched for two-dimensional global features and local motion signals. 
These differences reveal influences of self-movement in natural environments, enabling the visual system to uniquely process stimuli with significant survival implications.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142984613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The cost of perspective switching: Constraints on simultaneous activation.","authors":"Dorit Segal","doi":"10.3758/s13423-024-02633-x","DOIUrl":"https://doi.org/10.3758/s13423-024-02633-x","url":null,"abstract":"<p><p>Visual perspective taking often involves transitioning between perspectives, yet the cognitive mechanisms underlying this process remain unclear. The current study draws on insights from task- and language-switching research to address this gap. In Experiment 1, 79 participants judged the perspective of an avatar positioned in various locations, observing either the rectangular or the square side of a rectangular cube hanging from the ceiling. The avatar's perspective was either consistent or inconsistent with the participant's, and its computation sometimes required mental transformation. The task included both single-position blocks, in which the avatar's location remained fixed across all trials, and mixed-position blocks, in which the avatar's position changed across trials. Performance was compared across trial types and positions. In Experiment 2, 126 participants completed a similar task administered online, with more trials, and performance was compared at various points within the response time distribution (vincentile analysis). Results revealed a robust switching cost. However, mixing costs, which reflect the ability to maintain multiple task sets active in working memory, were absent, even in slower response times. Additionally, responses to the avatar's position varied as a function of consistency with the participants' viewpoint and the angular disparity between them. 
These findings suggest that perspective switching is costly, people cannot activate multiple perspectives simultaneously, and the computation of other people's visual perspectives varies with cognitive demands.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142979863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do we feel colours? A systematic review of 128 years of psychological research linking colours and emotions.","authors":"Domicele Jonauskaite, Christine Mohr","doi":"10.3758/s13423-024-02615-z","DOIUrl":"https://doi.org/10.3758/s13423-024-02615-z","url":null,"abstract":"<p><p>Colour is an integral part of natural and constructed environments. For many, it also has an aesthetic appeal, with some colours being more pleasant than others. Moreover, humans seem to systematically and reliably associate colours with emotions, such as yellow with joy, black with sadness, light colours with positive and dark colours with negative emotions. To systematise such colour-emotion correspondences, we identified 132 relevant peer-reviewed articles published in English between 1895 and 2022. These articles covered a total of 42,266 participants from 64 different countries. We found that all basic colour categories had systematic correspondences with affective dimensions (valence, arousal, power) as well as with discrete affective terms (e.g., love, happy, sad, bored). Most correspondences were many-to-many, with systematic effects driven by lightness, saturation, and hue ('colour temperature'). More specifically, (i) LIGHT and DARK colours were associated with positive and negative emotions, respectively; (ii) RED with empowering, high arousal positive and negative emotions; (iii) YELLOW and ORANGE with positive, high arousal emotions; (iv) BLUE, GREEN, GREEN-BLUE, and WHITE with positive, low arousal emotions; (v) PINK with positive emotions; (vi) PURPLE with empowering emotions; (vii) GREY with negative, low arousal emotions; and (viii) BLACK with negative, high arousal emotions. Shared communication needs might explain these consistencies across studies, making colour an excellent medium for communication of emotion. 
As most colour-emotion correspondences were tested on an abstract level (i.e., associations), it remains to be seen whether such correspondences translate to the impact of colour on experienced emotions in specific contexts.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142979847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Increased attention towards progress information near a goal state.","authors":"Sean Devine, Y Doug Dong, Martin Sellier Silva, Mathieu Roy, A Ross Otto","doi":"10.3758/s13423-024-02636-8","DOIUrl":"https://doi.org/10.3758/s13423-024-02636-8","url":null,"abstract":"<p><p>A growing body of evidence across psychology suggests that (cognitive) effort exertion increases in proximity to a goal state. For instance, previous work has shown that participants respond more quickly, but not less accurately, when they near a goal-as indicated by a filling progress bar. Yet it remains unclear when, over the course of a cognitively demanding task, people monitor progress information: Do they continuously monitor their goal progress over the course of a task, or attend more frequently to it as they near their goal? To answer this question, we used eye-tracking to examine trial-by-trial changes in progress monitoring as participants completed blocks of an attentionally demanding oddball task. Replicating past work, we found that participants increased cognitive effort exertion near a goal, as evinced by an increase in correct responses per second. More interestingly, we found that the rate at which participants attended to goal progress information-operationalized here as the frequency of gazes towards a progress bar-increased steeply near a goal state. In other words, participants extracted information from the progress bar at a higher rate when goals were proximal (versus distal). In an exploratory analysis of tonic pupil diameter, we also found that tonic pupil size increased sharply as participants approached a goal state, mirroring the pattern of gaze. 
These results support the view that people attend to progress information more as they approach a goal.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142979855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Product, not process: Metacognitive monitoring of visual performance during sustained attention.","authors":"Cheongil Kim, Sang Chul Chong","doi":"10.3758/s13423-024-02635-9","DOIUrl":"https://doi.org/10.3758/s13423-024-02635-9","url":null,"abstract":"<p><p>The performance of the human visual system exhibits moment-to-moment fluctuations influenced by multiple neurocognitive factors. To deal with this instability of the visual system, introspective awareness of current visual performance (metacognitive monitoring) may be crucial. In this study, we investigate whether and how people can monitor their own visual performance during sustained attention by adopting confidence judgments as indicators of metacognitive monitoring - assuming that if participants can monitor visual performance, confidence judgments will accurately track performance fluctuations. In two experiments (N = 40), we found that participants were able to monitor fluctuations in visual performance during sustained attention. Importantly, metacognitive monitoring largely relied on the quality of target perception, a product of visual processing (\"I lack confidence in my performance because I only caught a glimpse of the target\"), rather than the states of the visual system during visual processing (\"I lack confidence because I was not focusing on the task\").</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142953964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cracking arbitrariness: A data-driven study of auditory iconicity in spoken English.","authors":"Andrea Gregor de Varda, Marco Marelli","doi":"10.3758/s13423-024-02630-0","DOIUrl":"https://doi.org/10.3758/s13423-024-02630-0","url":null,"abstract":"<p><p>Auditory iconic words display a phonological profile that imitates their referents' sounds. Traditionally, those words are thought to constitute a minor portion of the auditory lexicon. In this article, we challenge this assumption by assessing the pervasiveness of onomatopoeia in the English auditory vocabulary through a novel data-driven procedure. We embed spoken words and natural sounds into a shared auditory space through (a) a short-time Fourier transform, (b) a convolutional neural network trained to classify sounds, and (c) a network trained on speech recognition. Then, we employ the obtained vector representations to measure their objective auditory resemblance. These similarity indexes show that imitation is not limited to some circumscribed semantic categories, but instead can be considered as a widespread mechanism underlying the structure of the English auditory vocabulary. We finally empirically validate our similarity indexes as measures of iconicity against human judgments.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142953962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}