{"title":"When remembering less is more: Unfiltered items are associated with reduced memory fidelity in visual short-term memory","authors":"Young Seon Shin, Summer L. Sheremata","doi":"10.3758/s13414-024-02891-w","DOIUrl":"10.3758/s13414-024-02891-w","url":null,"abstract":"<div><p>Visual short-term memory (VSTM), the ability to store information no longer visible, is essential for human behavior. VSTM limits vary across the population and are correlated with overall cognitive ability. It has been proposed that low-memory individuals are unable to select only relevant items for storage and that these limitations are greatest when memory demands are high. However, it is unknown whether these effects simply reflect task difficulty and whether they impact the quality of memory representations. Here we varied the number of items presented, or set size, to investigate the effect of memory demands on the performance of visual short-term memory across low- and high-memory groups. Group differences emerged as set size exceeded memory limits, even when task difficulty was controlled. In a change-detection task, the low-memory group performed more poorly when set size exceeded their memory limits. We then predicted that low-memory individuals encoding items beyond measured memory limits would result in the degraded fidelity of memory representations. A continuous report task confirmed that low, but not high, memory individuals demonstrated decreased memory fidelity as set size exceeded measured memory limits. The current study demonstrates that items held in VSTM are stored distinctly across groups and task demands. These results link the ability to maintain high quality representations with overall cognitive ability.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140860368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intra-individual consistency of vestibular perceptual thresholds","authors":"Torin K. Clark, Raquel C. Galvan-Garza, Daniel M. Merfeld","doi":"10.3758/s13414-024-02886-7","DOIUrl":"10.3758/s13414-024-02886-7","url":null,"abstract":"<div><p>Vestibular perceptual thresholds quantify sensory noise associated with reliable perception of small self-motions. Previous studies have identified substantial variation between even healthy individuals’ thresholds. However, it remains unclear if or how an individual’s vestibular threshold varies over repeated measures across various time scales (repeated measurements on the same day, across days, weeks, or months). Here, we assessed yaw rotation and roll tilt thresholds in four individuals and compared this intra-individual variability to inter-individual variability of thresholds measured across a large age-matched cohort each measured only once. For analysis, we performed simulations of threshold measurements where there was no underlying variability (or it was manipulated) to compare to that observed empirically. We found remarkable consistency in vestibular thresholds within individuals, for both yaw rotation and roll tilt; this contrasts with substantial inter-individual differences. Thus, we conclude that vestibular perceptual thresholds are an innate characteristic, which validates pooling measures across sessions and potentially serves as a stable clinical diagnostic and/or biomarker.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140804251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint contributions of preview and task instructions on visual search strategy selection","authors":"Tianyu Zhang, Jessica L. Irons, Heather A. Hansen, Andrew B. Leber","doi":"10.3758/s13414-024-02870-1","DOIUrl":"10.3758/s13414-024-02870-1","url":null,"abstract":"<div><p>People tend to employ suboptimal attention control strategies during visual search. Here we question why people are suboptimal, specifically investigating how knowledge of the optimal strategies and the time available to apply such strategies affect strategy use. We used the Adaptive Choice Visual Search (ACVS), a task designed to assess attentional control optimality. We used explicit strategy instructions to manipulate explicit strategy knowledge, and we used display previews to manipulate time to apply the strategies. In the first two experiments, the strategy instructions increased optimality. However, the preview manipulation did not significantly boost optimality for participants who did not receive strategy instruction. Finally, in Experiments 3A and 3B, we jointly manipulated preview and instruction with a larger sample size. Preview and instruction both produced significant main effects; furthermore, they interacted significantly, such that the beneficial effect of instructions emerged with greater preview time. Taken together, these results have important implications for understanding the strategic use of attentional control. Individuals with explicit knowledge of the optimal strategy are more likely to exploit relevant information in their visual environment, but only to the extent that they have the time to do so.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02870-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140804384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Numerical values modulate size perception","authors":"Aviv Avitan, Dror Marom, Avishai Henik","doi":"10.3758/s13414-024-02875-w","DOIUrl":"10.3758/s13414-024-02875-w","url":null,"abstract":"<div><p>The link between various codes of magnitude and their interactions has been studied extensively for many years. In the current study, we examined how the physical and numerical magnitudes of digits are mapped into a combined mental representation. In two psychophysical experiments, participants reported the physically larger digit among two digits. In the identical condition, participants compared digits of an identical value (e.g., “2” and “2”); in the different condition, participants compared digits of distinct numerical values (i.e., “2” and “5”). As anticipated, participants overestimated the physical size of a numerically larger digit and underestimated the physical size of a numerically smaller digit. Our results extend the shared-representation account of physical and numerical magnitudes.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02875-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140624507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Top-down suppression of negative features applies flexibly contingent on visual search goals","authors":"Marlene Forstinger, Ulrich Ansorge","doi":"10.3758/s13414-024-02882-x","DOIUrl":"10.3758/s13414-024-02882-x","url":null,"abstract":"<div><p>Visually searching for a frequently changing target is assumed to be guided by flexible working memory representations of specific features necessary to discriminate targets from distractors. Here, we tested if these representations allow selective suppression or always facilitate perception based on search goals. Participants searched for a target (i.e., a horizontal bar) defined by one of two different negative features (e.g., not red vs. not blue; Experiment 1) or a positive (e.g., blue) versus a negative feature (Experiments 2 and 3). A prompt informed participants about the target identity, and search tasks alternated or repeated randomly. We used different peripheral singleton cues presented at the same (valid condition) or a different (invalid condition) position as the target to examine if negative features were suppressed depending on current instructions. In all experiments, cues with negative features elicited slower search times in valid than invalid trials, indicating suppression. Additionally, suppression of negative color cues tended to be selective when participants searched for the target by different negative features but generalized to negative and non-matching cue colors when switching between positive and negative search criteria was required. Nevertheless, when the same color – red – was used in positive and negative search tasks, red cues captured attention or were suppressed depending on whether red was positive or negative (Experiment 3). Our results suggest that working memory representations flexibly trigger suppression or attentional capture contingent on a task-relevant feature’s functional meaning during visual search, but top-down suppression operates at different levels of specificity depending on current task demands.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11093874/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140873538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantifying task-related gaze","authors":"Kerri Walter, Michelle Freeman, Peter Bex","doi":"10.3758/s13414-024-02883-w","DOIUrl":"10.3758/s13414-024-02883-w","url":null,"abstract":"<div><p>Competing theories attempt to explain what guides eye movements when exploring natural scenes: bottom-up image salience and top-down semantic salience. In one study, we apply language-based analyses to quantify the well-known observation that task influences gaze in natural scenes. Subjects viewed ten scenes as if they were performing one of two tasks. We found that the semantic similarity between the task and the labels of objects in the scenes captured the task-dependence of gaze (t(39) = 13.083; p < 0.001). In another study, we examined whether image salience or semantic salience better predicts gaze during a search task, and if viewing strategies are affected by searching for targets of high or low semantic relevance to the scene. Subjects searched 100 scenes for a high- or low-relevance object. We found that image salience becomes a worse predictor of gaze across successive fixations, while semantic salience remains a consistent predictor (X<sup>2</sup>(1, N=40) = 75.148, p < .001). Furthermore, we found that semantic salience decreased as object relevance decreased (t(39) = 2.304; p = .027). These results suggest that semantic salience is a useful predictor of gaze during task-related scene viewing, and that even in target-absent trials, gaze is modulated by the relevance of a search target to the scene in which it might be located.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02883-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140584690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Audiovisual integration of rhythm in musicians and dancers","authors":"Tram Nguyen, Rebekka Lagacé-Cusiac, J. Celina Everling, Molly J. Henry, Jessica A. Grahn","doi":"10.3758/s13414-024-02874-x","DOIUrl":"10.3758/s13414-024-02874-x","url":null,"abstract":"<div><p>Music training is associated with better beat processing in the auditory modality. However, it is unknown how rhythmic training that emphasizes visual rhythms, such as dance training, might affect beat processing, nor whether training effects in general are modality specific. Here we examined how music and dance training interacted with modality during audiovisual integration and synchronization to auditory and visual isochronous sequences. In two experiments, musicians, dancers, and controls completed an audiovisual integration task and an audiovisual target-distractor synchronization task using dynamic visual stimuli (a bouncing figure). The groups performed similarly on the audiovisual integration tasks (Experiments 1 and 2). However, in the finger-tapping synchronization task (Experiment 1), musicians were more influenced by auditory distractors when synchronizing to visual sequences, while dancers were more influenced by visual distractors when synchronizing to auditory sequences. When participants synchronized with whole-body movements instead of finger-tapping (Experiment 2), all groups were more influenced by the visual distractor than the auditory distractor. Taken together, this study highlights how training is associated with audiovisual processing, and how different types of visual rhythmic stimuli and different movements alter beat perception and production outcome measures. Implications for the modality appropriateness hypothesis are discussed.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140337806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visuospatial cueing differences as a function of autistic traits","authors":"Min Quan Heo, Michael C. W. English, Murray T. Maybery, Troy A. W. Visser","doi":"10.3758/s13414-024-02871-0","DOIUrl":"10.3758/s13414-024-02871-0","url":null,"abstract":"<div><p>Atypical orienting of visuospatial attention in autistic individuals or individuals with a high level of autistic-like traits (ALTs) has been well documented and viewed as a core feature underlying the development of autism. However, there has been limited testing of three alternative theoretical positions advanced to explain atypical orienting – difficulty in disengagement, cue indifference, and delay in orienting. Moreover, research commonly has not separated facilitation (reaction time difference between neutral and valid cues) and cost effects (reaction time difference between invalid and neutral cues) in orienting tasks. We addressed these limitations in two experiments that compared groups selected for Low- and High-ALT levels on exogenous and endogenous versions of the Posner cueing paradigm. Experiment 1 showed that High-ALT participants exhibited a significantly reduced cost effect compared to Low-ALT participants in the endogenous cueing task, although the overall orienting effect remained small. In Experiment 2, we increased task difficulty of the endogenous task to augment cueing effects. Results were comparable to Experiment 1 regarding the finding of a reduced cost effect for High-ALT participants on the endogenous cueing task and additionally demonstrated a reduced facilitation effect in High-ALT participants on the same task. No ALT group differences were observed on an exogenous cueing task included in Experiment 2. These findings suggest atypical orienting in High-ALT individuals may be attributable to general cue indifference, which implicates differences in top-down attentional processes between Low- and High-ALT individuals. We discuss how indifference to endogenous cues may contribute to social cognitive differences in autism.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11093807/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140337807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Manipulating the reliability of target-color information modulates value-driven attentional capture","authors":"Nicole B. Massa, Nick Crotty, Ifat Levy, Michael A. Grubb","doi":"10.3758/s13414-024-02878-7","DOIUrl":"10.3758/s13414-024-02878-7","url":null,"abstract":"<div><p>Previously rewarded stimuli slow response times (RTs) during visual search, despite being physically non-salient and no longer task-relevant or rewarding. Such value-driven attentional capture (VDAC) has been measured in a training-test paradigm. In the training phase, the search target is rendered in one of two colors (one predicting high reward and the other low reward). In this study, we modified this traditional training phase to include pre-cues that signaled reliable or unreliable information about the trial-to-trial color of the training phase search target. Reliable pre-cues indicated the upcoming target color with certainty, whereas unreliable pre-cues indicated the target was equally likely to be one of two distinct colors. Thus reliable and unreliable pre-cues provided certain and uncertain information, respectively, about the magnitude of the upcoming reward. We then tested for VDAC in a traditional test phase. We found that unreliably pre-cued distractors slowed RTs and drew more initial eye movements during search for the test-phase target, relative to reliably pre-cued distractors, thus providing novel evidence for an influence of information reliability on attentional capture. That said, our experimental manipulation also eliminated <i>value-dependency</i> (i.e.<i>,</i> slowed RTs when a high-reward-predicting distractor was present relative to a low-reward-predicting distractor) for both kinds of distractors. Taken together, these results suggest that target-color uncertainty, rather than reward magnitude, played a critical role in modulating the allocation of value-driven attention in this study.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11093855/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140307934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Processing difficulty while reading words with neighbors is not due to increased foveal load: Evidence from eye movements","authors":"Rebecca L. Johnson, Timothy J. Slattery","doi":"10.3758/s13414-024-02880-z","DOIUrl":"10.3758/s13414-024-02880-z","url":null,"abstract":"<div><p>Words with high orthographic relatedness are termed “word neighbors” (<i>angle/angel</i>; <i>birch/birth</i>). Activation-based models of word recognition assume that lateral inhibition occurs between words and their activated neighbors. However, studies of eye movements during reading have not found inhibitory effects in early measures assumed to reflect lexical access (e.g., gaze duration). Instead, inhibition in eye-movement studies has been found in later measures of processing (e.g., total time, regressions in). We conducted an eye-movement boundary change study (Rayner, <i>Cognitive Psychology, 7</i>(1), 65-81, 1975) that manipulated the parafoveal preview of the word following the neighbor word (word N+1). In this way, we explored whether the late inhibitory effects seen with transposed letter words and words with higher-frequency neighbors result from reduced parafoveal preview due to increased foveal load and/or interference during late stages of lexical processing (the L2 stage within the E-Z Reader framework). For word N+1, while there were clear preview effects, there was not an effect of the neighborhood status of word N, nor a significant interaction. This suggests that the late inhibitory effects of earlier eye-movement studies are driven by misidentification of neighbor words rather than being due to increased foveal load.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140295430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}