{"title":"Representational horizon and visual space orientation: An investigation into the role of visual contextual cues on spatial mislocalisations.","authors":"Nuno Alexandre De Sá Teixeira, Rodrigo Ribeiro Freitas, Samuel Silva, Tiago Taliscas, Pedro Mateus, Afonso Gomes, João Lima","doi":"10.3758/s13414-023-02783-5","DOIUrl":"10.3758/s13414-023-02783-5","url":null,"abstract":"<p><p>The perceived offset position of a moving target has been found to be displaced forward, in the direction of motion (Representational Momentum; RM), downward, in the direction of gravity (Representational Gravity; RG), and, recently, further displaced along the horizon implied by the visual context (Representational Horizon; RH). The latter, while still underexplored, offers the prospect to clarify the role of visual contextual cues in spatial orientation and in the perception of dynamic events. As such, the present work sets forth to ascertain the robustness of Representational Horizon across varying types of visual contexts, particularly between interior and exterior scenes, and to clarify to what degree it reflects a perceptual or response phenomenon. To that end, participants were shown targets, moving along one out of several possible trajectories, overlaid on a randomly chosen background depicting either an interior or exterior scene rotated -22.5°, 0°, or 22.5° in relation to the actual vertical. Upon the vanishing of the target, participants were required to indicate its last seen location with a computer mouse. For half the participants, the background vanished with the target, while for the remaining it was kept visible until a response was provided. Spatial localisations were subjected to a discrete Fourier decomposition procedure to obtain independent estimates of RM, RG, and RH. Outcomes showed that RH's direction was biased towards the horizon implied by the visual context, but solely for exterior scenes, and irrespective of its presence or absence during the spatial localisation response, supporting its perceptual/representational nature.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11093852/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41152107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When remembering less is more: Unfiltered items are associated with reduced memory fidelity in visual short-term memory.","authors":"Young Seon Shin, Summer L Sheremata","doi":"10.3758/s13414-024-02891-w","DOIUrl":"10.3758/s13414-024-02891-w","url":null,"abstract":"<p><p>Visual short-term memory (VSTM), the ability to store information no longer visible, is essential for human behavior. VSTM limits vary across the population and are correlated with overall cognitive ability. It has been proposed that low-memory individuals are unable to select only relevant items for storage and that these limitations are greatest when memory demands are high. However, it is unknown whether these effects simply reflect task difficulty and whether they impact the quality of memory representations. Here we varied the number of items presented, or set size, to investigate the effect of memory demands on the performance of visual short-term memory across low- and high-memory groups. Group differences emerged as set size exceeded memory limits, even when task difficulty was controlled. In a change-detection task, the low-memory group performed more poorly when set size exceeded their memory limits. We then predicted that, for low-memory individuals, encoding items beyond measured memory limits would result in degraded fidelity of memory representations. A continuous report task confirmed that low-, but not high-, memory individuals demonstrated decreased memory fidelity as set size exceeded measured memory limits. The current study demonstrates that items held in VSTM are stored distinctly across groups and task demands. These results link the ability to maintain high-quality representations with overall cognitive ability.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140860368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A spatial version of the Stroop task for examining proactive and reactive control independently from non-conflict processes","authors":"Giacomo Spinelli, Stephen J. Lupker","doi":"10.3758/s13414-024-02892-9","DOIUrl":"https://doi.org/10.3758/s13414-024-02892-9","url":null,"abstract":"<p>Conflict-induced control refers to humans’ ability to regulate attention in the processing of target information (e.g., the color of a word in the color-word Stroop task) based on experience with conflict created by distracting information (e.g., an incongruent color word), and to do so either in a proactive (preparatory) or a reactive (stimulus-driven) fashion. Interest in conflict-induced control has grown recently, as has the awareness that effects attributed to those processes might be affected by conflict-unrelated processes (e.g., the learning of stimulus-response associations). This awareness has resulted in the recommendation to move away from traditional interference paradigms with small stimulus/response sets and towards paradigms with larger sets (at least four targets, distractors, and responses), paradigms that allow better control of non-conflict processes. Using larger sets, however, is not always feasible. Doing so in the Stroop task, for example, would require either multiple arbitrary responses that are difficult for participants to learn (e.g., manual responses to colors) or non-arbitrary responses that can be difficult for researchers to collect (e.g., vocal responses in online experiments). Here, we present a spatial version of the Stroop task that solves many of those problems. In this task, participants respond to one of six directions indicated by an arrow, each requiring a specific, non-arbitrary manual response, while ignoring the location where the arrow is displayed. We illustrate the usefulness of this task by showing the results of two experiments in which evidence for proactive and reactive control was obtained while controlling for the impact of non-conflict processes.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140834076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint contributions of preview and task instructions on visual search strategy selection","authors":"Tianyu Zhang, Jessica L. Irons, Heather A. Hansen, Andrew B. Leber","doi":"10.3758/s13414-024-02870-1","DOIUrl":"https://doi.org/10.3758/s13414-024-02870-1","url":null,"abstract":"<p>People tend to employ suboptimal attention control strategies during visual search. Here we question why people are suboptimal, specifically investigating how knowledge of the optimal strategies and the time available to apply such strategies affect strategy use. We used the Adaptive Choice Visual Search (ACVS), a task designed to assess attentional control optimality. We used explicit strategy instructions to manipulate explicit strategy knowledge, and we used display previews to manipulate time to apply the strategies. In the first two experiments, the strategy instructions increased optimality. However, the preview manipulation did not significantly boost optimality for participants who did not receive strategy instruction. Finally, in Experiments 3A and 3B, we jointly manipulated preview and instruction with a larger sample size. Preview and instruction both produced significant main effects; furthermore, they interacted significantly, such that the beneficial effect of instructions emerged with greater preview time. Taken together, these results have important implications for understanding the strategic use of attentional control. Individuals with explicit knowledge of the optimal strategy are more likely to exploit relevant information in their visual environment, but only to the extent that they have the time to do so.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140804384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intra-individual consistency of vestibular perceptual thresholds","authors":"Torin K. Clark, Raquel C. Galvan-Garza, Daniel M. Merfeld","doi":"10.3758/s13414-024-02886-7","DOIUrl":"https://doi.org/10.3758/s13414-024-02886-7","url":null,"abstract":"<p>Vestibular perceptual thresholds quantify sensory noise associated with reliable perception of small self-motions. Previous studies have identified substantial variation between even healthy individuals’ thresholds. However, it remains unclear if or how an individual’s vestibular threshold varies over repeated measures across various time scales (repeated measurements on the same day, across days, weeks, or months). Here, we assessed yaw rotation and roll tilt thresholds in four individuals and compared this intra-individual variability to the inter-individual variability of thresholds measured across a large age-matched cohort in which each individual was measured only once. For analysis, we performed simulations of threshold measurements where there was no underlying variability (or it was manipulated) to compare to that observed empirically. We found remarkable consistency in vestibular thresholds within individuals, for both yaw rotation and roll tilt; this contrasts with substantial inter-individual differences. Thus, we conclude that vestibular perceptual thresholds are an innate characteristic; this validates pooling measures across sessions and suggests that thresholds could potentially serve as a stable clinical diagnostic and/or biomarker.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140804251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Numerical values modulate size perception","authors":"Aviv Avitan, Dror Marom, Avishai Henik","doi":"10.3758/s13414-024-02875-w","DOIUrl":"https://doi.org/10.3758/s13414-024-02875-w","url":null,"abstract":"<p>The link between various codes of magnitude and their interactions has been studied extensively for many years. In the current study, we examined how the physical and numerical magnitudes of digits are mapped into a combined mental representation. In two psychophysical experiments, participants reported the physically larger digit among two digits. In the identical condition, participants compared digits of an identical value (e.g., “2” and “2”); in the different condition, participants compared digits of distinct numerical values (i.e., “2” and “5”). As anticipated, participants overestimated the physical size of a numerically larger digit and underestimated the physical size of a numerically smaller digit. Our results extend the shared-representation account of physical and numerical magnitudes.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140624507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantifying task-related gaze","authors":"Kerri Walter, Michelle Freeman, Peter Bex","doi":"10.3758/s13414-024-02883-w","DOIUrl":"https://doi.org/10.3758/s13414-024-02883-w","url":null,"abstract":"<p>Competing theories attempt to explain what guides eye movements when exploring natural scenes: bottom-up image salience and top-down semantic salience. In one study, we apply language-based analyses to quantify the well-known observation that task influences gaze in natural scenes. Subjects viewed ten scenes as if they were performing one of two tasks. We found that the semantic similarity between the task and the labels of objects in the scenes captured the task-dependence of gaze (t(39) = 13.083; p < 0.001). In another study, we examined whether image salience or semantic salience better predicts gaze during a search task, and if viewing strategies are affected by searching for targets of high or low semantic relevance to the scene. Subjects searched 100 scenes for a high- or low-relevance object. We found that image salience becomes a worse predictor of gaze across successive fixations, while semantic salience remains a consistent predictor (X<sup>2</sup>(1, N=40) = 75.148, p < .001). Furthermore, we found that semantic salience decreased as object relevance decreased (t(39) = 2.304; p = .027). These results suggest that semantic salience is a useful predictor of gaze during task-related scene viewing, and that even in target-absent trials, gaze is modulated by the relevance of a search target to the scene in which it might be located.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140584690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tactile cues are more intrinsically linked to motor timing than visual cues in visual-tactile sensorimotor synchronization.","authors":"Michelle K Huntley, An Nguyen, Matthew A Albrecht, Welber Marinovic","doi":"10.3758/s13414-023-02828-9","DOIUrl":"10.3758/s13414-023-02828-9","url":null,"abstract":"<p><p>Many tasks, such as driving a car, require precise synchronization with external sensory stimuli. This study investigates whether combined visual-tactile information provides additional benefits to movement synchrony over separate visual and tactile stimuli and explores the relationship with the temporal binding window for multisensory integration. In Experiment 1, participants completed a sensorimotor synchronization task to examine movement variability and a simultaneity judgment task to measure the temporal binding window. Results showed similar synchronization variability between visual-tactile and tactile-only stimuli, but variability for both was significantly lower than for visual-only stimuli. In Experiment 2, participants completed a visual-tactile sensorimotor synchronization task with cross-modal stimuli presented inside (stimulus onset asynchrony 80 ms) and outside (stimulus onset asynchrony 400 ms) the temporal binding window to examine temporal accuracy of movement execution. Participants synchronized their movement with the first stimulus in the cross-modal pair, either the visual or tactile stimulus. Results showed significantly greater temporal accuracy when only one stimulus was presented inside the window and the second stimulus was outside the window than when both stimuli were presented inside the window, with movement execution being more accurate when attending to the tactile stimulus. Overall, these findings indicate there may be a modality-specific benefit to sensorimotor synchronization performance, such that tactile cues are weighted more strongly than visual cues because tactile information is more intrinsically linked to motor timing than visual information. Further, our findings indicate that the visual-tactile temporal binding window is related to the temporal accuracy of movement execution.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11062975/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139542594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Phasic alerting in visual search tasks.","authors":"Niklas Dietze, Christian H Poth","doi":"10.3758/s13414-024-02844-3","DOIUrl":"10.3758/s13414-024-02844-3","url":null,"abstract":"<p><p>Many tasks require one to search for and find important objects in the visual environment. Visual search is strongly supported by cues indicating target objects to mechanisms of selective attention, which enable one to prioritise targets and ignore distractor objects. Besides selective attention, a major influence on performance across cognitive tasks is phasic alertness, a temporary increase of arousal induced by warning stimuli (alerting cues). Alerting cues provide no specific information on whose basis selective attention could be deployed, but have nevertheless been found to speed up perception and simple actions. It is still unclear, however, how alerting affects visual search. Therefore, in the present study, participants performed a visual search task with and without preceding visual alerting cues. Participants had to report the orientation of a target among several distractors. The target saliency was low in Experiment 1 and high in Experiment 2. In both experiments, we found that visual search was faster when a visual alerting cue was presented before the target display. Performance benefits occurred irrespective of how many distractors had been presented along with the target. Taken together, the findings reveal that visual alerting supports visual search independently of the complexity of the search process and the demands for selective attention.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11062964/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139492418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-task interference: Bottleneck constraint or capacity sharing? Evidence from automatic and controlled processes.","authors":"Yanwen Wu, Qiangqiang Wang","doi":"10.3758/s13414-024-02854-1","DOIUrl":"10.3758/s13414-024-02854-1","url":null,"abstract":"<p><p>This study investigated whether the interference between two tasks in dual-task processing stems from bottleneck limitations or insufficient cognitive resources due to resource sharing. Experiment 1 used tone discrimination as Task 1 and word or pseudoword classification as Task 2 to evaluate the effect of automatic versus controlled processing on dual-task interference under different SOA conditions. Experiment 2 reversed the task order. The results showed that dual-task interference persisted regardless of task type or order. Neither experiment found evidence that automatic tasks could eliminate interference. This suggests that resource limitations, rather than bottlenecks, may better explain dual-task costs. Specifically, when tasks compete for limited resources, the processing efficiency of both tasks is significantly reduced. Future research should explore how cognitive resources are dynamically allocated between tasks to better account for dual-task interference effects.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139991894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}