{"title":"Task-Irrelevant Features in Visual Working Memory Influence Covert Attention: Evidence from a Partial Report Task.","authors":"Rebecca M Foerster, Werner X Schneider","doi":"10.3390/vision3030042","DOIUrl":"https://doi.org/10.3390/vision3030042","url":null,"abstract":"<p><p>Selecting a target based on a representation in visual working memory (VWM) affords biasing covert attention towards objects with memory-matching features. Recently, we showed that even task-irrelevant features of a VWM template bias attention. Specifically, when participants had to saccade to a cued shape, distractors sharing the cue's search-irrelevant color captured the eyes. While a saccade always aims at one target location, multiple locations can be attended covertly. Here, we investigated whether covert attention is captured similarly to the eyes. In our partial report task, each trial started with a shape-defined search cue, followed by a fixation cross. Next, two colored shapes, each including a letter, appeared left and right of fixation, followed by masks. The letter inside the shape matching the preceding cue had to be reported. In Experiment 1, either the target, the distractor, both, or neither object matched the cue's irrelevant color. Target-letter reports were most frequent in target-match trials and least frequent in distractor-match trials. In Experiment 2, the cue's irrelevant color and the target color never matched. Still, participants reported the distractor more often, to the target's disadvantage, when cue and distractor color matched. Thus, irrelevant features of a VWM template can influence covert attention in an involuntarily object-based manner when searching for trial-wise varying targets.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3030042","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grasping Discriminates between Object Sizes Less Not More Accurately than the Perceptual System.","authors":"Frederic Göhringer, Miriam Löhr-Limpens, Constanze Hesse, Thomas Schenk","doi":"10.3390/vision3030036","DOIUrl":"https://doi.org/10.3390/vision3030036","url":null,"abstract":"<p><p>Ganel, Freud, Chajut, and Algom (2012) demonstrated that maximum grip apertures (MGAs) differ significantly when grasping perceptually identical objects. From this finding they concluded that the visual size information used by the motor system is more accurate than the visual size information available to the perceptual system. A direct comparison between the accuracy in the perception and the action system is, however, problematic, given that accuracy in the perceptual task is measured using a dichotomous variable, while accuracy in the visuomotor task is determined using a continuous variable. We addressed this problem by dichotomizing the visuomotor measures. Using this approach, our results show that size discrimination in grasping is in fact inferior to perceptual discrimination, thereby contradicting the original suggestion put forward by Ganel and colleagues.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3030036","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Changing Landscape: High-Level Influences on Eye Movement Guidance in Scenes.","authors":"Carrick C Williams, Monica S Castelhano","doi":"10.3390/vision3030033","DOIUrl":"https://doi.org/10.3390/vision3030033","url":null,"abstract":"<p><p>The use of eye movements to explore scene processing has exploded over the last decade. Eye movements provide distinct advantages when examining scene processing because they are both fast and spatially measurable. By using eye movements, researchers have investigated many questions about scene processing. Our review will focus on research performed in the last decade examining: (1) attention and eye movements; (2) where you look; (3) influence of task; (4) memory and scene representations; and (5) dynamic scenes and eye movements. Although typically addressed as separate issues, we argue that these distinctions are now holding back research progress. Instead, it is time to examine how these seemingly separate influences intersect and interact, to more completely understand what eye movements can tell us about scene processing.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3030033","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Does Spatial Attention Influence the Probability and Fidelity of Colour Perception?","authors":"Austin J Hurst, Michael A Lawrence, Raymond M Klein","doi":"10.3390/vision3020031","DOIUrl":"https://doi.org/10.3390/vision3020031","url":null,"abstract":"<p><p>Existing research has found that spatial attention alters how various stimulus properties are perceived (e.g., luminance, saturation), but few have explored whether it improves the accuracy of perception. To address this question, we performed two experiments using modified Posner cueing tasks, wherein participants made speeded detection responses to peripheral colour targets and then indicated their perceived colours on a colour wheel. In E1, cues were central and endogenous (i.e., prompted voluntary attention) and the interval between cues and targets (stimulus onset asynchrony, or SOA) was always 800 ms. In E2, cues were peripheral and exogenous (i.e., captured attention involuntarily) and the SOA varied between short (100 ms) and long (800 ms). A Bayesian mixed-model analysis was used to isolate the effects of attention on the probability and the fidelity of colour encoding. Both endogenous and short-SOA exogenous spatial cueing improved the probability of encoding the colour of targets. Improved fidelity of encoding was observed in the endogenous but not in the exogenous cueing paradigm. With exogenous cues, inhibition of return (IOR) was observed in both RT and probability at the long SOA. Overall, our findings reinforce the utility of continuous response variables in the research of attention.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3020031","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contextually-Based Social Attention Diverges across Covert and Overt Measures.","authors":"Effie J Pereira, Elina Birmingham, Jelena Ristic","doi":"10.3390/vision3020029","DOIUrl":"https://doi.org/10.3390/vision3020029","url":null,"abstract":"<p><p>Humans spontaneously attend to social cues like faces and eyes. However, recent data show that this behavior is significantly weakened when visual content, such as luminance and configuration of internal features, as well as visual context, such as background and facial expression, are controlled. Here, we investigated attentional biasing elicited in response to information presented within appropriate background contexts. Using a dot-probe task, participants were presented with a face-house cue pair, with a person sitting in a room and a house positioned within a picture hanging on a wall. A response target occurred at the previous location of the eyes, mouth, top of the house, or bottom of the house. Experiment 1 measured covert attention by assessing manual responses while participants maintained central fixation. Experiment 2 measured overt attention by assessing eye movements using an eye tracker. The data from both experiments indicated no evidence of spontaneous attentional biasing towards faces or facial features in manual responses; however, an infrequent, though reliable, overt bias towards the eyes of faces emerged. Together, these findings suggest that contextually-based social information does not determine spontaneous social attentional biasing in manual measures, although it may act to facilitate oculomotor behavior.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3020029","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recent Advances of Computerized Graphical Methods for the Detection and Progress Assessment of Visual Distortion Caused by Macular Disorders.","authors":"Navid Mohaghegh, Ebrahim Ghafar-Zadeh, Sebastian Magierowski","doi":"10.3390/vision3020025","DOIUrl":"https://doi.org/10.3390/vision3020025","url":null,"abstract":"<p><p>Recent advances in computerized graphical methods have received significant attention for detection and home monitoring of various visual distortions caused by macular disorders such as macular edema, central serous chorioretinopathy, and age-related macular degeneration. After a brief review of macular disorders and their conventional diagnostic methods, this paper reviews such graphical interface methods, including the computerized Amsler Grid, the Preferential Hyperacuity Perimeter, and the Three-dimensional Computer-automated Threshold Amsler Grid. Thereafter, the challenges these computerized methods face in achieving accurate and rapid detection of macular disorders are discussed. Early detection and progress assessment can significantly enhance the clinical procedures required for the diagnosis and treatment of macular disorders.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3020025","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Changing Role of Phonology in Reading Development.","authors":"Sara V Milledge, Hazel I Blythe","doi":"10.3390/vision3020023","DOIUrl":"https://doi.org/10.3390/vision3020023","url":null,"abstract":"<p><p>Processing of both a word's orthography (its printed form) and phonology (its associated speech sounds) are critical for lexical identification during reading, both in beginning and skilled readers. Theories of learning to read typically posit a developmental change, from early readers' reliance on phonology to more skilled readers' development of direct orthographic-semantic links. Specifically, in becoming a skilled reader, the extent to which an individual processes phonology during lexical identification is thought to decrease. Recent data from eye movement research suggest, however, that the developmental change in phonological processing is somewhat more nuanced than this. Such studies show that phonology influences lexical identification in beginning and skilled readers in both typically and atypically developing populations. These data indicate, therefore, that the developmental change might better be characterised as a transition from overt decoding to abstract, covert recoding. We do not stop processing phonology as we become more skilled at reading; rather, the nature of that processing changes.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3020023","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What Can Eye Movements Tell Us about Subtle Cognitive Processing Differences in Autism?","authors":"Philippa L Howard, Li Zhang, Valerie Benson","doi":"10.3390/vision3020022","DOIUrl":"https://doi.org/10.3390/vision3020022","url":null,"abstract":"<p><p>Autism spectrum disorder (ASD) is a neurodevelopmental condition principally characterised by impairments in social interaction and communication, and repetitive behaviours and interests. This article reviews the eye movement studies designed to investigate the underlying sampling or processing differences that might account for the principal characteristics of autism. Following a brief summary of a previous review chapter by one of the authors of the current paper, a detailed review of eye movement studies investigating various aspects of processing in autism over the last decade will be presented. The literature will be organised into sections covering different cognitive components, including language and social communication and interaction studies. The aim of the review will be to show how eye movement studies provide a very useful on-line processing measure, allowing us to account for observed differences in behavioural data (accuracy and reaction times). The subtle processing differences that eye movement data reveal in both language and social processing have the potential to impact everyday communication in autism.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3020022","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye Movements Actively Reinstate Spatiotemporal Mnemonic Content.","authors":"Jordana S Wynn, Kelly Shen, Jennifer D Ryan","doi":"10.3390/vision3020021","DOIUrl":"https://doi.org/10.3390/vision3020021","url":null,"abstract":"<p><p>Eye movements support memory encoding by binding distinct elements of the visual world into coherent representations. However, the role of eye movements in memory retrieval is less clear. We propose that eye movements play a functional role in retrieval by reinstating the encoding context. By overtly shifting attention in a manner that broadly recapitulates the spatial locations and temporal order of encoded content, eye movements facilitate access to, and reactivation of, associated details. Such mnemonic gaze reinstatement may be obligatorily recruited when task demands exceed cognitive resources, as is often observed in older adults. We review research linking gaze reinstatement to retrieval, describe the neural integration between the oculomotor and memory systems, and discuss implications for models of oculomotor control, memory, and aging.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3020021","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Meaning and Attentional Guidance in Scenes: A Review of the Meaning Map Approach.","authors":"John M Henderson, Taylor R Hayes, Candace E Peacock, Gwendolyn Rehrig","doi":"10.3390/vision3020019","DOIUrl":"https://doi.org/10.3390/vision3020019","url":null,"abstract":"<p><p>Perception of a complex visual scene requires that important regions be prioritized and attentionally selected for processing. What is the basis for this selection? Although much research has focused on image salience as an important factor guiding attention, relatively little work has focused on semantic salience. To address this imbalance, we have recently developed a new method for measuring, representing, and evaluating the role of meaning in scenes. In this method, the spatial distribution of semantic features in a scene is represented as a meaning map. Meaning maps are generated from crowd-sourced responses given by naïve subjects who rate the meaningfulness of a large number of scene patches drawn from each scene. Meaning maps are coded in the same format as traditional image saliency maps, and therefore both types of maps can be directly evaluated against each other and against maps of the spatial distribution of attention derived from viewers' eye fixations. In this review we describe our work focusing on comparing the influences of meaning and image salience on attentional guidance in real-world scenes across a variety of viewing tasks that we have investigated, including memorization, aesthetic judgment, scene description, and saliency search and judgment. Overall, we have found that both meaning and salience predict the spatial distribution of attention in a scene, but that when the correlation between meaning and salience is statistically controlled, only meaning uniquely accounts for variance in attention.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"3 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/vision3020019","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41214961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}