VISUAL COGNITION | Pub Date: 2023-01-02 | DOI: 10.1080/13506285.2023.2192992
Title: Moving to maintain perceptual and social constancy
Authors: J. Enns, Rachel C. Lin-Yang, Veronica Dudarev
Volume 31(1), pp. 43–62

ABSTRACT: Past research on object constancy has tended to treat the viewer as a passive observer. Here we examine viewers' body and eye movements when they are asked to view photos of people in a gallery setting. Participants considered one individual in each photo, before indicating how socially connected they felt toward them and then moving to a spot in the gallery where they would be most comfortable when talking to them. Photographed individuals varied in their projected distance from the camera (near, far) and in their image resolution (sharp, slightly blurred). Results showed that participants looked more directly at near versus far individuals and at sharp versus blurred individuals. They also rated their social connection as stronger when the images were near versus far and sharp versus blurred. Where participants stood when making these ratings was strongly correlated with the projected distance of the images and with their ratings of social connection. These findings are discussed with regard to brain mechanisms for maintaining stability in our perceptions of geometric and social aspects of our world. They also highlight our inherent tendency to attribute qualities of our perceptual experiences to objects in that world.
{"title":"Attention to the fine-grained aspect of words in the environment emerges in preschool children with high reading ability","authors":"Licheng Xue, Ying Xiao, Tianying Qing, U. Maurer, Wei Wang, Huidong Xue, X. Weng, Jing-Guo Zhao","doi":"10.1080/13506285.2023.2194697","DOIUrl":"https://doi.org/10.1080/13506285.2023.2194697","url":null,"abstract":"ABSTRACT Attention to words is closely related to the process of learning to read. However, it remains unclear how attention to words in environmental print (such as words on product labels) is changed with the growth of preschool children’s reading ability. We thus used eye tracking technique to compare attention to words in environmental print in children at low (32, 15 males, 5.12 years) and high (32, 17 males, 5.16 years) reading levels during a free viewing task. To characterize which aspects of visual word form children attend to, we constructed three types of stimuli embedded in the same context: words in environment print, symbol strings (similar shape to words but without strokes), and character strings (comparable with words in the number of strokes and the structures). We observed that children at both reading levels showed lower percentages of fixations and fixation time in words relative to symbol strings, suggesting they start to attend to the coarse aspect of visual word form. 
Interestingly, only children at higher reading level showed lower percentages of fixations and fixation time for words relative to character strings, suggesting that attention to the fine-grained aspect of visual word form emerged, and was closely to reading ability.","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":"31 1","pages":"85 - 96"},"PeriodicalIF":2.0,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45955011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
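The two eye-tracking measures reported in this study (percentage of fixations and percentage of fixation time falling on a stimulus region) are straightforward to compute. A minimal sketch follows; the fixation tuples and AOI rectangle are hypothetical illustrations, not data from the study:

```python
def aoi_fixation_shares(fixations, aoi):
    """Percentage of fixations and of total fixation time inside an
    area of interest (AOI) given as (x_min, y_min, x_max, y_max).
    Each fixation is an (x, y, duration_ms) tuple."""
    inside = [(x, y, d) for x, y, d in fixations
              if aoi[0] <= x <= aoi[2] and aoi[1] <= y <= aoi[3]]
    pct_fixations = 100 * len(inside) / len(fixations)
    pct_time = 100 * sum(d for *_, d in inside) / sum(d for *_, d in fixations)
    return pct_fixations, pct_time

# Hypothetical fixations (x, y, duration_ms); AOI covers the word region
fix = [(10, 10, 200), (50, 60, 300), (400, 300, 500)]
shares = aoi_fixation_shares(fix, aoi=(0, 0, 100, 100))
# shares → (66.67, 50.0): two of three fixations, half the viewing time
```

Lower values of both measures for words than for control strings are what the study takes as reduced attention to the word region.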
VISUAL COGNITION | Pub Date: 2023-01-02 | DOI: 10.1080/13506285.2023.2188335
Title: How scene encoding affects memory discrimination: Analysing eye movement data using data-driven methods
Authors: F. Doidy, P. Desaunay, C. Rebillard, P. Clochon, A. Lambrechts, P. Wantzen, F. Guénolé, J. Baleyte, F. Eustache, D. Bowler, K. Lebreton, B. Guillery-Girard
Volume 31(1), pp. 1–17

ABSTRACT: Encoding of visual scenes remains under-explored owing to methodological limitations. In this study, we evaluated the relationship between memory accuracy for visual scenes and eye movements at encoding. First, we used data-driven methods, a fixation density map (using iMap4) and a saliency map (using GBVS), to analyse visual attention to items. Second, and more novel, we conducted scanpath analyses without a priori assumptions (using ScanMatch). Scene memory accuracy was assessed by asking participants to discriminate identical scenes (targets) among rearranged scenes sharing some items with targets (distractors) and new scenes. Shorter fixation duration in regions of interest (ROIs) at encoding was associated with better rejection of distractors; there was no significant difference in the relative fixation time in ROIs at encoding between subsequent hits and misses at test. Hence, the density of eye fixations in data-driven ROIs seems to be a marker of subsequent memory discrimination and pattern separation. Interestingly, we also identified a negative correlation between the average MultiDimensional Scaling (MDS) distance between scanpaths and the correct rejection of distractors, indicating that scanpath consistency significantly affects the ability to discriminate distractors from targets. These data suggest that visual exploration at encoding contributes to discrimination processes at test.
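The fixation density map analysis used here (iMap4 is a MATLAB toolbox) amounts to depositing duration-weighted fixation locations on a pixel grid and smoothing with a 2-D Gaussian kernel. A minimal Python analogue, with hypothetical coordinates and an assumed screen size, might look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, screen_hw=(600, 800), sigma=25):
    """Duration-weighted fixation density map: deposit each fixation's
    duration at its (x, y) pixel, then smooth with a 2-D Gaussian
    (sigma in pixels, a stand-in for the toolbox's kernel width)."""
    h, w = screen_hw
    density = np.zeros((h, w))
    for x, y, dur in fixations:
        density[int(y), int(x)] += dur
    return gaussian_filter(density, sigma=sigma)

# Hypothetical fixations as (x, y, duration_ms) tuples
fix = [(120, 200, 250), (130, 210, 400), (500, 300, 180)]
dmap = fixation_density_map(fix)
# Data-driven ROIs can then be defined by thresholding dmap.
```

Regions where the smoothed map exceeds a statistical threshold serve as the data-driven ROIs in which fixation measures are compared across memory outcomes.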
VISUAL COGNITION | Pub Date: 2023-01-01 | Epub Date: 2023-04-04 | DOI: 10.1080/13506285.2023.2192991
Title: Gestalt formation promotes awareness of suppressed visual stimuli during binocular rivalry
Authors: Mar S Nikiforova, Rosemary A Cowell, David E Huber
Volume 31(1), pp. 18–42 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10721231/pdf/

ABSTRACT: Continuous flash suppression leverages binocular rivalry to render observers unaware of a static image for several seconds. To achieve this effect, rapidly flashing noise masks are presented to the dominant eye while a static stimulus is presented to the non-dominant eye. Eventually "breakthrough" occurs, wherein awareness shifts to the static image shown to the non-dominant eye. We tested the hypothesis that Gestalt formation can promote breakthrough. In two experiments, we presented pacman-shaped objects that might or might not align to form illusory Kanizsa objects. To measure the inception of breakthrough, observers were instructed to press a key at the moment of partial breakthrough. After pressing the key, which stopped the trial, observers reported how many pacmen were seen and where they were located. Supporting the Gestalt hypothesis, breakthrough was faster when the pacmen were aligned, and observers more often reported pairs of pacmen if they were aligned. To address whether these effects reflected illusory shape perception, a computational model was applied to the pacman report distributions and breakthrough times for an experiment with four pacmen. A full account of the data required an increased joint probability of reporting all four pacmen, suggesting an influence of a perceived illusory cross.
VISUAL COGNITION | Pub Date: 2022-11-26 | DOI: 10.1080/13506285.2023.2169802
Title: Inhibition of return (IOR) meets stimulus-response (S-R) binding: Manually responding to central arrow targets is driven by S-R binding, not IOR
Authors: Lars-Michael Schöpper, C. Frings
Volume 30(1), pp. 641–658

ABSTRACT: Localizing targets that repeat or change their position typically leads to a benefit for location changes, that is, inhibition of return (IOR). Yet IOR is mostly absent when participants respond sequentially to arrows pointing to the left or right. Previous research suggested that responding to central arrow targets resembles a discrimination response. For the latter, action control theories predict that response repetitions and changes are modulated by task-irrelevant feature repetitions and changes (e.g., colour), caused by stimulus-response (S-R) binding, a modulation typically absent in localization performance. In the current study, participants gave left and right responses to peripheral targets repeating or changing their position, and to central arrow targets repeating or changing their pointing direction. Targets could repeat or change their colour. For central targets, responses were heavily modulated by colour repetitions and changes, suggesting S-R binding. No S-R binding, but only IOR, was found for peripheral targets. Analysis of reaction time percentiles suggested that this pattern was not caused by fast response execution. These results show that S-R binding approaches can explain effects typically discussed in the context of attentional orienting, highlighting the similarities between two research strands that have worked in parallel for years without much exchange.
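The IOR measure at stake in such designs is simply the mean reaction-time cost for location repetitions over location changes. A minimal sketch, with hypothetical RT values rather than the study's data:

```python
import statistics

def ior_effect(rt_location_repeat, rt_location_change):
    """IOR is expressed as slower responses when the target's location
    repeats: effect = mean RT(repetition) - mean RT(change), in ms.
    Positive values indicate inhibition of return."""
    return statistics.mean(rt_location_repeat) - statistics.mean(rt_location_change)

# Hypothetical RTs (ms) for peripheral targets
repeat_trials = [380, 395, 410, 400]
change_trials = [350, 360, 365, 355]
effect = ior_effect(repeat_trials, change_trials)
# effect → 38.75 ms, i.e., a location-change benefit (IOR)
```

An S-R binding analysis would instead cross response repetition/change with irrelevant-feature (e.g., colour) repetition/change and test their interaction, which is the pattern the study found for central arrow targets.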
VISUAL COGNITION | Pub Date: 2022-11-26 | DOI: 10.1080/13506285.2023.2186997
Title: Degraded vision affects mental representations of the body
Authors: Yasmine Giovaola, Viviana Rojo Martinez, S. Ionta
Volume 30(1), pp. 686–695

ABSTRACT: Mental representations of the body depend on current perceptions, building on more reliable sensory inputs and down-weighting less reliable afferences. While somatosensory manipulations have been investigated repeatedly, less is known about vision. We hypothesized that a decrease in visual input may result in an increased relevance of somatosensation for mentally representing the body. Twenty-nine neurotypical participants performed mental rotation of hand images while image visibility was manipulated: keeping the same grey background, the contrast was decreased by 60% (Degraded Vision) relative to Baseline. Results showed that Degraded Vision (1) slowed down the mental rotation of hand images that are typically sensitive to degrees of rotation (dorsum views), and (2) established a rotation-dependent latency profile for the mental rotation of hand images that are not typically affected by rotation (little-finger views). Since sensitivity to rotation indicates the recruitment of visual or somatosensory strategies to mentally represent the body, our findings indicate that in the presence of degraded visual input, somatosensation carried more weight than vision in mental rotation. This suggests a relative shift from a pictorial representation of the body (body image) to a somatosensory one (body schema) as a function of the most reliable or available sensory input.
VISUAL COGNITION | Pub Date: 2022-11-26 | DOI: 10.1080/13506285.2023.2175945
Title: Attentional strategy choice is not predicted by cognitive ability or academic performance
Authors: Molly R. McKinney, Heather A. Hansen, Jessica L. Irons, Andrew B. Leber
Volume 30(1), pp. 671–679

ABSTRACT: People exhibit vast individual variation in the degree to which they choose optimal attentional control strategies during visual search, although it is not well understood what predicts such variation. In the present study, we sought to determine whether markers of real-world achievement (assessed via undergraduate GPA) and cognitive ability (e.g., general fluid intelligence) could predict attentional strategy optimization (assessed via the Adaptive Choice Visual Search task; Irons, J. L., & Leber, A. B. (2018). Characterizing individual variation in the strategic use of attentional control. Journal of Experimental Psychology: Human Perception and Performance, 44(10), 1637–1654). Results showed that, while general cognitive ability predicted visual search response time and accuracy, neither achievement nor cognitive ability metrics predicted attentional strategy optimization. Thus, the determinants of attentional strategy remain elusive, and we discuss potential steps to shed light on this important research topic.
VISUAL COGNITION | Pub Date: 2022-11-26 | DOI: 10.1080/13506285.2023.2188336
Title: Spatial biases in inhibition of return
Authors: Paula Soballa, Lars-Michael Schöpper, C. Frings, Simon Merz
Volume 30(1), pp. 696–715

ABSTRACT: Inhibition of return (IOR) describes the phenomenon that reaction times (RTs) to a target appearing at a previously cued location are slowed. Spalek and Hammad (2004; Supporting the attentional momentum view of IOR: Is attention biased to go right? Perception & Psychophysics, 66(2), 219–233. https://doi.org/10.3758/BF03194874) reported that IOR effects were smaller at a lower or right location compared to an upper or left location. In contrast, Snyder and Schmidt (2014; No evidence for directional biases in inhibition of return. Psychonomic Bulletin & Review, 21(2), 432–435. https://doi.org/10.3758/s13423-013-0511-3) argued that IOR is unaffected by spatial biases and that any observed differences are better explained by general reaction time differences depending on the target's location. In two experiments (both N = 31), we tested these diverging predictions by presenting cue and target at four locations along the vertical and horizontal axes. Controlling for a main effect of RTs at different target locations, we still observed a spatial bias on IOR, in that the effect was smaller at the lower than at the upper target location. We also found a comparable spatial bias on the IOR-related phenomenon of early facilitation (EF). The results suggest that the magnitude and occurrence of both IOR and EF are affected by spatial configurations. Similarities with spatial biases in other visual phenomena, as well as theoretical implications, are discussed.
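The per-location comparison at issue can be sketched as a separate cued-minus-uncued RT difference for each target location, which is what "controlling for a main effect of RTs at different target locations" requires. The trial tuples below are hypothetical, not the experiments' data:

```python
import statistics

def ior_by_location(trials):
    """Compute the IOR effect separately for each target location as
    mean cued RT minus mean uncued RT at that location.
    `trials` is a list of (location, cued: bool, rt_ms) tuples."""
    effects = {}
    for loc in {t[0] for t in trials}:
        cued = [rt for l, c, rt in trials if l == loc and c]
        uncued = [rt for l, c, rt in trials if l == loc and not c]
        effects[loc] = statistics.mean(cued) - statistics.mean(uncued)
    return effects

trials = [  # hypothetical data showing a smaller effect at the lower location
    ("upper", True, 420), ("upper", False, 370),
    ("lower", True, 395), ("lower", False, 375),
]
effects = ior_by_location(trials)
# effects["upper"] → 50, effects["lower"] → 20
```

Because the difference is taken within each location, a location that is simply faster or slower overall does not by itself produce a spatial bias in the IOR effect.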
VISUAL COGNITION | Pub Date: 2022-11-26 | DOI: 10.1080/13506285.2023.2184894
Title: Unfamiliar faces might as well be another species: Evidence from a face matching task with human and monkey faces
Authors: K. Ritchie, Tessa R. Flack, L. Maréchal
Volume 30(1), pp. 680–685

ABSTRACT: Humans are good at recognizing familiar faces but are more error-prone at recognizing an unfamiliar person across different images. It has been suggested that familiar and unfamiliar faces are processed qualitatively differently. But are unfamiliar faces at least processed differently from monkey faces? Here we tested 366 volunteers on a face matching test (two images presented side by side, with participants judging whether the images show the same identity or two different identities), comparing performance with familiar and unfamiliar human faces and monkey faces. The results showed that performance was most accurate for familiar faces and was above chance for monkey faces. Although accuracy was higher for unfamiliar humans than monkeys on different-identity trials, there was no unfamiliar-human advantage over monkeys on same-identity trials. The results give new insights into unfamiliar face processing, showing that in some ways unfamiliar faces might as well be another species.
VISUAL COGNITION | Pub Date: 2022-11-26 | DOI: 10.1080/13506285.2023.2174232
Title: Time course of encoding and maintenance of stereoscopically induced size–distance scaling
Authors: Wanyi Guan, Binglong Li, J. Qian
Volume 30(1), pp. 659–670

ABSTRACT: The mechanism of size constancy ensures that an object is perceived to be constant in size even though its retinal size varies with viewing distance. Conversely, an object can be perceived as illusorily larger if its perceived distance becomes greater, owing to the size–distance scaling mechanism. The present study explored how size–distance scaling is modulated by encoding duration and how its memory is affected by retention duration. In Experiment 1, we presented two stimuli simultaneously at two stereoscopic depth planes and manipulated the presentation duration, and found that the magnitude of the size scaling increased with presentation duration. In Experiment 2, we examined the maintenance of size–distance scaling when a component stimulus was kept in working memory over variable delays. The results showed that the size scaling was reliably retrieved from working memory if there was no disparity manipulation on the to-be-memorized item, but it decreased with retention if a disparity was applied to the to-be-memorized item. The findings suggest that although the post-scaling size can be stored in working memory, the scaling mechanism may still be in effect when there are conflicts between the oculomotor and disparity cues that produce depth perception.
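The size–distance scaling relation behind these results can be stated directly: for a fixed retinal angle theta, perceived linear size grows with perceived distance d, S = 2 d tan(theta/2). A minimal sketch with hypothetical numbers (not the study's stimulus parameters):

```python
import math

def scaled_size(retinal_angle_deg, perceived_distance_cm):
    """Size-distance scaling: perceived linear size (cm) for a fixed
    retinal angle, S = 2 * d * tan(theta / 2)."""
    theta = math.radians(retinal_angle_deg)
    return 2 * perceived_distance_cm * math.tan(theta / 2)

# The same retinal image placed at two stereoscopic depth planes:
near = scaled_size(2.0, 50)    # ~1.75 cm
far = scaled_size(2.0, 100)    # ~3.49 cm: the farther plane is perceived as larger
```

Because S is linear in d, doubling the perceived distance doubles the perceived size for the same retinal image, which is the illusory enlargement the scaling mechanism produces.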