VISUAL COGNITION. Pub Date: 2023-02-07. DOI: 10.1080/13506285.2023.2213904
O. Jacobs, F. Pazhoohi, A. Kingstone
"Contrapposto posture captures visual attention: An online gaze tracking experiment"
ABSTRACT: Goddesses of love and beauty are frequently depicted in artwork in a contrapposto posture, with one leg relaxed while the other bears the weight. Previous research has indicated that, compared to an upright standing pose, a contrapposto pose is considered more attractive, with its curviness capturing greater visual attention. Yet whether a body posed in contrapposto is generally more visually attention-grabbing than an upright body remains unknown. We sought to address this gap and also examined whether individual differences in sociosexuality (willingness to engage in uncommitted sexual relations) influence attentional allocation. Online gaze tracking was employed to monitor participants (n = 71) during image presentation in a preferential looking design (contrapposto versus standing). Participants directed a greater proportion of their gaze towards female bodies depicted in a contrapposto pose than towards a standing posture over an extended period of time, but not on the first gaze shift. Moreover, sociosexuality correlated positively with the proportion of gazes towards contrapposto stimuli, though this correlation fell short of statistical significance. The results of the current study indicate that top-down factors play a role in how people allocate attention to contrapposto poses.
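The dependent measure in a preferential looking design of this kind is the share of total looking time devoted to each stimulus. As a minimal illustration of that computation only (the function and the data below are hypothetical, not taken from the study):

```python
# Hypothetical sketch of the proportion-of-gaze measure: total fixation
# time on each stimulus, as a proportion of all on-stimulus looking time.

def gaze_proportions(fixations):
    """fixations: list of (stimulus_label, duration_ms) tuples."""
    totals = {}
    for label, duration in fixations:
        totals[label] = totals.get(label, 0) + duration
    grand_total = sum(totals.values())
    return {label: t / grand_total for label, t in totals.items()}

# Example: 1200 ms total on the contrapposto image, 800 ms on the standing one.
props = gaze_proportions([
    ("contrapposto", 700), ("standing", 800), ("contrapposto", 500),
])
print(props)  # {'contrapposto': 0.6, 'standing': 0.4}
```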
VISUAL COGNITION. Pub Date: 2023-02-07. DOI: 10.1080/13506285.2023.2198271
Irmak Hacımusaoğlu, Bien Klomberg, Neil Cohn
"Navigating meaning in the spatial layouts of comics: A cross-cultural corpus analysis"
ABSTRACT: In visual narratives like comics, not only do comprehenders need to track shifts in characters, space, and time, but they do so across a spatial layout. While many scholars and comic artists have speculated about connections between meaning and layout in comics, few empirical studies have examined this relationship. We investigated whether situational changes between time, characters, or space interacted with page layouts, by looking at across-page, across-constituent, and within-constituent transitions in a corpus of 134 annotated comics from North America, Europe, and Asia. Panels shifting within constituents (e.g., while moving within a row) changed the situation the least, while those across pages and across constituents (as in a row break) had more situational changes. The boundary of a page especially aligned with changes in the spatial location of the scene. In addition, discontinuous changes primarily aligned with across-page transitions. Cross-cultural analyses indicated that Asian comics convey meaning across panels in ways that are relatively less constrained by layouts, while American and European comics use the page as a unit to group and segment spatial information. Such results indicate a partial correspondence between layout and meaning, but with different cultural constraints.
VISUAL COGNITION. Pub Date: 2023-02-07. DOI: 10.1080/13506285.2023.2208887
Sharon Levy, N. Turk-Browne, Liat Goldfarb
"Impaired visuo-spatial statistical learning with mathematical learning difficulties"
ABSTRACT: Rapid extraction of temporal and spatial patterns from repeated experience is known as statistical learning (SL). Studies on SL show that after a few minutes of exposure, observers exhibit knowledge of regularities hidden in a sequence or array of objects. Previous findings suggest that visuo-spatial statistical learning might relate to numerical processing mechanisms. Hence, the current study examines, for the first time, visuo-spatial SL in a population with a deficient numerical system: individuals with mathematical learning difficulties (MLD). Thirty-two female participants (16 with MLD and 16 matched controls) were tested on a visuo-spatial statistical learning task. The results revealed that visuo-spatial SL was significantly worse in the MLD group than in the control group, although the MLD group performed as well as controls in a visual discrimination task. In addition, whereas the control group showed reliable visuo-spatial SL above chance, the MLD group did not. Because learned regularities can broadly facilitate cognitive processing, individuals with MLD may thus suffer from additional behavioural challenges beyond their numerical difficulties.
VISUAL COGNITION. Pub Date: 2023-01-02. DOI: 10.1080/13506285.2023.2194697
Licheng Xue, Ying Xiao, Tianying Qing, U. Maurer, Wei Wang, Huidong Xue, X. Weng, Jing-Guo Zhao
"Attention to the fine-grained aspect of words in the environment emerges in preschool children with high reading ability"
ABSTRACT: Attention to words is closely related to the process of learning to read. However, it remains unclear how attention to words in environmental print (such as words on product labels) changes as preschool children's reading ability grows. We therefore used an eye-tracking technique to compare attention to words in environmental print in children at low (n = 32, 15 males, 5.12 years) and high (n = 32, 17 males, 5.16 years) reading levels during a free viewing task. To characterize which aspects of visual word form children attend to, we constructed three types of stimuli embedded in the same context: words in environmental print, symbol strings (similar in shape to words but without strokes), and character strings (comparable to words in number of strokes and structure). We observed that children at both reading levels showed lower percentages of fixations and fixation time on words relative to symbol strings, suggesting they start to attend to the coarse aspect of visual word form. Interestingly, only children at the higher reading level showed lower percentages of fixations and fixation time on words relative to character strings, suggesting that attention to the fine-grained aspect of visual word form had emerged and was closely tied to reading ability.
VISUAL COGNITION. Pub Date: 2023-01-02. DOI: 10.1080/13506285.2023.2192992
J. Enns, Rachel C. Lin-Yang, Veronica Dudarev
"Moving to maintain perceptual and social constancy"
ABSTRACT: Past research on object constancy has tended to treat the viewer as a passive observer. Here we examine viewers' body and eye movements when they are asked to view photos of people in a gallery setting. Participants considered one individual in each photo, before indicating how socially connected they felt toward them and then moving to a spot in the gallery where they would be most comfortable when talking to them. Photographed individuals varied in their projected distance from the camera (near, far) and in their image resolution (sharp, slightly blurred). Results showed that participants looked more directly at near versus far individuals and at sharp versus blurred individuals. They also rated their social connection as stronger when the images were near versus far and sharp versus blurred. Where participants stood when making these ratings was strongly correlated with the projected distance of the images and with their ratings of social connection. These findings are discussed with regard to brain mechanisms for maintaining stability in our perceptions of geometric and social aspects of our world. They also highlight our inherent tendency to attribute qualities of our perceptual experiences to objects in that world.
VISUAL COGNITION. Pub Date: 2023-01-02. DOI: 10.1080/13506285.2023.2188335
F. Doidy, P. Desaunay, C. Rebillard, P. Clochon, A. Lambrechts, P. Wantzen, F. Guénolé, J. Baleyte, F. Eustache, D. Bowler, K. Lebreton, B. Guillery-Girard
"How scene encoding affects memory discrimination: Analysing eye movements data using data driven methods"
ABSTRACT: Encoding of visual scenes remains under-explored due to methodological limitations. In this study, we evaluated the relationship between memory accuracy for visual scenes and eye movements at encoding. First, we used two data-driven methods, a fixation density map (using iMap4) and a saliency map (using GBVS), to analyse visual attention to items. Second, and more novel, we conducted scanpath analyses without a priori assumptions (using ScanMatch). Scene memory accuracy was assessed by asking participants to discriminate identical scenes (targets) among rearranged scenes sharing some items with targets (distractors) and new scenes. Shorter fixation duration in regions of interest (ROIs) at encoding was associated with better rejection of distractors; there was no significant difference in relative fixation time in ROIs at encoding between subsequent hits and misses at test. Hence, the density of eye fixations in data-driven ROIs seems to be a marker of subsequent memory discrimination and pattern separation. Interestingly, we also identified a negative correlation between the average multidimensional scaling (MDS) distance between scanpaths and the correct rejection of distractors, indicating that scanpath consistency significantly affects the ability to discriminate distractors from targets. These data suggest that visual exploration at encoding participates in discrimination processes at test.
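ScanMatch compares scanpaths by recoding fixations as sequences of region labels and globally aligning those sequences. As a rough, simplified illustration of the alignment idea only (the actual toolbox uses a spatially informed substitution matrix and temporal binning; the scoring scheme and names below are hypothetical):

```python
# Minimal Needleman-Wunsch global alignment of two region-coded scanpaths,
# in the spirit of ScanMatch-style comparisons. Simplified scoring:
# +1 match, -1 mismatch, -1 gap; no spatial substitution matrix.

def align_score(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j] = best alignment score of a[:i] versus b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

def similarity(a, b):
    # Normalise by the length of the longer scanpath, so identical
    # sequences score 1.0.
    return align_score(a, b) / max(len(a), len(b))

# Two fixation sequences over areas of interest labelled A-D.
print(similarity("ABCD", "ABCD"))  # 1.0
print(similarity("ABCD", "ABDD"))  # 0.5 (three matches, one mismatch)
```

Higher similarity between a participant's scanpaths would correspond to the greater "scanpath consistency" discussed above; the MDS step in the study embeds such pairwise similarities in a low-dimensional space.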
VISUAL COGNITION. Pub Date: 2023-01-01 (EPub 2023-04-04). DOI: 10.1080/13506285.2023.2192991
Mar S Nikiforova, Rosemary A Cowell, David E Huber
"Gestalt formation promotes awareness of suppressed visual stimuli during binocular rivalry"
ABSTRACT: Continuous flash suppression leverages binocular rivalry to render observers unaware of a static image for several seconds. To achieve this effect, rapidly flashing noise masks are presented to the dominant eye while a static stimulus is presented to the non-dominant eye. Eventually "breakthrough" occurs, wherein awareness shifts to the static image shown to the non-dominant eye. We tested the hypothesis that Gestalt formation can promote breakthrough. In two experiments, we presented pacman-shaped objects that might or might not align to form illusory Kanizsa objects. To measure the inception of breakthrough, observers were instructed to press a key at the moment of partial breakthrough. After pressing the key, which stopped the trial, observers reported how many pacmen were seen and where they were located. Supporting the Gestalt hypothesis, breakthrough was faster when the pacmen were aligned, and observers more often reported pairs of pacmen if they were aligned. To address whether these effects reflected illusory shape perception, a computational model was applied to the pacman report distributions and breakthrough times for an experiment with four pacmen. A full account of the data required an increased joint probability of reporting all four pacmen, suggesting an influence of a perceived illusory cross.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10721231/pdf/
VISUAL COGNITION. Pub Date: 2022-11-26. DOI: 10.1080/13506285.2023.2169802
Lars-Michael Schöpper, C. Frings
"Inhibition of return (IOR) meets stimulus-response (S-R) binding: Manually responding to central arrow targets is driven by S-R binding, not IOR"
ABSTRACT: Localizing targets that repeat or change their position typically leads to a benefit for location changes, that is, inhibition of return (IOR). Yet IOR is mostly absent when sequentially responding to arrows pointing left or right. Previous research suggested that responding to central arrow targets resembles a discrimination response. For the latter, action control theories predict the modulation of response repetitions and changes by task-irrelevant feature repetitions and changes (e.g., colour), caused by stimulus-response (S-R) binding, a modulation typically absent in localization performance. In the current study, participants gave left and right responses to peripheral targets repeating or changing their position, and to central arrow targets repeating or changing their pointing direction. Targets could repeat or change their colour. For central targets, responses were heavily modulated by colour repetitions and changes, suggesting S-R binding. No S-R binding, but only IOR, was found for peripheral targets. Analysis of reaction time percentiles suggested that this pattern was not caused by fast response execution. These results show that S-R binding accounts can explain effects typically discussed in the context of attentional orienting, highlighting the similarities between two research strands that have worked in parallel for years without much exchange.
VISUAL COGNITION. Pub Date: 2022-11-26. DOI: 10.1080/13506285.2023.2186997
Yasmine Giovaola, Viviana Rojo Martinez, S. Ionta
"Degraded vision affects mental representations of the body"
ABSTRACT: Mental representations of the body depend on current perceptions, building on more reliable sensory inputs and decreasing the weight of less reliable afferences. While somatosensory manipulations have been repeatedly investigated, less is known about vision. We hypothesized that a decrease in visual input may result in an augmented relevance of somatosensation for mentally representing the body. Twenty-nine neurotypical participants performed mental rotation of hand images while image visibility was manipulated: keeping the same grey background, the contrast was decreased by 60% (Degraded Vision) with respect to Baseline. Results showed that Degraded Vision (1) slowed down the mental rotation of hand images typically sensitive to degree of rotation (dorsum views), and (2) established a rotation-dependent latency profile for the mental rotation of hand images not typically affected by rotation (little-finger views). Since sensitivity to rotation indicates the recruitment of visual or somatosensory strategies to mentally represent the body, our findings indicate that in the presence of degraded visual input, somatosensation was weighted more heavily than vision in mental rotation. This suggests a relative shift from a pictorial representation of the body (body image) to a somatosensory one (body schema) as a function of the most reliable or available sensory input.
VISUAL COGNITION. Pub Date: 2022-11-26. DOI: 10.1080/13506285.2023.2175945
Molly R. McKinney, Heather A. Hansen, Jessica L. Irons, Andrew B. Leber
"Attentional strategy choice is not predicted by cognitive ability or academic performance"
ABSTRACT: People exhibit vast individual variation in the degree to which they choose optimal attentional control strategies during visual search, although it is not well understood what predicts such variation. In the present study, we sought to determine whether markers of real-world achievement (assessed via undergraduate GPA) and cognitive ability (e.g., general fluid intelligence) could predict attentional strategy optimization (assessed via the Adaptive Choice Visual Search task; Irons, J. L., & Leber, A. B. (2018). Characterizing individual variation in the strategic use of attentional control. Journal of Experimental Psychology: Human Perception and Performance, 44(10), 1637-1654). Results showed that, while general cognitive ability predicted visual search response time and accuracy, neither achievement nor cognitive ability metrics could predict attentional strategy optimization. Thus, the determinants of attentional strategy remain elusive, and we discuss potential steps to shed light on this important research topic.