{"title":"The effect of flashing lights on speed perception for lateral motion and motion in depth.","authors":"Aakarsh Gopisetty, Angelica Godinez, Emily A Cooper","doi":"10.1167/jov.26.5.1","DOIUrl":"https://doi.org/10.1167/jov.26.5.1","url":null,"abstract":"<p><p>Flashing lights are commonly used to enhance visibility and attract attention, but they may influence other perceptual processes such as localization, distance estimation, and motion perception. In this study, we investigated how flashing affects speed perception for stimuli moving laterally and in depth. On each trial, observers viewed a moving white dot on a stereoscopic display that traveled either laterally or in depth at one of several speeds (ranging from about 4-17 cm/s) and under one of three flashing conditions (none, 3 Hz, or 6 Hz). During part of the trajectory, the dot was occluded and its speed was perturbed. Speed perception was probed by asking observers to report whether the dot sped up or slowed down during occlusion. Replicating prior findings, speeds for motion in depth were generally underestimated relative to lateral motion, especially at higher stimulus speeds. Notably, this motion-in-depth underestimation bias was reduced when the stimulus was flashing. A follow-up exploratory analysis of motion-in-depth perception also showed consistent differences between motion toward and away from the observer across both the continuous and flashing conditions. These findings contribute to our understanding of the utility of flashing lights for visual communication and suggest that flashing may specifically alter speed perception for motion in depth.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 5","pages":"1"},"PeriodicalIF":2.3,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147822967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The visual perception of outdoor angular spatial relationships.","authors":"J Farley Norman, William B Marcum, Maria Carmichael","doi":"10.1167/jov.26.5.2","DOIUrl":"https://doi.org/10.1167/jov.26.5.2","url":null,"abstract":"<p><p>A single experiment evaluated younger and older observers' ability to judge angular spatial relationships in an ordinary outdoor environment. Previous research from multiple laboratories has found that the visual ability to perceive distance either remains well-maintained or improves with advancing age. The present experiment investigated whether this age-related equivalence or superiority also occurs for other spatial abilities, such as the ability to judge angles. Thirty adults judged 12 angles formed from trees, signs, light poles, and stone benches. The observers' overall performance was good: 74% of the variance in the judged angles could be accounted for by variance in the physical stimulus angles (overall Pearson correlation coefficient r = 0.86). The judgments of the older observers were nevertheless more accurate than those made by the younger observers (Cohen's d = 0.72). A detailed analysis of the observers' judgments revealed consistent local distortions of particular stimulus angles such that some angles were perceived to be much larger than they actually were, whereas other stimulus angles were perceived to be much smaller than their physical magnitude. These local distortions in perceived angle magnitude may be related to the presence of environmental features that are associated with (linear) perspective.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 5","pages":"2"},"PeriodicalIF":2.3,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147822961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Similarity-driven compression during encoding supports biased but more precise working memory.","authors":"Janna W Wennberg, John T Serences","doi":"10.1167/jov.26.4.8","DOIUrl":"10.1167/jov.26.4.8","url":null,"abstract":"<p><p>Visual working memory (VWM) allows us to maintain and manipulate information in service of behavioral goals. Navigating rich visual environments often involves holding multiple items in VWM-some of them very similar. Recent work suggests that inter-item similarity impairs memory precision during encoding but enhances precision during active memory maintenance. The present study tested whether this inter-item similarity benefit observed during memory maintenance was due to compressing similar items into summary representations. In Experiment 1, participants encoded sample displays with four colored circles into memory: two circles were similar to each other in color, and two were dissimilar both to each other and to the similar pair. Using a retrospective cue (\"retro-cue\") presented after the sample display, we manipulated the inter-item similarity of the remembered stimuli by cueing two similar items, one similar and one dissimilar item, or two dissimilar items. Consistent with prior work, we observed higher memory precision and attractive biases between similar items, consistent with compression. In Experiment 2 we observed a similarity benefit and attractive bias for similar items even in the absence of a retro-cue. Importantly, the magnitude of the similarity benefit and attractive bias was the same across valid and neutral cues, suggesting that similarity-based compression occurs relatively early in the trial, before the onset of the retro-cue. In Experiment 3, we manipulated the onset of the retro-cue to occur early during the delay period, and we replicated the results of Experiments 1 and 2. Together, these results suggest that inter-item similarity enhances VWM performance through compression that occurs early during the trial as opposed to during maintenance in memory.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 4","pages":"8"},"PeriodicalIF":2.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13077717/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147640374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A model of the Unity High-Definition Render Pipeline, with applications to flat-panel and head-mounted display characterization.","authors":"Richard F Murray","doi":"10.1167/jov.26.4.12","DOIUrl":"https://doi.org/10.1167/jov.26.4.12","url":null,"abstract":"<p><p>Game engines such as Unity and Unreal Engine have become popular tools for creating perceptual and behavioral experiments in complex, interactive environments. They are often used with flat-panel displays, and also with head-mounted displays. Here, I describe and test a mathematical model of luminance and color in Unity's High-Definition Render Pipeline (HDRP). I show that the HDRP has several non-obvious features, such as nonlinearities applied to material properties and rendered values, that must be taken into account to show well-controlled stimuli. I also show how the HDRP can be configured to display gamma-corrected luminance and color, and I provide software to create the specialized files needed for gamma correction.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 4","pages":"12-1"},"PeriodicalIF":2.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13112491/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147787491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
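The gamma correction discussed in the abstract above follows the standard power-law relationship between stored pixel values and displayed luminance. As a minimal illustrative sketch only (assuming a simple power-law display with gamma = 2.2, not Unity's actual HDRP internals or the paper's software), a gamma-correction lookup table could be built like this:

```python
import numpy as np

def gamma_correction_lut(gamma=2.2, levels=256):
    """Build a lookup table mapping linear intensity in [0, 1] to
    gamma-corrected 8-bit pixel values, assuming a power-law display
    where displayed luminance ~ (pixel value)^gamma."""
    linear = np.linspace(0.0, 1.0, levels)
    # Apply the inverse of the display nonlinearity so that the
    # displayed luminance ends up linear in the intended intensity.
    corrected = linear ** (1.0 / gamma)
    return np.round(corrected * (levels - 1)).astype(np.uint8)

lut = gamma_correction_lut()
# Mid-range linear intensities map to brighter stored values,
# compensating for the display's compressive response.
```

The endpoints are preserved (0 maps to 0, full intensity maps to 255), while intermediate values are pushed upward, which is the hallmark of inverse-gamma encoding.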
{"title":"Lateralization of facial recognition in Tibetan and Han individuals: Evidence from eye movement.","authors":"Xingchen Guo, Jialin Ma, Yongxin Li","doi":"10.1167/jov.26.4.10","DOIUrl":"10.1167/jov.26.4.10","url":null,"abstract":"<p><p>In this study, eye movement technology was used to explore the lateralization characteristics of Han and Tibetan individuals when they recognized faces of individuals of their own and other ethnicities. In Experiment 1, the faces were divided into two areas of interest: the left and right sides. The results revealed that left lateralization occurred when participants of both ethnicities recognized faces of individuals of their own and other ethnicities. In Experiment 2, the faces were divided into six areas of interest: the left and right eyes, the left and right sides of the nose, and the left and right sides of the mouth. The results revealed that the focus of left lateralization was the nose and mouth for the faces of Han individuals, whereas that for the faces of Tibetan individuals was the mouth. The results indicate that left visual lateralization occurs when the faces of Tibetan and Han individuals are recognized and that this lateralization differs across ethnicities.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 4","pages":"10"},"PeriodicalIF":2.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13101832/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147700523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Swiping colors in virtual reality: How stable are color category borders?","authors":"Avi M Aizenman, Zoe R Goll, Raquel Gil Rodriguez, Karl R Gegenfurtner","doi":"10.1167/jov.26.4.9","DOIUrl":"10.1167/jov.26.4.9","url":null,"abstract":"<p><p>Human color perception involves a tradeoff between our ability to discriminate millions of continuous hues and our reliance on a few discrete linguistic categories. Although some theories suggest these category boundaries are fixed perceptual anchors, others propose that judgments adapt dynamically to the statistical distribution of recent stimuli, known as the range effect. To test the stability of these boundaries, we adapted a fast-paced \"match-to-sample\" paradigm from animal learning into an immersive VR video game. Participants used colored sabers to strike incoming cubes, matching saber color to the cube's stripe. We tested both the blue-green boundary (aligned with low-level cone mechanisms) and the pink-purple boundary (off-axis), using hue sets equated for discriminability. After establishing baseline category borders using psychometric functions, we shifted the range of tested colors toward one category endpoint to determine whether the internal border remained stable or shifted with the stimulus distribution. Across four experiments, results consistently revealed a partial shift. Rather than remaining invariant, category borders shifted systematically in the direction of the stimulus range shift. Further manipulations demonstrated that this partial shift was unaffected by the proportion of responses and occurred even when using hue sets that did not contain a category boundary. These findings indicate that under rapid decision-making conditions, observers' judgments are strongly influenced by the statistical structure of the immediate stimulus set, with stable categorical anchors playing a more limited role. This suggests a limited role for linguistic color categories in active, matching-based tasks, where observers likely prioritize automatic statistical adaptation over fixed categorical distinctions.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 4","pages":"9"},"PeriodicalIF":2.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13101840/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147700487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating lawful relationships in saccadic eye movements with simulated vision impairment: A proof-of-concept study.","authors":"Ward Nieboer, Brecht Haakma, Eli Brenner, David L Mann","doi":"10.1167/jov.26.4.5","DOIUrl":"10.1167/jov.26.4.5","url":null,"abstract":"<p><p>The aim of this study was to examine the degree to which known lawful relationships in saccadic eye movements hold when visual acuity is artificially degraded. If lawful relationships still hold, then violations during the vision assessment could indicate that individuals are not performing with maximum effort in an attempt to exaggerate their impairment. Twelve healthy participants performed saccades between targets of different sizes and at different separations from each other. Each participant completed the task with habitual vision and with three levels of simulated impairment. Saccade duration and fixation dispersion both increased with target separation, irrespective of the simulated visual impairment (R values of 0.78 for duration and 0.27 for dispersion). Some eye movement measures thus retained lawful relationships with task constraints even when visual input was degraded. Although the present study does not assess the ability of these measures to detect intentional misrepresentation, the dependency of saccade duration and fixation dispersion on target separation under simulated vision impairment identifies them as candidate variables for future work aimed at improving confidence in vision assessments. Future work should examine whether saccade duration and fixation dispersion show the same lawful relationships in actual vision impairment.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 4","pages":"5"},"PeriodicalIF":2.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13069347/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147629008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visuomotor adaptation and savings to constant and varying visual feedback delays in a driving simulator.","authors":"Sam Beech, Danaë Stanton Fraser, Iain D Gilchrist","doi":"10.1167/jov.26.4.7","DOIUrl":"10.1167/jov.26.4.7","url":null,"abstract":"<p><p>Perturbations to visual feedback disrupt one's ability to use vision to guide movement, leading to impaired visuomotor control. The visuomotor adaptation mechanism recovers control by updating the visuomotor mapping to accommodate the visual perturbation during movement. A hallmark of adaptation is savings, where individuals demonstrate faster adaptation upon subsequent exposure to the same perturbation. Although faster adaptation to a previously experienced delay has been observed in response to constant visual feedback delays in two-dimensional tracking tasks, such savings have not been investigated in ecologically relevant contexts where individuals perform more complex visuomotor control tasks with varying delays. Previously, delay variability has been shown to significantly impair performance within these tasks, but it remains unclear how delay variability impacts adaptation and savings. Therefore, we investigated adaptation to constant and varying delays in a driving simulator over four sessions spaced 7 days apart. Across these sessions, participants exhibited savings, reflected in reduced average absolute spatial error, a shift in the average directional road position toward the middle of the road (the instructed position), and flatter learning slopes, indicating a faster approach to asymptote. Crucially, there were no significant differences between the constant and varying delay conditions in any measure. Therefore, participants adapted to the delayed visual feedback with increased efficiency upon subsequent exposure to the same temporal perturbation. Additionally, delay variability did not disrupt adaptation or savings within the driving simulator task.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 4","pages":"7"},"PeriodicalIF":2.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13069349/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147634909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The development of object representations in children.","authors":"Dilara Deniz Türk, Jacopo Turini Volonghi, Melissa Le-Hoa Võ","doi":"10.1167/jov.26.4.4","DOIUrl":"10.1167/jov.26.4.4","url":null,"abstract":"<p><p>Objects in scenes follow a hierarchical organization, with \"scenes\" at the top level, followed by \"phrases\", clusters of objects that share spatial and functional proximity. Within these phrases, \"anchor\" objects help predict the identity and location of smaller, dependent \"local\" objects. Previous research has shown that this hierarchy is reflected in the mental representations of objects in adults. The current study examined whether children's object representations already reflect this hierarchy. We implemented an odd-one-out task with 36 object images to collect pairwise similarity ratings from children ages 5 to 10 years. Two different groups of children received different similarity judgment instructions: one group received no explicit definition of similarity, whereas the other was told to base similarity on actions typically performed with the objects. We created a priori and data-driven scene hierarchy measures to evaluate how well they aligned with children's similarity judgments. Results showed that children's representations were clearly structured at the scene level, as indicated by strong effects in both hierarchy measures. In contrast, we found no reliable phrase-level effects and only a small data-driven object-type effect. Scene-level structure strengthened with age, whereas the phrase and object-type levels showed no reliable age-related change. Importantly, similarity patterns were highly comparable across both tasks, suggesting that children's object representations by default seem to be action based. These results suggest that children organize objects at the scene level of the hierarchy, incorporating actions associated with the objects into their representations, whereas finer-grained relations are more weakly represented and may be more difficult to detect reliably at this age.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 4","pages":"4"},"PeriodicalIF":2.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13068026/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147624527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social wayfinding in virtual reality: Navigational decisions and eye movements in a dynamic environment.","authors":"Jakub Suchojad, Samuel S Sohn, Michelle Shlivko, Jacob Feldman, Karin Stromswold","doi":"10.1167/jov.26.4.2","DOIUrl":"10.1167/jov.26.4.2","url":null,"abstract":"<p><p>Social wayfinding refers to the process of navigating in the presence of other people. It entails a complex series of interrelated decisions, such as how closely to approach people and when to pass them. In this paper we report two virtual reality experiments that investigate social wayfinding in a complex, dynamic task. In these experiments, participants physically walked from one end of a simulated train station waiting room to the other, avoiding static obstacles (e.g., benches, seated and standing people) and dynamic obstacles (two rows of people walking perpendicularly to the participant's path). We model the task as a hierarchical combination of local subgoals (e.g., when and where to pass people) and a global goal (which gate to navigate toward). Although eye movements are difficult to analyze in such a dynamic task, they prove to be particularly revealing about how participants combined these local and global goals efficiently in real time. Overall, the results suggest that participants rapidly deploy a flexible combination of local and global decision strategies to navigate crowded environments efficiently.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"26 4","pages":"2"},"PeriodicalIF":2.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13060728/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147595748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}