{"title":"Seeing on the fly: No need for space-to-time encoding; saccade-generated transients enable fast, parallel representation of space.","authors":"Moshe Gur","doi":"10.1167/jov.25.11.4","DOIUrl":"10.1167/jov.25.11.4","url":null,"abstract":"","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"4"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12416515/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144994115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The precision of attention controls attraction of population receptive fields.","authors":"Sumiya Sheikh Abdirashid, Tomas Knapen, Serge O Dumoulin","doi":"10.1167/jov.25.11.3","DOIUrl":"10.1167/jov.25.11.3","url":null,"abstract":"<p><p>We alter our sampling of visual space not only by where we direct our gaze, but also by where and how we direct our attention. Attention attracts receptive fields toward the attended position, but our understanding of this process is limited. Here we show that the degree of this attraction toward the attended locus is dictated not just by the attended position, but also by the precision of attention. We manipulated attentional precision while using 7T functional magnetic resonance imaging to measure population receptive field (pRF) properties. Participants performed the same color-proportion detection task either focused at fixation (0.1° radius) or distributed across the entire display (>5° radius). We observed blood oxygenation level-dependent response amplitude increases as a function of the task, with selective increases in foveal pRFs for the focused attention task and vice versa for the distributed attention task. Furthermore, cortical spatial tuning changed as a function of attentional precision. Specifically, focused attention more strongly attracted pRFs toward the attended locus compared with distributed attention. This attraction also depended on the degree of overlap between a pRF and the attention field. A Gaussian attention field model with an offset on the attention field explained our results. 
Together, our observations indicate the spatial distribution of attention dictates the degree of its resampling of visual space.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"3"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12410274/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Chromatic and achromatic contrast sensitivity in the far periphery.","authors":"Norick R Bowers, Karl R Gegenfurtner, Alexander Goettker","doi":"10.1167/jov.25.11.7","DOIUrl":"10.1167/jov.25.11.7","url":null,"abstract":"<p><p>The contrast sensitivity function (CSF) has been studied extensively; however, most studies have focused on the central region of the visual field. The current study aims to address two gaps in previous measurements: first, it provides a detailed measurement of the CSF for both achromatic and, importantly, chromatic stimuli in the far periphery, up to 90 dva of visual angle. Second, we describe visual sensitivity around the monocular/binocular boundary that is naturally present in the periphery. In the first experiment, the CSF was measured in three different conditions: Stimuli were either Achromatic (L + M), Red-Green (L - M) or Yellow-Violet (S - (L + M)) Gabor patches. Overall, results followed the expected patterns established in the near periphery. However, achromatic sensitivity in the far periphery was mostly underestimated by current models of visual perception, and the decay in sensitivity observed for red-green stimuli slows down in the periphery. The decay of sensitivity for yellow-violet stimuli roughly matches that of achromatic stimuli. For the second experiment, we compared binocular and monocular visual sensitivity at different locations in the visual field. We observed a consistent increase in visual sensitivity for binocular viewing in the central part of the visual field compared to monocular viewing, but this benefit already decreased within the binocular visual field in the periphery. Together, these data provide a detailed description of visual sensitivity in the far periphery. 
These measurements can help to improve current models of visual sensitivity and can be vital for applications in full-field visual displays in virtual and augmented reality.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"7"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12439501/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145034589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When two eyes are worse than one: Binocular summation for chromatic, interocular-anti-phase, stimuli.","authors":"Frederick A A Kingdom, Xingao Clara Wang, Huayun Li, Yoel Yakobi","doi":"10.1167/jov.25.11.15","DOIUrl":"10.1167/jov.25.11.15","url":null,"abstract":"<p><p>Numerous studies have shown that sensitivity to binocular targets is higher than to its monocular components, a phenomenon known as binocular summation. Binocular summation has been demonstrated with luminance contrast targets that are not only interocularly in-phase, that is, identical in both eyes, but also interocularly anti-phase, that is, of opposite polarity in the two eyes. Here we show that for the detection of anti-phase targets defined along the red-cyan and violet-lime axes of cardinal color space two eyes are more often than not worse than one. We suggest this is because channels that detect interocular differences, or S- channels are relatively insensitive to chromatic stimuli. We tested this idea by measuring binocular summation for chromatic anti-phase targets in the context of a chromatic surround that itself was either interocularly in-phase or anti-phase. The anti-phase surrounds reduced even further binocular summation for the anti-phase targets whereas the in-phase surrounds increased the level of summation. We show that a model that combines via probability summation the independent activities of adding S+ and differencing S- channels gave a good account of the data, especially for the anti-phase targets. 
We conclude that binocular adding and differencing channels play an important role in binocular color vision.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"15"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12477827/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145139152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transient increases to apparent contrast by exogenous attention persist in visual working memory.","authors":"Luke Huszar, Tair Vizel, Marisa Carrasco","doi":"10.1167/jov.25.11.13","DOIUrl":"10.1167/jov.25.11.13","url":null,"abstract":"<p><p>The sensory recruitment hypothesis posits that Visual Working Memory (VWM) maintenance uses the same cortical machinery as online perception, implying similarity between the two. Characterizing similarities and differences in these representations is critical for understanding how perceptions are reformatted into durable working memories. It is unknown whether the perceptual appearance effect brought by attention is maintained in VWM. We investigated how VWM depends on attentional state by examining how transient modulations from reflexive (exogenous) attentional orienting affect the appearance of VWM representations; particularly whether VWM takes a \"snapshot\" at the time of encoding, or transient attentional dynamics continue into VWM. Specifically, we assessed whether the transient modulation to perceived contrast caused by exogenous attention is preserved when attended stimuli are encoded and maintained in VWM. Observers performed a delayed contrast comparison task in which one stimulus had to be held in VWM across a delay and compared to a second stimulus. Exogenous attention was manipulated through transient pre-cues appearing above the location of the first, second, or both stimuli before their onset. Model comparisons revealed that the transient attentional boost to perceived contrast persisted in VWM across the delay. 
This result indicates that VWM maintains a \"snapshot\" of the attentional-modulated perceptual representation at the time of encoding and suggests that attentional effects on vision enable us to select and protect in VWM visual information relevant to cognition and action.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"13"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12449825/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145082067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advancing vision standards in aviation: Embracing evidence-based approaches.","authors":"Allison Lynch, Ewa Niechwiej-Szwedo, Shi Cao, Suzanne Kearns, Elizabeth Irving","doi":"10.1167/jov.25.11.10","DOIUrl":"10.1167/jov.25.11.10","url":null,"abstract":"<p><p>This study explores the relationship between visual acuity, contrast sensitivity, and pilot performance in simulated flight scenarios, including poor weather conditions, attempting to determine minimum visual requirements for safe flight. Twenty-six participants with normal or corrected-to-normal vision and varying flight experience (0-400 flight hours) completed simulated flight circuits under different weather conditions (e.g., rain, wind) using either Cambridge Simulation Glasses or defocusing lenses to degrade vision. Flight performance was assessed subjectively by an instructor using standardized criteria and objectively via simulator data. Visual acuity and contrast sensitivity were measured at each level of visual degradation. Mixed model analysis of variance revealed significant differences in the variability of vertical speed, pitch, roll, and the slope of altitude descent as a function of vision degradation level and weather conditions. There was also a significant main effect of vision degradation type (scatter or defocus) on the slope of altitude descent. Post hoc analyses indicated flight performance was first affected at 1.0 and 1.3 logarithm of the minimum angle of resolution degradation with scattering and defocusing lenses, respectively. 
These results suggest that current vision standards should potentially be reevaluated for them to be more based on evidence.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"10"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12439511/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145042119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continuous affect tracking reveals that overestimation during the recollection of affect is idiosyncratic and stable.","authors":"Jefferson Ortega, David Whitney","doi":"10.1167/jov.25.11.14","DOIUrl":"10.1167/jov.25.11.14","url":null,"abstract":"<p><p>Humans often make summarized visual judgments about previously experienced affective situations to inform future decisions. However, these summarized judgments are subject to an overestimation bias: Negative events are recalled as more negative and positive events as more positive than they truly were. It is currently unknown whether the strength of overestimation bias in affective judgments varies across observers. If this overestimation bias represents an observer-specific cognitive trait, it should display idiosyncratic and stable individual differences. Here, we investigated whether the overestimation bias in perceived affect is idiosyncratic and stable within observers across days and different stimuli. Using a novel continuous psychophysics measure of perceived affect, observers continuously tracked, in real-time, the affect of people in videos using a two-dimensional valence-arousal rating grid. At the end of each video, participants then reported what they believed to be the average affect of the previously tracked person. By comparing observers' continuous ratings with the average affect reported at the end of the video, we found that observers often overestimated the affect in their summarized judgments. Importantly, the strength of the overestimation bias was unique to each observer and stable across days and across different sets of videos. Our findings also highlight the value of the continuous psychophysical affect tracking paradigm: Continuous affect tracking was reliable and accurate, with high between-observer agreement, and it can be collected both online and in the lab. 
Together, our results suggest that continuous affect tracking is a powerful approach to isolate and identify idiosyncratic perceptual and cognitive mechanisms of affect understanding.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"14"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476158/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual street crossing and scanning behavior in people with hemianopia: A step toward successful crossings.","authors":"Eva M J L Postuma, Gera A de Haan, Joost Heutink, Frans W Cornelissen","doi":"10.1167/jov.25.11.1","DOIUrl":"10.1167/jov.25.11.1","url":null,"abstract":"<p><p>Individuals with homonymous hemianopia (HH) may benefit from adopting compensatory crossing and scanning strategies to successfully cross streets. In this study, we explored the effect of HH on street crossing outcomes, crossing behavior and scanning behavior in a virtual environment. Individuals with real HH (N = 18), unimpaired vision (N = 18), and simulated HH (N = 18) crossed a virtual street displayed through a head-mounted display. Virtual cars approached from both directions, traveling at a speed of either 30 or 50 km/h. Participants' crossing and scanning behaviors were recorded and analyzed across groups and the two car speeds. Although individuals with real and simulated HH took more time to cross compared to individuals with unimpaired vision depending on the car speed, the number of collisions and time-to-contact after crossings did not differ between groups. We observed no differences in the selection of car gaps, crossing initiation, and scanning behavior between groups. Our findings suggest that individuals with real and simulated HH align their crossing behavior to their visuomotor capabilities by using varying compensatory strategies. HH did not alter scanning behavior before crossing a virtual street. 
Despite its current shortcomings, virtual reality holds promise for street crossing research and rehabilitation.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"1"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12410284/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Don't look at the camera: Achieving perceived eye contact in remote video communication.","authors":"Alice Gao, Samyukta Jayakumar, Marcello Maniglia, Brian Curless, Ira Kemelmacher-Shlizerman, Steven M Seitz, Aaron R Seitz","doi":"10.1167/jov.25.11.8","DOIUrl":"10.1167/jov.25.11.8","url":null,"abstract":"<p><p>Eye contact is a crucial aspect of social interaction, conveying social cues based on the direction of one's gaze. Perceiving eye contact affects behavior and social processing. The widespread use of remote video conferencing technologies impacts these social cues, because most technologies do not support natural eye contact. We consider the question of how to best achieve the perception of eye contact when a person is captured by a camera and then rendered on a two-dimensional display. To test this, 17 participants were asked to rate whether 3 actors, photographed while looking at different vertical locations, were making eye contact (yes-no analysis), or were looking up or down (up-down analysis). We quantitatively assessed the gaze direction required to optimize the perception of eye contact with the camera lens. Contrary to conventional wisdom, which suggests looking directly into the camera leads to the perception of eye contact, results from both the yes-no and the up-down analyses showed that it is preferable to look approximately 2° below the camera lens. These results provide a surprising answer to the question of where to look to convey an impression of eye contact in screen-mediated interactions.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"8"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12439499/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145034168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaze when walking to grasp an object in the presence of obstacles.","authors":"Dimitris Voudouris, Eli Brenner","doi":"10.1167/jov.25.11.12","DOIUrl":"10.1167/jov.25.11.12","url":null,"abstract":"<p><p>People generally look at positions that are important for their current actions, such as objects they intend to grasp. What if there are obstacles on their path to such objects? We asked participants to walk into a room and pour the contents of a cup placed on a table into another cup elsewhere on the table. There were two small obstacles on the floor between the door and the table. There was a third obstacle on the table near the target cup. Participants often looked at the items on the table from the beginning, but, as they approached and entered the room, they often looked at the floor near the obstacles, although there was nothing particularly informative to see there. Thus they primarily relied on peripheral vision and memory of where they had seen obstacles to avoid knocking over the obstacles. As they approached the table, they mainly looked at the object that they intended to grasp and the obstacle near it. We conclude that people mainly look at positions at which they plan to physically interact with the environment, rather than at items that constrain such interactions.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"12"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12449826/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145082080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}