{"title":"Revealing temporal dynamics of the visuomotor system via continuous tracking of position and attribute.","authors":"Yen-Ju Chen, Zitang Sun, Shin'ya Nishida","doi":"10.1167/jov.25.8.19","DOIUrl":"10.1167/jov.25.8.19","url":null,"abstract":"<p><p>Continuous tracking is the recently developed psychophysical technique for efficiently estimating human visual temporal characteristics. The standard version of the task, referred to as position tracking (PT), asks participants to track the location of a continuously moving target by a motor response (e.g., mouse movement). Some studies have also used a variant method, attribute tracking (AT), which requires participants to track and reproduce a continuously changing attribute (e.g., luminance) of the target instead of position. For both PT and AT, the temporal dynamics of the entire system from vision to action can be estimated from the cross-correlogram (CCG) of the trajectory between the stimulus and response. The similarities and differences in CCG between PT and AT, however, remain elusive but were examined in this study. Experiment 1 compared the two CCGs using luminance-defined circular patches, color-contrast-defined patches, and luminance-defined patches with various spatial frequencies. The results indicate that the PT response was faster and less affected by the stimulus variables than the AT response. Experiment 2 showed that these differences could be reduced by making the visuomotor mapping of PT less direct by reversing the motor response direction and by making the local stimulus change magnitude comparable between PT and AT. The comparison with the traditional reaction time measures (Experiment 3) further showed that the peak latency of CCG from PT aligned better with the simple reaction time, whereas that from AT aligned better with the choice reaction time. These results indicate that CCG is more sluggish for AT than for PT because AT includes the process of identifying the stimulus content (attribute change direction) and mapping it to a motor response arbitrarily specified by the experimenter, and because the effective stimulus change magnitude for AT is often weaker than that for PT. These findings provide a clearer understanding of the meaning of CCGs measured by the two types of continuous tracking tasks.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 8","pages":"19"},"PeriodicalIF":2.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12309616/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The visibility of Eidolon distortions in things and stuff.","authors":"Swantje Mahncke, Lina Eicke-Kanani, Ole Fabritz, Thomas S A Wallis","doi":"10.1167/jov.25.8.12","DOIUrl":"10.1167/jov.25.8.12","url":null,"abstract":"<p><p>The visibility of alterations to the physical structure of images (distortions) depends on the image content and on viewing conditions. Here we measure human sensitivity to a class of image distortions, Eidolons, applied to image sets containing a range of content, from object images or scenes, to textures and materials. In an odd-one-out task with peripherally presented images, we replicate previous findings that distortions are harder to detect in images which contain large regions of texture or material and fewer segmentable object boundaries. Next, we reason that an image-computable model able to capture the critical aspects of encoding transformations should be able to predict the discriminability of distortion-image pairs, irrespective of image content. We therefore test a variety of image-computable models, treating them as perceptual metrics, using a simple hierarchical regression framework. Of the tested models, the texture statistics of the Portilla and Simoncelli model best predicted performance, beating simple Fourier-spectrum-based transforms and a biologically inspired LGN statistics model. There remains, however, a substantial gap between the best single image-computable metric and an oracle model that has information about the experimental parameters and image labels. This work compliments existing datasets in image distortion discriminability and image quality, and extends existing frameworks for comparatively evaluating the predictive performance of perceptual metrics.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 8","pages":"12"},"PeriodicalIF":2.0,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12255176/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144602132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The racer's gaze: Visual strategy in high-speed sports expertise.","authors":"Otto Lappi, Jami Pekkanen, Aleksandra Krajnc, Lucas Iacono, Adrian Remonda, Eduardo Veas","doi":"10.1167/jov.25.8.16","DOIUrl":"10.1167/jov.25.8.16","url":null,"abstract":"<p><p>Eye movements shape all visual input to the brain, making their understanding essential for studying perception and visual guidance in dynamic environments. Research on expert performance indicates that gaze coordination is a key feature of expertise in, for example, sports. Mobile eye tracking provides the opportunity to investigate gaze strategies supporting the skilled actions of an athlete and can deliver insight into the underlying perceptual-cognitive processes. We systematically observed the visual strategy of an expert racing driver performing a domain-representative task. Synchronized gaze, telemetry, and localization data from a high-grade simulator were analyzed to address four classes of research questions: oculomotor, scene analysis, timing, and point of vantage. The results (a) replicate the seminal tangent point orientation (pre-turn-in saccades), (b) describe both the oculomotor signature and timing signature of the steering with the head strategy, (c) identify a novel saccade strategy (pre-full-throttle saccades), and (d) reveal a previously unstudied spatial regularity in the serial organization of behavior: a tight localization of the points of vantage where the pre-turn-in saccades and pre-full-throttle saccades are made. The gaze strategies are not tied to specifics of the task and may be relevant for understanding expert performance in other fields with similar visuomotor and cognitive demands. The method of cross-examining an integrated dataset by multiple parametrizations itself complements traditional research designs with predefined task constraints and restrictions. We are not aware of any study that has simultaneously addressed all four kinds of research questions simultaneously.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 8","pages":"16"},"PeriodicalIF":2.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12302049/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144676352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computational evidence for an inverse relationship between retinal and brain complexity.","authors":"Mitchell B Slapik","doi":"10.1167/jov.25.8.9","DOIUrl":"10.1167/jov.25.8.9","url":null,"abstract":"<p><p>Visual neuroscientists have long observed an inverse relationship between brain and retinal complexity: As brain complexity increases across species, retinas adapt to simpler visual processing. Lindsey et al. previously provided a computational explanation for this pattern, showing that shallow networks encode complex features in their first stage of processing, whereas deep networks encode simpler features. Here, these findings are extended to a suite of representational analyses and show that shallow networks generate high-dimensional representations with linear decision boundaries and specific visual features that can feed directly into behavioral responses. In contrast, deep networks generate low-dimensional representations with nonlinear decision boundaries and general visual features. These representations require further processing before they can produce the appropriate behavioral response. In summary, the findings extend a longstanding principle linking simpler retinal features to complex brains and offer a computational framework for understanding neural network behavior more generally.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 8","pages":"9"},"PeriodicalIF":2.0,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12240199/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144576780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The pupil response to perceptual switches: What happens when you ignore them.","authors":"Bobicheng Zhang, Vasilii Marshev, Jan W Brascamp","doi":"10.1167/jov.25.8.5","DOIUrl":"10.1167/jov.25.8.5","url":null,"abstract":"<p><p>The pupil has been found to dilate after switches in bistable perception, prompting the suggestion that norepinephrine-based neuromodulation plays a causal role in those switches. However, the pupil dilates in response to task-relevant events in general, and, in existing work, perceptual switches were typically task-relevant (e.g., they had to be reported). As such, observed switch-related dilations may have reflected nonspecific task relevance rather than switch-specific processes. Here, we measured pupil responses to perceptual switches that were task-irrelevant. Observers viewed a rotating structure-from-motion sphere consisting of equilateral triangles that inverted at semi-random intervals. In separate conditions, observers either reported perceptual switches (rendering them task-relevant) or reported changes in the triangles' orientation (rendering the switches task-irrelevant). We then used observers' optokinetic nystagmus to infer perceptual switch moments, even when observers did not report them. Control analyses confirm the reliability of this method. We found that task-relevant switches were followed by pupil dilations, but task-irrelevant ones were not. These results suggest that pupil-associated neuromodulation, although closely linked to task-relevant events, may not have any specific tie with perceptual bistability. These results are consistent with results we recently reported for binocular rivalry, indicating commonality across distinct forms of perceptual bistability.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 8","pages":"5"},"PeriodicalIF":2.0,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12236628/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144555563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring spatial and temporal properties of visual crowding using continuous psychophysics.","authors":"Dilce Tanriverdi, Frans W Cornelissen","doi":"10.1167/jov.25.7.7","DOIUrl":"10.1167/jov.25.7.7","url":null,"abstract":"<p><p>Visual crowding refers to the difficulty in recognizing objects in the periphery when surrounded by clutter. Traditional trial-based paradigms, while effective in measuring spatial aspects of crowding, do not capture the temporal dynamics involved. In this study, we assessed the feasibility of a continuous psychophysics paradigm that measures both the spatial extent and temporal processes of visual crowding. Eight participants continuously tracked the orientation of a rotating Landolt C while the distance between the target and a ring-shaped flanker varied systematically over time. Participants set a reference stimulus to match the orientation of the target. The paradigm included \"jump-points,\" where the orientation of the target suddenly shifted, allowing us to measure the recovery rate of participants' tracking errors following these disruptions. Tracking accuracy was compared between flanked and isolated conditions. Additionally, participants' report errors were used to assess both the crowding extent and the temporal recovery rate from the jumps, with the crowding extent results compared with those obtained from a conventional trial-based version of the paradigm. The recovery rate was calculated by fitting an exponential decay function to participants' report errors after the jumps. The results showed that the crowding extent measured using the continuous paradigm was consistent with that obtained using trial-based methods and aligned with Bouma's rule. Moreover, flankers decreased both tracking accuracy and recovery rate following the jumps. These results demonstrate that our continuous psychophysics paradigm is useful for measuring the spatiotemporal aspects of crowding.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 7","pages":"7"},"PeriodicalIF":2.0,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12173087/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144286976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of object-scene congruency with and without awareness.","authors":"Weina Zhu, Jan Drewes","doi":"10.1167/jov.25.7.3","DOIUrl":"10.1167/jov.25.7.3","url":null,"abstract":"<p><p>Scene context has been shown to influence object recognition; it is not clear what level of visual processing is required for this effect to manifest. Specifically, it is unclear if such object/context interactions may exist in the absence of conscious awareness. By conducting experiments with and without the use of continuous flash suppression (CFS), we examined how context (background) congruency affects target recognition and response time. We used animal and vehicle images in natural or man-made scenes, which formed congruent/non-congruent image groups (100 images each). By comparing among three experimental designs (b-CFS, plain 2AFC, and 2AFC-CFS), we found the response time in the congruent scenes was significantly faster than in the incongruent scenes in plain 2AFC (without suppression). This congruency effect persisted only in the vehicle group when under b-CFS suppression. When combining the two paradigms (2AFC-CFS), the results replicated the congruency effect from the plain 2AFC condition. This indicates that the congruency effect does not emerge at the lowest levels of perception, but requires additional processing, necessitating a degree of conscious access.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 7","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12161396/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foveal crowding modifies a target's properties under a brief presentation time.","authors":"Ziv Siman-Tov, Maria Lev, Uri Polat","doi":"10.1167/jov.25.7.5","DOIUrl":"10.1167/jov.25.7.5","url":null,"abstract":"<p><p>The perception of chromatic and achromatic visual information is combined and processed in the parvocellular stream; however, they are separate processes at the early stage of the visual cortex. In our previous study, we noted that there is difficulty discriminating the color of a letter target presented at the fovea under a crowded presentation for a short time. Visual crowding occurs when an easily identified isolated stimulus becomes very difficult to identify when it is surrounded by stimuli with similar properties. One opinion is that crowding reduces the ability to identify the target but not its features (e.g., color and texture); however, some studies indicated that the ability to recognize features is also impaired under peripheral crowding conditions. Here, we investigated whether the processing of chromatic information can be impaired at the fovea using a classic crowding experiment when tested at brief presentation times (20, 40, and 120 ms). The participants reported both the target's identity and chromaticity (dual task). We found that the target's identification and color discrimination are impaired when presented for 20-40 ms but that they recover for longer presentation times. This effect is increased when temporal backward masking is added. This finding suggests that crowding resembles masking under brief presentation times and occurs at a later processing stage, after an initial masking stage.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 7","pages":"5"},"PeriodicalIF":2.0,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12166505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144267832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fixation versus periphery in visual awareness: Differential effects of recent perceptual experience.","authors":"Tim Gastrell, Matt Oxner, Frank Schumann, David Carmel","doi":"10.1167/jov.25.7.2","DOIUrl":"10.1167/jov.25.7.2","url":null,"abstract":"<p><p>Processing differences between foveal and peripheral vision mean that the location of objects in the visual field can strongly influence the way we experience them. The contents of visual awareness are believed to arise from interactions between sensory stimulation and context (e.g., expectations formed by recent experience), but the effect of visual field location on these interactions remains unclear. Here, we compared the effects of recent experience on awareness at fixation versus the periphery. On each trial, observers saw a brief display of an unambiguously rotating structure-from-motion prime sphere, followed by a brief display of a probe sphere with ambiguous motion. Experiment 1 established that conscious perception of the motion direction of the probe was more likely to differ from the prime when the stimuli were presented in the periphery compared with fixation. Experiment 2 ruled out a high-level, non-retinotopic, precision-weighting account of this effect by demonstrating that, although priming was apparent when the stimulus moved from fixation to periphery or vice versa, its magnitude was the same for low-precision peripheral and high-precision fixated primes. Experiment 3 replicated the original location effect and also found stronger motion adaptation in the periphery; the effects were not correlated, though, indicating that motion adaptation cannot account for the location effect. Experiment 4 replicated the location effect again and ruled out differences in fixation stability as the underlying mechanism. Overall, our results demonstrate a robust effect of visual field location on the integration of recent visual experience during construction of perceptual awareness and highlight the need to elucidate the mechanisms underlying differential generation of visual experience across the visual field.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 7","pages":"2"},"PeriodicalIF":2.0,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12136115/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144210082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The latent mechanism behind binocular advantage in reading.","authors":"Zhenyu Zhang, Tingting Wang, Zile Wang, Jinmei Xiao, Qingshang Ma, Xianyuan Yang, Fang-Fang Yan, Chang-Bing Huang","doi":"10.1167/jov.25.7.6","DOIUrl":"10.1167/jov.25.7.6","url":null,"abstract":"<p><p>Most individuals read binocularly, and previous studies have found a binocular advantage in reading speed. However, the underlying mechanism of the binocular advantage in reading remains unclear. In our study, we quantified contributions from basic visual functions, basic oculomotor functions, and reading-specific eye movements to the binocular advantage in Chinese reading speed, using six tasks and 32 metrics. Consistent with prior research, we confirmed a binocular advantage in Chinese text reading, with binocular reading being approximately 4% faster than monocular reading. Interestingly, although basic visual and oculomotor functions themselves exhibited binocular advantages, they did not account for the observed binocular advantage in reading among individuals with normal vision. This finding is particularly noteworthy because it provides an important normative reference for individuals with impaired vision, in whom basic visual and oculomotor functions may serve as critical explanatory factors for reading performance. In contrast, the concurrent reduction of three reading-specific eye movement metrics-fixation count, average fixation duration, and progressive saccade count-under binocular conditions well explained the binocular advantage in reading, despite these metrics not demonstrating a binocular advantage in isolation. Our results suggest that efficient parafoveal preprocessing and faster neural processing in binocular vision might play critical roles in binocular advantage in reading for individuals with normal vision.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 7","pages":"6"},"PeriodicalIF":2.0,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12169479/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}