{"title":"When knowing the activity is not enough to predict gaze.","authors":"Andrea Ghiani, Daan Amelink, Eli Brenner, Ignace T C Hooge, Roy S Hessels","doi":"10.1167/jov.24.7.6","DOIUrl":"10.1167/jov.24.7.6","url":null,"abstract":"<p><p>It is reasonable to assume that where people look in the world is largely determined by what they are doing. The reasoning is that the activity determines where it is useful to look at each moment in time. Assuming that it is vital to accurately judge the positions of the steps when navigating a staircase, it is surprising that people differ a lot in the extent to which they look at the steps. Apparently, some people consider the accuracy of peripheral vision, predictability of the step size, and feeling the edges of the steps with their feet to be good enough. If so, occluding part of the view of the staircase and making it more important to place one's feet gently might make it more beneficial to look directly at the steps before stepping onto them, so that people will more consistently look at many steps. We tested this idea by asking people to walk on staircases, either with or without a tray with two cups of water on it. When carrying the tray, people walked more slowly, but they shifted their gaze across steps in much the same way as they did when walking without the tray. They did not look at more steps. There was a clear positive correlation between the fraction of steps that people looked at when walking with and without the tray. Thus, the variability in the extent to which people look at the steps persists when one makes walking on the staircase more challenging.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"6"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11238878/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Illusory light drives pupil responses in primates.","authors":"Jean-Baptiste Durand, Sarah Marchand, Ilyas Nasres, Bruno Laeng, Vanessa De Castro","doi":"10.1167/jov.24.7.14","DOIUrl":"10.1167/jov.24.7.14","url":null,"abstract":"<p><p>In humans, the eye pupils respond to both physical light sensed by the retina and mental representations of light produced by the brain. Notably, our pupils constrict when a visual stimulus is illusorily perceived as brighter, even if retinal illumination is constant. However, it remains unclear whether such perceptual penetrability of pupil responses is an epiphenomenon unique to humans or whether it represents an adaptive mechanism shared with other animals to anticipate variations in retinal illumination between successive eye fixations. To address this issue, we measured the pupil responses of both humans and macaque monkeys exposed to three chromatic versions (cyan, magenta, and yellow) of the Asahi brightness illusion. We found that stimuli illusorily perceived as brighter or darker trigger differential pupil responses that are very similar in macaques and human participants. Additionally, we show that this phenomenon exhibits an analogous cyan bias in both primate species. Beyond establishing the macaque monkey as a relevant model for studying the perceptual penetrability of pupil responses, our results suggest that this phenomenon is tuned to ecological conditions because exposure to a \"bright cyan-bluish sky\" may be associated with increased risk of dazzle and retinal damage.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"14"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11271809/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"Warm,\" \"cool,\" and the colors.","authors":"Jan J Koenderink, Andrea J van Doorn, Doris I Braun","doi":"10.1167/jov.24.7.5","DOIUrl":"10.1167/jov.24.7.5","url":null,"abstract":"<p><p>Participants judged affective cooler/warmer gradients around a 12-step color circle. Each pair of adjacent colors was presented twice (left-right reversed), all in random order. Participants readily performed the task, but their settings do not correlate very well. Individual responses were compared with a small number of canonical templates. For a little less than one-half of the participants, responses or judgements correlate with such a template. We find a warm pole (in the orange environment) and a cool pole (in the teal environment) connected by two tracks that tend to have one or more gaps or weak, even inverted links. We conclude that the common artistic cool-warm polarity is only weakly reflected in the responses of our observers. When it is, the observers apparently use categorical warm and cool poles and may be uncertain in relating adjacent hue steps along the 12-step color circle.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"5"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11235144/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141555745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Instruction alters the influence of allocentric landmarks in a reach task.","authors":"Lina Musa, Xiaogang Yan, J Douglas Crawford","doi":"10.1167/jov.24.7.17","DOIUrl":"10.1167/jov.24.7.17","url":null,"abstract":"<p><p>Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay the landmark then reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate and precise and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, where performance was worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"17"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11290568/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141789580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TORONTO: A trial-oriented multidimensional psychometric testing algorithm.","authors":"Runjie Bill Shi, Moshe Eizenman, Leo Yan Li-Han, Willy Wong","doi":"10.1167/jov.24.7.2","DOIUrl":"10.1167/jov.24.7.2","url":null,"abstract":"<p><p>Bayesian adaptive methods for sensory threshold determination were conceived originally to track a single threshold. When applied to the testing of vision, they do not exploit the spatial patterns that underlie thresholds at different locations in the visual field. Exploiting these patterns has been recognized as key to further improving visual field test efficiency. We present a new approach (TORONTO) that outperforms other existing methods in terms of speed and accuracy. TORONTO generalizes the QUEST/ZEST algorithm to estimate multiple thresholds simultaneously. After each trial, without waiting for a fully determined threshold, the trial-oriented approach updates not only the location currently tested but also all other locations, based on patterns in a reference data set. Since the availability of reference data can be limited, techniques are developed to overcome this limitation. TORONTO was evaluated using computer-simulated visual field tests: In the reliable condition (false positive [FP] = false negative [FN] = 3%), the median termination and root mean square error (RMSE) of TORONTO were 153 trials and 2.0 dB, twice as fast as ZEST with equal accuracy. In the FP = FN = 15% condition, TORONTO terminated in 151 trials and was 2.2 times faster than ZEST with better RMSE (2.6 vs. 3.7 dB). In the FP = FN = 30% condition, TORONTO achieved 4.2 dB RMSE in 148 trials, while all other techniques had > 6.5 dB RMSE and terminated much more slowly. In conclusion, TORONTO is a fast and accurate algorithm for determining multiple thresholds under a wide range of reliability and subject conditions.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"2"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11221609/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
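For context on the record above: the single-location QUEST/ZEST procedure that TORONTO generalizes maintains a probability density over candidate thresholds, tests at the posterior mean, and multiplies the density by the response likelihood after each trial. The sketch below is an illustrative single-location ZEST, not the authors' TORONTO; the grid range, logistic psychometric function, slope, and error-rate values are assumptions for demonstration only.

```python
import numpy as np

def zest(true_thr, n_trials=30, fp=0.03, fn=0.03, sigma=1.5, seed=1):
    """Minimal ZEST-style threshold tracker for one visual field location.

    Stimulus and threshold are in dB of attenuation: higher stimulus dB
    means a dimmer target, so the probability of "seen" falls as the
    stimulus level rises past the threshold.
    """
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 40.0, 401)                 # candidate thresholds (dB)
    pdf = np.full(grid.size, 1.0 / grid.size)          # flat prior

    def p_seen(stim, thr):
        # Logistic psychometric function with false-positive (fp) and
        # false-negative (fn) response-error rates.
        return fp + (1.0 - fp - fn) / (1.0 + np.exp((stim - thr) / sigma))

    for _ in range(n_trials):
        stim = float(np.sum(grid * pdf))               # test at the posterior mean
        seen = rng.random() < p_seen(stim, true_thr)   # simulated observer response
        # Bayesian update: likelihood of the observed response under each
        # candidate threshold, then renormalize.
        like = p_seen(stim, grid) if seen else 1.0 - p_seen(stim, grid)
        pdf = pdf * like
        pdf /= pdf.sum()

    return float(np.sum(grid * pdf))                   # posterior-mean estimate
```

TORONTO's trial-oriented extension would, after each such update, also shift the densities at untested locations according to covariation patterns in a reference data set; that step is omitted here.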
{"title":"Bi-exponential description for different forms of refractive development.","authors":"Arezoo Farzanfar, Jos J Rozema","doi":"10.1167/jov.24.7.3","DOIUrl":"10.1167/jov.24.7.3","url":null,"abstract":"<p><p>It was recently established that the axial power, the refractive power required by the eye for a sharp retinal image in an eye of a certain axial length, and the total refractive power of the eye may both be described by a bi-exponential function as a function of age (Rozema, 2023). Inspired by this result, this work explores whether these bi-exponential functions are able to simulate the various known courses of refractive development described in the literature, such as instant emmetropization, persistent hypermetropia, developing hypermetropia, myopia, instant homeostasis, modulated development, or emmetropizing hypermetropes. Moreover, the equations can be adjusted to match the refractive development of school-age myopia and pseudophakia up to the age of 20 years. All of these courses closely resemble those reported in the previous literature while simultaneously providing estimates for the underlying changes in axial and whole eye power.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11232897/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141535789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
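The bi-exponential description in the record above can be sketched numerically. The functional form follows the abstract (two exponential components decaying toward an adult asymptote); every parameter value below is invented for illustration and is not taken from Rozema (2023) or this paper.

```python
import numpy as np

def biexp(age, asymptote, a1, tau1, a2, tau2):
    """Bi-exponential time course: a fast component (tau1) and a slow
    component (tau2) decaying toward an adult asymptote. All units and
    parameter values here are hypothetical."""
    age = np.asarray(age, dtype=float)
    return asymptote + a1 * np.exp(-age / tau1) + a2 * np.exp(-age / tau2)

# Hypothetical axial power (power needed for a sharp retinal image) and
# whole-eye power, each following its own bi-exponential course; their
# difference sketches a refractive development curve that flattens out
# with age, as in emmetropization.
ages = np.linspace(0.0, 20.0, 81)
axial = biexp(ages, 60.0, 18.0, 1.2, 4.0, 8.0)
whole = biexp(ages, 60.0, 20.0, 1.0, 4.5, 9.0)
refraction = whole - axial
```

Varying the amplitudes and time constants of the two components is what lets a single functional form reproduce the different courses listed in the abstract (instant emmetropization, persistent hypermetropia, school-age myopia, and so on).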
{"title":"(The limits of) eye-tracking with iPads.","authors":"Aryaman Taore, Michelle Tiang, Steven C Dakin","doi":"10.1167/jov.24.7.1","DOIUrl":"10.1167/jov.24.7.1","url":null,"abstract":"<p><p>Applications for eye-tracking, particularly in the clinic, are limited by a reliance on dedicated hardware. Here we compare eye-tracking implemented on an Apple iPad Pro 11\" (third generation), using the device's infrared head-tracking and front-facing camera, with a Tobii 4c infrared eye-tracker. We estimated gaze location using both systems while 28 observers performed a variety of tasks. For estimating fixation, gaze position estimates from the iPad were less accurate and precise than the Tobii (mean absolute error of 3.2° ± 2.0° compared with 0.75° ± 0.43°), but fixation stability estimates were correlated across devices (r = 0.44, p < 0.05). For tasks eliciting saccades >1.5°, estimated saccade counts (r = 0.4-0.73, all p < 0.05) were moderately correlated across devices. For tasks eliciting saccades >8° we observed moderate correlations in estimated saccade speed and amplitude (r = 0.4-0.53, all p < 0.05). We did, however, note considerable variation in the vertical component of estimated smooth pursuit speed from the iPad and a catastrophic failure of tracking on the iPad in 5% to 20% of observers (depending on the test). Our findings sound a note of caution to researchers seeking to use iPads for eye-tracking and emphasize the need to properly examine their eye-tracking data to remove artifacts and outliers.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"1"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11223623/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual working memory models of delayed estimation do not generalize to whole-report tasks.","authors":"Benjamin Cuthbert, Dominic Standage, Martin Paré, Gunnar Blohm","doi":"10.1167/jov.24.7.16","DOIUrl":"10.1167/jov.24.7.16","url":null,"abstract":"<p><p>Whole-report working memory tasks provide a measure of recall for all stimuli in a trial and afford single-trial analyses that are not possible with single-report delayed estimation tasks. However, most whole-report studies assume that trial stimuli are encoded and reported independently, and they do not consider the relationships between stimuli presented and reported within the same trial. Here, we present the results of two independently conducted whole-report experiments. The first dataset was recorded by Adam, Vogel, and Awh (2017) and required participants to report color and orientation stimuli using a continuous response wheel. We recorded the second dataset, which required participants to report color stimuli using a set of discrete buttons. We found that participants often group their reports by color similarity, contradicting the assumption of independence implicit in most encoding models of working memory. Next, we showed that this behavior was consistent across participants and experiments when reporting color but not orientation, two circular variables often assumed to be equivalent. Finally, we implemented an alternative to independent encoding where stimuli are encoded as a hierarchical Bayesian ensemble and found that this model predicts biases that are not present in either dataset. Our results suggest that assumptions made by both independent and hierarchical ensemble encoding models, which were developed in the context of single-report delayed estimation tasks, do not hold for the whole-report task. This failure to generalize highlights the need to consider variations in task structure when inferring fundamental principles of visual working memory.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"16"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11282892/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141762163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Where was this thing again? Evaluating methods to indicate remembered object positions in virtual reality.","authors":"Immo Schuetz, Bianca R Baltaretu, Katja Fiehler","doi":"10.1167/jov.24.7.10","DOIUrl":"10.1167/jov.24.7.10","url":null,"abstract":"<p><p>A current focus in sensorimotor research is the study of human perception and action in increasingly naturalistic tasks and visual environments. This is further enabled by the recent commercial success of virtual reality (VR) technology, which allows for highly realistic but well-controlled three-dimensional (3D) scenes. VR enables a multitude of different ways to interact with virtual objects, but only rarely are such interaction techniques evaluated and compared before being selected for a sensorimotor experiment. Here, we compare different response techniques for a memory-guided action task, in which participants indicated the position of a previously seen 3D object in a VR scene: pointing with a virtual laser pointer of short or unlimited length, or placing either the target object itself or a generic reference cube. Response techniques differed in the availability of 3D object cues and in the requirement to physically walk to the remembered object position. Object placement was the most accurate but slowest due to repeated repositioning. When placing objects, participants tended to match the original object's orientation. In contrast, the laser pointer was fastest but least accurate, with the short pointer showing a good speed-accuracy compromise. Our findings can help researchers in selecting appropriate methods when studying naturalistic visuomotor behavior in virtual environments.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"10"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11246095/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141591885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wide-field optical eye models for emmetropic and myopic eyes.","authors":"Gareth D Hastings, Pavan Tiruveedhula, Austin Roorda","doi":"10.1167/jov.24.7.9","DOIUrl":"10.1167/jov.24.7.9","url":null,"abstract":"<p><p>Ocular wavefront aberrations are used to describe retinal image formation in the study and modeling of foveal and peripheral visual functions and visual development. However, classical eye models generate aberration structures that generally do not resemble those of actual eyes, and simplifications such as rotationally symmetric and coaxial surfaces limit the usefulness of many modern eye models. Drawing on wide-field ocular wavefront aberrations measured previously by five laboratories, 28 emmetropic (-0.50 to +0.50 D) and 20 myopic (-1.50 to -4.50 D) individual optical eye models were reverse-engineered by optical design ray-tracing software. This involved an error function that manipulated 27 anatomical parameters, such as curvatures, asphericities, thicknesses, tilts, and translations (constrained within anatomical limits), to drive the output aberrations of each model to agree with the input (measured) aberrations. From those resultant anatomical parameters, three representative eye models were also defined: an ideal emmetropic eye with minimal aberrations (0.00 D), as well as a typical emmetropic eye (-0.02 D) and myopic eye (-2.75 D). The cohorts and individual models are presented and evaluated in terms of output aberrations and established population expectations, such as Seidel aberration theory and ocular chromatic aberrations. Presented applications of the models include the effect of dual focus contact lenses on peripheral optical quality, the comparison of ophthalmic correction modalities, and the projection of object space across the retina during accommodation.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 7","pages":"9"},"PeriodicalIF":2.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11246097/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141591886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}