{"title":"How does contextual information affect aesthetic appreciation and gaze behavior in figurative and abstract artwork?","authors":"Soazig Casteau, Daniel T Smith","doi":"10.1167/jov.24.12.8","DOIUrl":"10.1167/jov.24.12.8","url":null,"abstract":"<p><p>Numerous studies have investigated how providing contextual information with artwork influences gaze behavior, yet the evidence that contextually triggered changes in oculomotor behavior when exploring artworks may be linked to changes in aesthetic experience remains mixed. The aim of this study was to investigate how three levels of contextual information influenced people's aesthetic appreciation and visual exploration of both abstract and figurative art. Participants were presented with an artwork and one of three contextual information levels: a title, title plus information on the aesthetic design of the piece, or title plus information about the semantic meaning of the piece. We measured participants' liking, interest, and understanding of artworks and recorded exploration duration, fixation count, and fixation duration on regions of interest for each piece. Contextual information produced greater aesthetic appreciation and more visual exploration in abstract artworks. In contrast, figurative artworks were highly dependent on liking preferences and less affected by contextual information. Our results suggest that the effect of contextual information on aesthetic ratings arises from an elaboration effect, such that the viewer's aesthetic experience is enhanced by additional information, but only when the meaning of an artwork is not obvious.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 12","pages":"8"},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11552055/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142607280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions.","authors":"Corey S Shayman, Maggie K McCracken, Hunter C Finney, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr","doi":"10.1167/jov.24.11.7","DOIUrl":"10.1167/jov.24.11.7","url":null,"abstract":"<p><p>Auditory landmarks can contribute to spatial updating during navigation with vision. Whereas large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether or not individuals optimally combine auditory cues with visual cues to decrease the amount of perceptual uncertainty, or variability, has not been well-documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with either visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict where auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing compared with the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"7"},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11469273/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The visual experience dataset: Over 200 recorded hours of integrated eye movement, odometry, and egocentric video.","authors":"Michelle R Greene, Benjamin J Balas, Mark D Lescroart, Paul R MacNeilage, Jennifer A Hart, Kamran Binaee, Peter A Hausamann, Ronald Mezile, Bharath Shankar, Christian B Sinnott, Kaylie Capurro, Savannah Halow, Hunter Howe, Mariam Josyula, Annie Li, Abraham Mieses, Amina Mohamed, Ilya Nudnou, Ezra Parkhill, Peter Riley, Brett Schmidt, Matthew W Shinkle, Wentao Si, Brian Szekely, Joaquin M Torres, Eliana Weissmann","doi":"10.1167/jov.24.11.6","DOIUrl":"10.1167/jov.24.11.6","url":null,"abstract":"<p><p>We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze-tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to use and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"6"},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466363/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color-binding errors induced by modulating effects of the preceding stimulus on onset rivalry.","authors":"Satoru Abe, Eiji Kimura","doi":"10.1167/jov.24.11.10","DOIUrl":"10.1167/jov.24.11.10","url":null,"abstract":"<p><p>Onset rivalry can be modulated by a preceding stimulus with features similar to rivalrous test stimuli. In this study, we used this modulating effect to investigate the integration of color and orientation during onset rivalry using equiluminant chromatic gratings. Specifically, we explored whether this modulating effect leads to a decoupling of color and orientation in chromatic gratings, resulting in a percept distinct from either of the rivalrous gratings. The results demonstrated that color-binding errors can be observed in a form where rivalrous green-gray clockwise and red-gray counterclockwise gratings yield the percept of a bichromatic, red-green grating with either clockwise or counterclockwise orientation. These errors were observed under a brief test duration (30 ms), with both monocular and binocular presentations of the preceding stimulus. The specific color and orientation combination of the preceding stimulus was not critical for inducing color-binding errors, provided it was composed of the test color and orientation. We also found a notable covariant relationship between the perception of color-binding errors and exclusive dominance, where the perceived orientation in color-binding errors generally matched that in exclusive dominance. This finding suggests that the mechanisms underlying color-binding errors may be related to, or partially overlap with, those determining exclusive dominance. These errors can be explained by the decoupling of color and orientation in the representation of the suppressed grating, with the color binding to the dominant grating, resulting in an erroneously perceived bichromatic grating.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"10"},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11472883/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142401800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Microsaccadic suppression of peripheral perceptual detection performance as a function of foveated visual image appearance.","authors":"Julia Greilich, Matthias P Baumann, Ziad M Hafed","doi":"10.1167/jov.24.11.3","DOIUrl":"10.1167/jov.24.11.3","url":null,"abstract":"<p><p>Microsaccades are known to be associated with a deficit in perceptual detection performance for brief probe flashes presented in their temporal vicinity. However, it is still not clear how such a deficit might depend on the visual environment across which microsaccades are generated. Here, and motivated by studies demonstrating an interaction between visual background image appearance and perceptual suppression strength associated with large saccades, we probed peripheral perceptual detection performance of human subjects while they generated microsaccades over three different visual backgrounds. Subjects fixated near the center of a low spatial frequency grating, a high spatial frequency grating, or a small white fixation spot over an otherwise gray background. When a computer process detected a microsaccade, it presented a brief peripheral probe flash at one of four locations (over a uniform gray background) and at different times. After collecting full psychometric curves, we found that both perceptual detection thresholds and slopes of psychometric curves were impaired for peripheral flashes in the immediate temporal vicinity of microsaccades, and they recovered with later flash times. Importantly, the threshold elevations, but not the psychometric slope reductions, were stronger for the white fixation spot than for either of the two gratings. Thus, like with larger saccades, microsaccadic suppression strength can show a certain degree of image dependence. However, unlike with larger saccades, stronger microsaccadic suppression did not occur with low spatial frequency textures. This observation might reflect the different spatiotemporal retinal transients associated with the small microsaccades in our study versus larger saccades.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11457924/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142373369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deconstructing the frame effect.","authors":"Mohammad Shams, Peter J Kohler, Patrick Cavanagh","doi":"10.1167/jov.24.11.8","DOIUrl":"10.1167/jov.24.11.8","url":null,"abstract":"<p><p>The perception of an object's location is profoundly influenced by the surrounding dynamics. This is dramatically demonstrated by the frame effect, where a moving frame induces substantial shifts in the perceived location of objects that flash within it. In this study, we examined the elements contributing to the large magnitude of this effect. Across three experiments, we manipulated the number of probes, the dynamics of the frame, and the spatiotemporal relationships between probes and the frame. We found that the presence of multiple probes amplified the position shift, whereas the accumulation of the frame effect over repeated motion cycles was minimal. Notably, an oscillating frame generated more pronounced effects compared to a unidirectional moving frame. Furthermore, the spatiotemporal distance between the frame and the probe played a pivotal role, with larger shifts observed near the leading edge of the frame. Interestingly, although larger frames produced stronger position shifts, the maximum shift occurred almost at the same distance relative to the frame's center across all tested sizes. Our findings suggest that the number of probes, frame size, relative probe-frame distance, and frame dynamics collectively contribute to the magnitude of the position shift.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"8"},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11472888/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142401801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implied occlusion and subset underestimation contribute to the weak-outnumber-strong numerosity illusion.","authors":"Eliana G Dellinger, Katelyn M Becker, Frank H Durgin","doi":"10.1167/jov.24.11.14","DOIUrl":"10.1167/jov.24.11.14","url":null,"abstract":"<p><p>Four experimental studies are reported using a total of 712 participants to investigate the basis of a recently reported numerosity illusion called \"weak-outnumber-strong\" (WOS). In the weak-outnumber-strong illusion, when equal numbers of white and gray dots (e.g., 50 of each) are intermixed against a darker gray background, the gray dots seem much more numerous than the white. Two principles seem to be supported by these new results: 1) Subsets of mixtures are generally underestimated; thus, in mixtures of red and green dots, both sets are underestimated (using a matching task) just as the white dots are in the weak-outnumber-strong illusion, but 2) the gray dots seem to be filled in as if partially occluded by the brighter white dots. This second principle is supported by manipulations of depth perception both by pictorial cues (partial occlusion) and by binocular cues (stereopsis), such that the illusion is abolished when the gray dots are depicted as closer than the white dots, but remains strong when they are depicted as lying behind the white dots. Finally, an online investigation of a prior false-floor hypothesis concerning the effect suggests that manipulations of relative contrast may affect the segmentation process, which produces the visual bias known as subset underestimation.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"14"},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11498648/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Serial dependencies for externally and self-generated stimuli.","authors":"Clara Fritz, Antonella Pomè, Eckart Zimmermann","doi":"10.1167/jov.24.11.1","DOIUrl":"10.1167/jov.24.11.1","url":null,"abstract":"<p><p>Our senses are constantly exposed to external stimulation. Part of the sensory stimulation is produced by our own movement, like visual motion on the retina or tactile sensations from touch. Sensations caused by our movements appear attenuated. The interpretation of current stimuli is influenced by previous experiences, known as serial dependencies. Here we investigated how sensory attenuation and serial dependencies interact. In Experiment 1, we showed that temporal predictability causes sensory attenuation. In Experiment 2, we isolated temporal predictability in a visuospatial localization task. Attenuated stimuli are influenced by serial dependencies. However, the magnitude of the serial dependence effects varies, with greater effects when the certainty of the previous trial is equal to or greater than the current one. Experiment 3 examined sensory attenuation's influence on serial dependencies. Participants localized a briefly flashed stimulus after pressing a button (self-generated) or without pressing a button (externally generated). Stronger serial dependencies occurred in self-generated trials compared to externally generated ones when presented alternately but not when presented in blocks. We conclude that the relative uncertainty in stimulation between trials determines serial dependency strengths.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"1"},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11451828/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142367233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble percepts of colored targets among distractors are influenced by hue similarity, not categorical identity.","authors":"Lari S Virtanen, Toni P Saarela, Maria Olkkonen","doi":"10.1167/jov.24.11.12","DOIUrl":"10.1167/jov.24.11.12","url":null,"abstract":"<p><p>Color can be used to group similar elements, and ensemble percepts of color can be formed for such groups. In real-life settings, however, elements of similar color are often spatially interspersed among other elements and seen against a background. Forming an ensemble percept of these elements would require the segmentation of the correct color signals for integration. Can the human visual system do this? We examined whether observers can extract the ensemble mean hue from a target hue distribution among distractors and whether a color category boundary between target and distractor hues facilitates ensemble hue formation. Observers were able to selectively judge the target ensemble mean hue, but the presence of distractor hues added noise to the ensemble estimates and caused perceptual biases. The more similar the distractor hues were to the target hues, the noisier the estimates became, possibly reflecting incomplete or inaccurate segmentation of the two hue ensembles. Asymmetries between nominally equidistant distractors and substantial individual variability, however, point to additional factors beyond simple mixing of target and distractor distributions. Finally, we found no evidence for categorical facilitation in selective ensemble hue formation.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"12"},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11498646/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The dichoptic contrast ordering test: A method for measuring the depth of binocular imbalance.","authors":"Alex S Baldwin, Marie-Céline Lorenzini, Annabel Wing-Yan Fan, Robert F Hess, Alexandre Reynaud","doi":"10.1167/jov.24.11.2","DOIUrl":"10.1167/jov.24.11.2","url":null,"abstract":"<p><p>In binocular vision, the relative strength of the input from the two eyes can have significant functional impact. These inputs are typically balanced; however, in some conditions (e.g., amblyopia), one eye will dominate over the other. To quantify imbalances in binocular vision, we have developed the Dichoptic Contrast Ordering Test (DiCOT). Implemented on a tablet device, the program uses rankings of perceived contrast (of dichoptically presented stimuli) to find a scaling factor that balances the two eyes. We measured how physical interventions (applied to one eye) affect the DiCOT measurements, including neutral density (ND) filters, Bangerter filters, and optical blur introduced by a +3-diopter (D) lens. The DiCOT results were compared to those from the Dichoptic Letter Test (DLT). Both the DiCOT and the DLT showed excellent test-retest reliability; however, the magnitude of the imbalances introduced by the interventions was greater in the DLT. To find consistency between the methods, rescaling the DiCOT results from individual conditions gave good results. However, the adjustments required for the +3-D lens condition were quite different from those for the ND and Bangerter filters. Our results indicate that the DiCOT and DLT measures partially separate aspects of binocular imbalance. This supports the simultaneous use of both measures in future studies.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"2"},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11460568/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142367155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}