{"title":"The Impact of Adaptation Time in High Dynamic Range Luminance Transitions","authors":"Jake Zuena, Jaclyn Pytlarz","doi":"10.2352/j.percept.imaging.2024.7.000401","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2024.7.000401","url":null,"abstract":". Modern production and distribution workflows have allowed for high dynamic range (HDR) imagery to become widespread. It has made a positive impact in the creative industry and improved image quality on consumer devices. Akin to the dynamics of loudness in audio, it is predicted that the increased luminance range allowed by HDR ecosystems could introduce unintended, high-magnitude changes. These luminance changes could occur at program transitions, advertisement insertions, and channel change operations. In this article, we present findings from a psychophysical experiment conducted to evaluate three components of HDR luminance changes: the magnitude of the change, the direction of the change (darker or brighter), and the adaptation time. Results confirm that all three components exert significant influence. We find that increasing either the magnitude of the luminance or the adaptation time results in more discomfort at the unintended transition. We find that transitioning from brighter to darker stimuli has a non-linear relationship with adaptation time, falling off steeply with very short durations.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"688 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140273230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Coding in Human Vision as a Useful Bias in Computer Vision and Machine Learning","authors":"Philipp Grüning, Erhardt Barth","doi":"10.2352/j.percept.imaging.2023.6.000402","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2023.6.000402","url":null,"abstract":"Interdisciplinary research in human vision has greatly contributed to the current state-of-the-art in computer vision and machine learning starting with low-level topics such as image compression and image quality assessment up to complex neural networks for object recognition. Representations similar to those in the primary visual cortex are frequently employed, e.g., linear filters in image compression and deep neural networks. Here, we first review particular nonlinear visual representations that can be used to better understand human vision and provide efficient representations for computer vision including deep neural networks. We then focus on i2D representations that are related to end-stopped neurons. The resulting E-nets are deep convolutional networks, which outperform some state-of-the-art deep networks. Finally, we show that the performance of E-nets can be further improved by using genetic algorithms to optimize the architecture of the network.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135346858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pictures: Crafting and Beholding","authors":"J. Koenderink, Andrea van Doorn","doi":"10.2352/j.percept.imaging.2023.6.000401","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2023.6.000401","url":null,"abstract":". The psychogenesis of visual awareness is an autonomous process in the sense that you do not “do” it. However, you have some control due to your acting in the world. We share this process with many animals. Pictorial awareness appears to be truly human. Here situational awareness splits into an “everyday vision” and a “pictorial” mode. Here we focus mainly on spatial aspects of pictorial art. You have no control whatever over the picture’s structure. The pictorial awareness is pure imagery, constrained by the (physical) structure of the picture. Crafting pictures and beholding pictures are distinct, but closely related, acts. We present an account from experimental and formal phenomenology. It results in a generic model that accounts for the bulk of formal (rare) and informal (common) observations.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68835405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transparency and Translucency in Visual Appearance of Light-Permeable Materials","authors":"Davit Gigilashvili, Tawsin Uddin Ahmed","doi":"10.2352/j.percept.imaging.2022.5.000409","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2022.5.000409","url":null,"abstract":"","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"20 1","pages":"000409-1"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81438272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Natural Scene Statistics and Distance Perception: Ground Surface and Non-ground Objects","authors":"Xavier Morin-Duchesne, M. Langer","doi":"10.2352/j.percept.imaging.2022.5.000503","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2022.5.000503","url":null,"abstract":"","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"1 1","pages":"1-12"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45369814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From the Special Issue Guest Editors","authors":"Lora T. Likova, Fang Jiang, N. Stiles, A. Tanguay","doi":"10.2352/j.percept.imaging.2022.5.000101","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2022.5.000101","url":null,"abstract":"","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"19 1","pages":"000101-1"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84609775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Impact of Optical and Geometrical Thickness on Perceived Translucency Differences","authors":"Davit Gigilashvili, P. Urban, Jean-Baptiste Thomas, Marius Pedersen, J. Hardeberg","doi":"10.2352/j.percept.imaging.2022.5.000501","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2022.5.000501","url":null,"abstract":"","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"103 1","pages":"000501-1"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76700253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception and Appreciation of Tactile Objects: The Role of Visual Experience and Texture Parameters.","authors":"A K M Rezaul Karim, Sanchary Prativa, Lora T Likova","doi":"10.2352/J.Percept.Imaging.2022.5.000405","DOIUrl":"https://doi.org/10.2352/J.Percept.Imaging.2022.5.000405","url":null,"abstract":"<p><p>This exploratory study was designed to examine the effects of visual experience and specific texture parameters on both discriminative and aesthetic aspects of tactile perception. To this end, the authors conducted two experiments using a novel behavioral (ranking) approach in blind and (blindfolded) sighted individuals. Groups of congenitally blind, late blind, and (blindfolded) sighted participants made relative stimulus preference, aesthetic appreciation, and smoothness or softness judgment of two-dimensional (2D) or three-dimensional (3D) tactile surfaces through active touch. In both experiments, the aesthetic judgment was assessed on three affective dimensions, Relaxation, Hedonics, and Arousal, hypothesized to underlie visual aesthetics in a prior study. Results demonstrated that none of these behavioral judgments significantly varied as a function of visual experience in either experiment. However, irrespective of visual experience, significant differences were identified in all these behavioral judgments across the physical levels of smoothness or softness. In general, 2D smoothness or 3D softness discrimination was proportional to the level of physical smoothness or softness. Second, the smoother or softer tactile stimuli were preferred over the rougher or harder tactile stimuli. Third, the 3D affective structure of visual aesthetics appeared to be amodal and applicable to tactile aesthetics. 
However, analysis of the aesthetic profile across the affective dimensions revealed some striking differences between the forms of appreciation of smoothness and softness, uncovering unanticipated substructures in the nascent field of tactile aesthetics. While the physically softer 3D stimuli received higher ranks on all three affective dimensions, the physically smoother 2D stimuli received higher ranks on the Relaxation and Hedonics but lower ranks on the Arousal dimension. Moreover, the Relaxation and Hedonics ranks accurately overlapped with one another across all the physical levels of softness/hardness, but not across the physical levels of smoothness/roughness. These findings suggest that physical texture parameters not only affect basic tactile discrimination but differentially mediate tactile preferences, and aesthetic appreciation. The theoretical and practical implications of these novel findings are discussed.</p>","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"5 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10019098/pdf/nihms-1789353.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9508763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Ventriloquist Effect is not Consistently Affected by Stimulus Realism†","authors":"Thirsa Huisman, T. Dau, Tobias Piechowiak, Ewen N. MacDonald","doi":"10.2352/j.percept.imaging.2021.4.2.020404","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.2.020404","url":null,"abstract":"Despite more than 60 years of research, it has remained uncertain if and how realism affects the ventriloquist effect. Here, a sound localization experiment was run using spatially disparate audio-visual stimuli. The visual stimuli were presented using virtual reality, allowing for easy manipulation of the degree of realism of the stimuli. Starting from stimuli commonly used in ventriloquist experiments, i.e., a light flash and noise burst, a new factor was added or changed in each condition to investigate the effect of movement and realism without confounding the effects of an increased temporal correlation of the audio-visual stimuli. First, a distractor task was introduced to ensure that participants fixated their eye gaze during the experiment. Next, movement was added to the visual stimuli while maintaining a similar temporal correlation between the stimuli. Finally, by changing the stimuli from the flash and noise stimuli to the visuals of a bouncing ball that made a matching impact sound, the effect of realism was assessed. 
No evidence for an effect of realism and movement of the stimuli was found, suggesting that, in simple scenarios, the ventriloquist effect might not be affected by stimulus realism.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"56 1","pages":"000404-1"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84538439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Controllable Medical Image Generation via GAN.","authors":"Zhihang Ren, Stella X Yu, David Whitney","doi":"10.2352/j.percept.imaging.2022.5.000502","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2022.5.000502","url":null,"abstract":"<p><p>Medical image data is critically important for a range of disciplines, including medical image perception research, clinician training programs, and computer vision algorithms, among many other applications. Authentic medical image data, unfortunately, is relatively scarce for many of these uses. Because of this, researchers often collect their own data in nearby hospitals, which limits the generalizabilty of the data and findings. Moreover, even when larger datasets become available, they are of limited use because of the necessary data processing procedures such as de-identification, labeling, and categorizing, which requires significant time and effort. Thus, in some applications, including behavioral experiments on medical image perception, researchers have used naive artificial medical images (e.g., shapes or textures that are not realistic). These artificial medical images are easy to generate and manipulate, but the lack of authenticity inevitably raises questions about the applicability of the research to clinical practice. Recently, with the great progress in Generative Adversarial Networks (GAN), authentic images can be generated with high quality. In this paper, we propose to use GAN to generate authentic medical images for medical imaging studies. We also adopt a controllable method to manipulate the generated image attributes such that these images can satisfy any arbitrary experimenter goals, tasks, or stimulus settings. We have tested the proposed method on various medical image modalities, including mammogram, MRI, CT, and skin cancer images. The generated authentic medical images verify the success of the proposed method. 
The model and generated images could be employed in any medical image perception research.</p>","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"5 ","pages":"0005021-50215"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10448967/pdf/nihms-1871254.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10104475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}