{"title":"Perception and Appreciation of Tactile Objects: The Role of Visual Experience and Texture Parameters†","authors":"A. R. Karim, Sanchary Prativa, Lora T. Likova","doi":"10.2352/j.percept.imaging.2021.4.2.020405","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.2.020405","url":null,"abstract":"This exploratory study was designed to examine the effects of visual experience and specific texture parameters on both discriminative and aesthetic aspects of tactile perception. To this end, the authors conducted two experiments using a novel behavioral (ranking) approach in blind and (blindfolded) sighted individuals. Groups of congenitally blind, late blind, and (blindfolded) sighted participants made relative stimulus preference, aesthetic appreciation, and smoothness or softness judgment of two-dimensional (2D) or three-dimensional (3D) tactile surfaces through active touch. In both experiments, the aesthetic judgment was assessed on three affective dimensions, Relaxation, Hedonics, and Arousal, hypothesized to underlie visual aesthetics in a prior study. Results demonstrated that none of these behavioral judgments significantly varied as a function of visual experience in either experiment. However, irrespective of visual experience, significant differences were identified in all these behavioral judgments across the physical levels of smoothness or softness. In general, 2D smoothness or 3D softness discrimination was proportional to the level of physical smoothness or softness. Second, the smoother or softer tactile stimuli were preferred over the rougher or harder tactile stimuli. Third, the 3D affective structure of visual aesthetics appeared to be amodal and applicable to tactile aesthetics. However, analysis of the aesthetic profile across the affective dimensions revealed some striking differences between the forms of appreciation of smoothness and softness, uncovering unanticipated substructures in the nascent field of tactile aesthetics. While the physically softer 3D stimuli received higher ranks on all three affective dimensions, the physically smoother 2D stimuli received higher ranks on the Relaxation and Hedonics but lower ranks on the Arousal dimension. Moreover, the Relaxation and Hedonics ranks accurately overlapped with one another across all the physical levels of softness/hardness, but not across the physical levels of smoothness/roughness. These findings suggest that physical texture parameters not only affect basic tactile discrimination but differentially mediate tactile preferences, and aesthetic appreciation. The theoretical and practical implications of these novel findings are discussed.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47445894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Visual Aesthetics: The Role of Fractal-scaling Characteristics across the Senses†","authors":"Catherine Viengkham, B. Spehar","doi":"10.2352/j.percept.imaging.2021.4.3.030406","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.3.030406","url":null,"abstract":"The investigation of aesthetics has primarily been conducted within the visual domain. This is not a surprise, as aesthetics has largely been associated with the perception and appreciation of visual media, such as traditional artworks, photography, and architecture. However, one doesn’t need to look far to realize that aesthetics extends beyond the visual domain. Media such as film and music introduce a unique and equally rich temporally changing visual and auditory experience. Product design, ranging from furniture to clothing, strongly depends on pleasant tactile evaluations. Studies involving the perception of 1/f statistics in vision have been particularly consistent in demonstrating a preference for a 1/f structure resembling that of natural scenes, as well as systematic individual differences across a variety of visual objects. Interestingly, comparable findings have also been reached in the auditory and tactile domains. In this review, we discuss some of the current literature on the perception of 1/f statistics across the contexts of different sensory modalities.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"88 1","pages":"000406-1"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78340803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Introducing CatchU<sup>™</sup>: A Novel Multisensory Tool for Assessing Patients' Risk of Falling.","authors":"Jeannette R Mahoney, Claudene J George, Joe Verghese","doi":"10.2352/j.percept.imaging.2022.5.000407","DOIUrl":"10.2352/j.percept.imaging.2022.5.000407","url":null,"abstract":"<p><p>To date, only a few studies have investigated the clinical translational value of multisensory integration. Our previous research has linked the magnitude of visual-somatosensory integration (measured behaviorally using simple reaction time tasks) to important cognitive (attention) and motor (balance, gait, and falls) outcomes in healthy older adults. While multisensory integration effects have been measured across a wide array of populations using various sensory combinations and different neuroscience research approaches, multisensory integration tests have not been systematically implemented in clinical settings. We recently developed a step-by-step protocol for administering and calculating multisensory integration effects to facilitate innovative and novel translational research across diverse clinical populations and age-ranges. In recognizing that patients with severe medical conditions and/or mobility limitations often experience difficulty traveling to research facilities or joining time-demanding research protocols, we deemed it necessary for patients to be able to benefit from multisensory testing. Using an established protocol and methodology, we developed a multisensory falls-screening tool called CatchU <sup><b>™</b></sup> (an iPhone app) to quantify multisensory integration performance in clinical practice that is currently undergoing validation studies. Our goal is to facilitate the identification of patients who are at increased risk of falls and promote physician-initiated falls counseling during clinical visits (e.g., annual wellness, sick, or follow-up visits). This will thereby raise falls-awareness and foster physician efforts to alleviate disability, promote independence, and increase quality of life for our older adults. This conceptual overview highlights the potential of multisensory integration in predicting clinical outcomes from a research perspective, while also showcasing the practical application of a multisensory screening tool in routine clinical practice.</p>","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"5 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10010676/pdf/nihms-1833668.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9121492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From the Editors in Chief","authors":"B. Rogowitz, Thrasos N. Pappas","doi":"10.2352/j.percept.imaging.2021.4.1.010101","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.1.010101","url":null,"abstract":"","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"152 1","pages":"10101-1"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86225147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Camouflage for Moving Objects","authors":"E. Burg, M. Hogervorst, A. Toet","doi":"10.2352/j.percept.imaging.2021.4.2.020502","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.2.020502","url":null,"abstract":"Abstract Targets that are well camouflaged under static conditions are often easily detected as soon as they start moving. We investigated and evaluated ways to design camouflage that dynamically adapts to the background and conceals the target while taking the variation\u0000 in potential viewing directions into account. In a human observer experiment, recorded imagery was used to simulate moving (either walking or running) and static soldiers, equipped with different types of camouflage patterns and viewed from different directions. Participants were instructed\u0000 to detect the soldier and to make a rapid response as soon as they have identified the soldier. Mean target detection rate was compared between soldiers in standard (Netherlands) Woodland uniform, in static camouflage (adapted to the local background) and in dynamically adapting camouflage.\u0000 We investigated the effects of background type and variability on detection performance by varying the soldiers’ environment (such as bushland and urban). In general, detection was easier for dynamic soldiers compared to static soldiers, confirming that motion breaks camouflage. Interestingly,\u0000 we show that motion onset and not motion itself is an important feature for capturing attention. Furthermore, camouflage performance of the static adaptive pattern was generally much better than for the standard Woodland pattern. Also, camouflage performance was found to be dependent on the\u0000 background and the local structures around the soldier. Interestingly, our dynamic camouflage design outperformed a method which simply displays the ‘exact’ background on the camouflage suit (as if it was transparent), since it is better capable of taking the variability in viewing\u0000 directions into account. By combining new adaptive camouflage technologies with dynamic adaptive camouflage designs such as the one presented here, it may become feasible to prevent detection of moving targets in the (near) future.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"30 1","pages":"20502-1"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87482065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color Conversion in Deep Autoencoders","authors":"A. Akbarinia, Raquel Gil Rodríguez","doi":"10.2352/J.PERCEPT.IMAGING.2021.4.2.020401","DOIUrl":"https://doi.org/10.2352/J.PERCEPT.IMAGING.2021.4.2.020401","url":null,"abstract":"Studies of compensatory changes in visual functions in response to auditory loss have shown that enhancements tend to be restricted to the processing of specific visual features, such as motion in the periphery. Previous studies have also shown that deaf individuals can show greater face processing abilities in the central visual field. Enhancements in the processing of peripheral stimuli are thought to arise from a lack of auditory input and subsequent increase in the allocation of attentional resources to peripheral locations, while enhancements in face processing abilities are thought to be driven by experience with American sign language and not necessarily hearing loss. This combined with the fact that face processing abilities typically decline with eccentricity suggests that face processing enhancements may not extend to the periphery for deaf individuals. Using a face matching task, the authors examined whether deaf individuals’ enhanced ability to discriminate between faces extends to the peripheral visual field. Deaf participants were more accurate than hearing participants in discriminating faces presented both centrally and in the periphery. Their results support earlier findings that deaf individuals possess enhanced face discrimination abilities in the central visual field and further extend them by showing that these enhancements also occur in the periphery for more complex stimuli.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"4 1","pages":"20401-1"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68835375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artistic Style Meets Artificial Intelligence","authors":"Suk Kyoung Choi, S. DiPaola, Hannu Töyrylä","doi":"10.2352/j.percept.imaging.2021.4.3.030501","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.3.030501","url":null,"abstract":"Recent developments in neural network image processing motivate the question, how these technologies might better serve visual artists. Research goals to date have largely focused on either pastiche interpretations of what is framed as artistic “style” or seek to divulge heretofore unimaginable dimensions of algorithmic “latent space,” but have failed to address the process an artist might actually pursue, when engaged in the reflective act of developing an image from imagination and lived experience. The tools, in other words, are constituted in research demonstrations rather than as tools of creative expression. In this article, the authors explore the phenomenology of the creative environment afforded by artificially intelligent image transformation and generation, drawn from autoethnographic reviews of the authors’ individual approaches to artificial intelligence (AI) art. They offer a post-phenomenology of “neural media” such that visual artists may begin to work with AI technologies in ways that support naturalistic processes of thinking about and interacting with computationally mediated interactive creation.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"284 1","pages":"20501-1"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79459101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Study of Bomb Technician Threat Identification Performance on Degraded X-ray Images","authors":"J. Glover, Praful Gupta, N. Paulter, A. Bovik","doi":"10.2352/J.PERCEPT.IMAGING.2021.4.1.010502","DOIUrl":"https://doi.org/10.2352/J.PERCEPT.IMAGING.2021.4.1.010502","url":null,"abstract":"Abstract Portable X-ray imaging systems are routinely used by bomb squads throughout the world to image the contents of suspicious packages and explosive devices. The images are used by bomb technicians to determine whether or not packages contain explosive devices or device components. In events of positive detection, the images are also used to understand device design and to devise countermeasures. The quality of the images is considered to be of primary importance by users and manufacturers of these systems, since it affects the ability of the users to analyze the images and to detect potential threats. As such, there exist national standards that set minimum acceptable image-quality levels for the performance of these imaging systems. An implicit assumption is that better image quality leads to better user identification of components in explosive devices and, therefore, better informed plans to render them safe. However, there is no previously published experimental work investigating this.Toward advancing progress in this direction, the authors developed the new NIST-LIVE X-ray improvised explosive device (IED) image-quality database. The database consists of: a set of pristine X-ray images of IEDs and benign objects; a larger set of distorted images of varying quality of the same objects; ground-truth IED component labels for all images; and human task-performance results locating and identifying the IED components. More than 40 trained U.S. bomb technicians were recruited to generate the human task-performance data. They use the database to show that identification probabilities for IED components are strongly correlated with image quality. They also show how the results relate to the image-quality metrics described in the current U.S. national standard for these systems, and how their results can be used to inform the development of baseline performance requirements. They expect these results to directly affect future revisions of the standard.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"231 1","pages":"10502-1"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84060731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Psychophysical Study of Human Visual Perception of Flicker Artifacts in Automotive Digital Mirror Replacement Systems","authors":"Nicolai Behmann, Sousa Weddige, H. Blume","doi":"10.2352/J.PERCEPT.IMAGING.2021.4.1.010401","DOIUrl":"https://doi.org/10.2352/J.PERCEPT.IMAGING.2021.4.1.010401","url":null,"abstract":"Abstract Aliasing effects due to time-discrete capturing of amplitude-modulated light with a digital image sensor are perceived as flicker by humans. Especially when observing these artifacts in digital mirror replacement systems, they are annoying and can pose a risk. Therefore, ISO 16505 requires flicker-free reproduction for 90 % of people in these systems. Various psychophysical studies investigate the influence of large-area flickering of displays, environmental light, or flickering in television applications on perception and concentration. However, no detailed knowledge of subjective annoyance/irritation due to flicker from camera-monitor systems as a mirror replacement in vehicles exist so far, but the number of these systems is constantly increasing. This psychophysical study used a novel data set from real-world driving scenes and synthetic simulation with synthetic flicker. More than 25 test persons were asked to quantify the subjective annoyance level of different flicker frequencies, amplitudes, mean values, sizes, and positions. The results show that for digital mirror replacement systems, human subjective annoyance due to flicker is greatest in the 15 Hz range with increasing amplitude and magnitude. Additionally, the sensitivity to flicker artifacts increases with the duration of observation.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68835350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low-contrast Acuity Under Strong Luminance Dynamics and Potential Benefits of Divisive Display Augmented Reality","authors":"C. Hung, Chloe Callahan-Flintoft, P. Fedele, Kim F. Fluitt, Barry D. Vaughan, Anthony J. Walker, Min Wei","doi":"10.2352/j.percept.imaging.2020.3.3.030501","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2020.3.3.030501","url":null,"abstract":"Abstract Understanding and predicting outdoor visual performance in augmented reality (AR) requires characterizing and modeling vision under strong luminance dynamics, including luminance differences of 10000-to-1 in a single image (high dynamic range, HDR). Classic models of vision, based on displays with 100-to-1 luminance contrast, have limited ability to generalize to HDR environments. An important question is whether low-contrast visibility, potentially useful for titrating saliency for AR applications, is resilient to saccade-induced strong luminance dynamics. The authors developed an HDR display system with up to 100,000-to-1 contrast and assessed how strong luminance dynamics affect low-contrast visual acuity. They show that, immediately following flashes of 25× or 100× luminance, visual acuity is unaffected at 90% letter Weber contrast and only minimally affected at lower letter contrasts (up to +0.20 LogMAR for 10% contrast). The resilience of low-contrast acuity across luminance changes opens up research on divisive display AR (ddAR) to effectively titrate salience under naturalistic HDR luminance.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"46 1","pages":"10501-1"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78481700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}