{"title":"Stealth Updates of Visual Information by Leveraging Change Blindness and Computational Visual Morphing","authors":"Shunichi Kasahara, Kazuma Takada","doi":"10.1145/3486581","DOIUrl":"https://doi.org/10.1145/3486581","url":null,"abstract":"We present an approach for covert visual updates by leveraging change blindness with computationally generated morphed images. To clarify the design parameters for intentionally suppressing change detection with morphing visuals, we investigated the visual change detection in three temporal behaviors: visual blank, eye-blink, and step-sequential changes. The results showed a robust trend of change blindness with a blank of more than 33.3 ms and with eye blink. Our sequential change study revealed that participants did not recognize changes until an average of 57% morphing toward another face in small change steps. In addition, changes went unnoticed until the end of morphing in more than 10% of all trials. Our findings should contribute to the design of covert visual updates without consuming users’ attention by leveraging change blindness with computational visual morphing.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"14 1","pages":"23:1-23:17"},"PeriodicalIF":1.6,"publicationDate":"2021-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83512870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning GAN-Based Foveated Reconstruction to Recover Perceptually Important Image Features","authors":"L. Surace, Marek Wernikowski, C. Tursun, K. Myszkowski, R. Mantiuk, P. Didyk","doi":"10.1145/3583072","DOIUrl":"https://doi.org/10.1145/3583072","url":null,"abstract":"A foveated image can be entirely reconstructed from a sparse set of samples distributed according to the retinal sensitivity of the human visual system, which rapidly decreases with increasing eccentricity. The use of generative adversarial networks (GANs) has recently been shown to be a promising solution for such a task, as they can successfully hallucinate missing image information. As in the case of other supervised learning approaches, the definition of the loss function and the training strategy heavily influence the quality of the output. In this work,we consider the problem of efficiently guiding the training of foveated reconstruction techniques such that they are more aware of the capabilities and limitations of the human visual system, and thus can reconstruct visually important image features. Our primary goal is to make the training procedure less sensitive to distortions that humans cannot detect and focus on penalizing perceptually important artifacts. Given the nature of GAN-based solutions, we focus on the sensitivity of human vision to hallucination in case of input samples with different densities. We propose psychophysical experiments, a dataset, and a procedure for training foveated image reconstruction. The proposed strategy renders the generator network flexible by penalizing only perceptually important deviations in the output. As a result, the method emphasized the recovery of perceptually important image features. We evaluated our strategy and compared it with alternative solutions by using a newly trained objective metric, a recent foveated video quality metric, and user experiments. Our evaluations revealed significant improvements in the perceived image reconstruction quality compared with the standard GAN-based training approach.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"20 1","pages":"1 - 23"},"PeriodicalIF":1.6,"publicationDate":"2021-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47163571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MovEcho: A Gesture-Sound Interface Allowing Blind Manipulations in a Driving Context","authors":"Benjamin Bressolette, Sébastien Denjean, Vincent Roussarie, M. Aramaki, S. Ystad, R. Kronland-Martinet","doi":"10.1145/3464692","DOIUrl":"https://doi.org/10.1145/3464692","url":null,"abstract":"Most recent vehicles are equipped with touchscreens, which replace arrays of buttons that control secondary driving functions, such as temperature level, strength of ventilation, GPS, or choice of radio stations. While driving, manipulating such interfaces can be problematic in terms of safety, because they require the drivers’ sight. In this article, we develop an innovative interface, MovEcho, which is piloted with gestures and associated with sounds that are used as informational feedback. We compare this interface to a touchscreen in a perceptual experiment that took place in a driving simulator. The results show that MovEcho allows for a better visual task completion related to traffic and is preferred by the participants. These promising results in a simulator condition have to be confirmed in future studies, in a real vehicle with a comparable expertise for both interfaces.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"9 1","pages":"15:1-15:19"},"PeriodicalIF":1.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74609605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Translational and Rotational Arrow Cues (TRAC) Navigation Method for Manual Alignment Tasks.","authors":"David E Usevitch, Adam J Sperry, Jake J Abbott","doi":"10.1145/3375001","DOIUrl":"https://doi.org/10.1145/3375001","url":null,"abstract":"<p><p>Many tasks in image-guided surgery require a clinician to manually position an instrument in space, with respect to a patient, with five or six degrees of freedom (DOF). Displaying the current and desired pose of the object on a 2D display such as a computer monitor is straightforward. However, providing guidance to accurately and rapidly navigate the object in 5-DOF or 6-DOF is challenging. Guidance is typically accomplished by showing distinct orthogonal viewpoints of the workspace, requiring simultaneous alignment in all views. Although such methods are commonly used, they can be quite unintuitive, and it can take a long time to perform an accurate 5-DOF or 6-DOF alignment task. In this article, we describe a method of visually communicating navigation instructions using translational and rotational arrow cues (TRAC) defined in an object-centric frame, while displaying a single principal view that approximates the human's egocentric view of the physical object. The target pose of the object is provided but typically is used only for the initial gross alignment. During the accurate-alignment stage, the user follows the unambiguous arrow commands. In a series of human-subject studies, we show that the TRAC method outperforms two common orthogonal-view methods-the triplanar display, and a sight-alignment method that closely approximates the Acrobot Navigation System-in terms of time to complete 5-DOF and 6-DOF navigation tasks. We also find that subjects can achieve 1 mm and 1° accuracy using the TRAC method with a median completion time of less than 20 seconds.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"17 1","pages":"1-19"},"PeriodicalIF":1.6,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3375001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39014571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual Attributes Analysis of Real-world Materials","authors":"J. Filip, M. Kolafová","doi":"10.1145/3301412","DOIUrl":"https://doi.org/10.1145/3301412","url":null,"abstract":"Material appearance is often represented by a bidirectional reflectance distribution function (BRDF). Although the concept of the BRDF is widely used in computer graphics and related applications, the number of actual captured BRDFs is limited due to a time and resources demanding measurement process. Several BRDF databases have already been provided publicly, yet subjective properties of underlying captured material samples, apart from single photographs, remain unavailable for users. In this article, we analyzed material samples, used in the creation of the UTIA BRDF database, in a psychophysical study with nine subjects and assessed its 12 visual, tactile, and subjective attributes. Further, we evaluated the relationship between the attributes and six material categories. We consider the presented perceptual analysis as valuable and complementary information to the database; that could aid users to select appropriate materials for their applications.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"4 1","pages":"1:1-1:19"},"PeriodicalIF":1.6,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89306950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Keppi: A Tangible User Interface for Self-Reporting Pain.","authors":"Alexander T Adams, Phil Adams, Elizabeth L Murnane, Mike Elfenbein, Shruti Sannon, Geri Gay, Tanzeem Choudhury, Pamara F Chang","doi":"10.1145/3173574.3174076","DOIUrl":"10.1145/3173574.3174076","url":null,"abstract":"<p><p>Motivated by the need to support those self-managing chronic pain, we report on the development and evaluation of a novel pressure-based tangible user interface (TUI) for the self-report of scalar values representing pain intensity. Our TUI consists of a conductive foam-based, force-sensitive resistor (FSR) covered in a soft rubber with embedded signal conditioning, an ARM Cortex-M0 microprocessor, and Bluetooth Low Energy (BLE). In-lab usability and feasibility studies with 28 participants found that individuals were able to use the device to make reliable reports with four degrees of freedom as well map squeeze pressure to pain level and visual feedback. Building on insights from these studies, we further redesigned the FSR into a wearable device with multiple form factors, including a necklace, bracelet, and keychain. A usability study with an additional 7 participants from our target population, elderly individuals with chronic pain, found high receptivity to the wearable design, which offered a number of participant-valued characteristics (e.g., discreetness) along with other design implications that serve to inform the continued refinement of tangible devices that support pain self-assessment.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"2018 ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2018-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3173574.3174076","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36774242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Top-Down Influences in the Detection of Spatial Displacement in a Musical Scene","authors":"G. Marentakis, Cathryn Griffiths, S. McAdams","doi":"10.1145/2911985","DOIUrl":"https://doi.org/10.1145/2911985","url":null,"abstract":"We investigated the detection of sound displacement in a four-voice musical piece under conditions that manipulated the attentional setting (selective or divided attention), the sound source numerosity, the spatial dispersion of the voices, and the tonal complexity of the piece. Detection was easiest when each voice was played in isolation and performance deteriorated when source numerosity increased and uncertainty with respect to the voice in which displacement would occur was introduced. Restricting the area occupied by the voices improved performance in agreement with the auditory spotlight hypothesis as did reducing the tonal complexity of the piece. Performance under increased numerosity conditions depended on the voice in which displacement occurred. The results highlight the importance of top-down processes in the context of the detection of spatial displacement in a musical scene.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"43 1","pages":"3:1-3:19"},"PeriodicalIF":1.6,"publicationDate":"2016-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87591661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Proposed Methodology for Evaluating HDR False Color Maps","authors":"A. Akyüz, Osman Kaya","doi":"10.1145/2911986","DOIUrl":"https://doi.org/10.1145/2911986","url":null,"abstract":"Color mapping, which involves assigning colors to the individual elements of an underlying data distribution, is a commonly used method for data visualization. Although color maps are used in many disciplines and for a variety of tasks, in this study we focus on its usage for visualizing luminance maps. Specifically, we ask ourselves the question of how to best visualize a luminance distribution encoded in a high-dynamic-range (HDR) image using false colors such that the resulting visualization is the most descriptive. To this end, we first propose a definition for descriptiveness. We then propose a methodology to evaluate it subjectively. Then, we propose an objective metric that correlates well with the subjective evaluation results. Using this metric, we evaluate several false coloring strategies using a large number of HDR images. Finally, we conduct a second psychophysical experiment using images representing a diverse set of scenes. Our results indicate that the luminance compression method has a significant effect and the commonly used logarithmic compression is inferior to histogram equalization. Furthermore, we find that the default color scale of the Radiance global illumination software consistently performs well when combined with histogram equalization. On the other hand, the commonly used rainbow color scale was found to be inferior. We believe that the proposed methodology is suitable for evaluating future color mapping strategies as well.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"33 1","pages":"2:1-2:18"},"PeriodicalIF":1.6,"publicationDate":"2016-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80271568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Visual Latency on Vehicle Driving Behavior","authors":"Björn Blissing, F. Bruzelius, Olle Eriksson","doi":"10.1145/2971320","DOIUrl":"https://doi.org/10.1145/2971320","url":null,"abstract":"Using mixed reality in vehicles provides a potential alternative to using driving simulators when studying driver-vehicle interaction. However, virtual reality systems introduce latency in the visual system that may alter driving behavior, which, in turn, results in questionable validity. Previous studies have mainly focused on visual latency as a separate phenomenon. In this work, latency is studied from a task-dependent viewpoint to investigate how participants’ driving behavior changed with increased latency. In this study, the investigation was performed through experiments in which regular drivers were subjected to different levels of visual latency while performing a simple slalom driving task. The drivers’ performances were recorded and evaluated in both lateral and longitudinal directions along with self-assessment questionnaires regarding task performance and difficulty. All participants managed to complete the driving tasks successfully, even under high latency conditions, but were clearly affected by the increased visual latency. The results suggest that drivers compensate for longer latencies by steering more and increasing the safety margins but without reducing their speed.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"6 1","pages":"5:1-5:12"},"PeriodicalIF":1.6,"publicationDate":"2016-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79580902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}