{"title":"Envisioning conversation: toward understanding and augmenting common ground","authors":"T. Nishida","doi":"10.1145/3206505.3206607","DOIUrl":"https://doi.org/10.1145/3206505.3206607","url":null,"abstract":"Our intellectual life draws on daily conversations that allow us to communicate thoughts, ideas, emotions, etc. To conduct smooth and reliable interactions, participants need to share a solid basis of common ground prior to conversation, which consists of knowledge, beliefs, and suppositions regarding the topics to discuss. Importance of common ground applies to artificial agents as well. A capability of jointly building and maintaining the common ground with people on the fly is indispensable to establish a productive relationship. Understanding and augmenting common ground is challenging, as it is both tacit and dynamic in the sense that the common ground for a situation contains plenty of tacit dimensions and it is dynamically updated as the interaction proceeds.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127186230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Map-based visualization of 2D/3D spatial data via stylization and tuning of information emphasis","authors":"L. Ardissono, Matteo Delsanto, M. Lucenteforte, Noemi Mauro, Adriano Savoca, Daniele Scanu","doi":"10.1145/3206505.3206516","DOIUrl":"https://doi.org/10.1145/3206505.3206516","url":null,"abstract":"In Geographical Information search, map visualization can challenge the user because results can consist of a large set of heterogeneous items, increasing visual complexity. We propose a novel visualization model to address this issue. Our model represents results as markers, or as geometric objects, on 2D/3D layers, using stylized and highly colored shapes to enhance their visibility. Moreover, the model supports interactive information filtering in the map by enabling the user to focus on different data categories, using transparency sliders to tune the opacity, and thus the emphasis, of the corresponding data items. A test with users provided positive results concerning the efficacy of the model.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131076038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Equal-height treemaps for multivariate data","authors":"K. Wittenburg, Teng-Yok Lee","doi":"10.1145/3206505.3206591","DOIUrl":"https://doi.org/10.1145/3206505.3206591","url":null,"abstract":"A well-known limitation of classic continuous treemaps is that they generally provide two (or at most a few) visual mappings for data variables apart from the hierarchical relationships. Typically, one variable maps to cell area; another maps to color. However, many data-centric tasks require human users to consider multiple variables simultaneously. The current work introduces the concept of equal-height, variable-width cells in treemaps, which affords the packing of multiple variables into the cell areas of the terminals of the hierarchy. We demonstrate how color and some largely width-invariant graphs can be utilized in the cell areas to add additional visual information in a multi-variate treemap. Examples come from machine learning and from finance applications.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128869075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards intelligible graph data visualization using circular layout","authors":"V. Guchev, P. Buono, Cristina Gena","doi":"10.1145/3206505.3206592","DOIUrl":"https://doi.org/10.1145/3206505.3206592","url":null,"abstract":"Polar coordinates have been widely used in various techniques of interactive data visualization. The spatial organization through circular and radial layouts is implemented in a wide range of statistical charts and plots and is applicable for space-filling techniques and for node-link-group diagrams. Different arrangements of dots, lines and areas in polar coordinates create grids for data distribution, aggregation and linking. This work is devoted to the study of visual notations of data and their relationships and proposes an outline of their application in designing node-link-group diagrams, in order to arrange the geometric solutions at functional and logical levels of the visual representation.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128912803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facilitating exploration on exhibitions with augmented reality","authors":"Natalie Hube, Mathias Müller, Rainer Groh","doi":"10.1145/3206505.3206585","DOIUrl":"https://doi.org/10.1145/3206505.3206585","url":null,"abstract":"At exhibitions, visitors are usually in a completely unknown environment. Although visitors generally are informed about the topic before a visit, interests are still difficult to extract from the mass of exhibition stands and offers. In this paper we describe a concept using head-coupled AR together with recommender mechanisms for exhibitions. We present a conceptual development for a first prototype with focus on navigational aspects as well as explicit and implicit recommendations to generate input data for visually displayed recommendations.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"os-58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127719204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing prosthetic memory: audio or transcript, that is the question","authors":"Sandra Trullemans, Payam Ebrahimi, B. Signer","doi":"10.1145/3206505.3206545","DOIUrl":"https://doi.org/10.1145/3206505.3206545","url":null,"abstract":"Audio recordings and the corresponding transcripts are often used as prosthetic memory (PM) after meetings and lectures. While current research is mainly developing novel features for prosthetic memory, less is known on how and why audio recordings and transcripts are used. We investigate how users interact with audio and transcripts as prosthetic memory, whether interaction strategies change over time, and analyse potential differences in accuracy and efficiency. In contrast to the subjective user perception, our results show that audio recordings and transcripts are equally efficient, but that transcripts are generally preferred due to their easily accessible contextual information. We further identified that prosthetic memory is not only used as a recall aid but frequently also consulted for verifying information that has been recalled from organic memory (OM). Our findings are summarised in a number of design implications for prosthetic memory solutions.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"216 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132332047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A wearable immersive web-virtual reality approach to remote neurodevelopmental disorder therapy","authors":"Mariano Etchart, Alessandro Caprarelli","doi":"10.1145/3206505.3206595","DOIUrl":"https://doi.org/10.1145/3206505.3206595","url":null,"abstract":"Our research exploits the learning potential of Wearable Immersive Virtual Reality (WIVR) applied to children with neurodevelopmental disorders (NDD), particularly autism spectrum disorder. We introduce Be Trendy, a novel WIVR application that utilizes the benefits of immersive virtual reality to improve and challenge the cognitive capabilities of children such as learning, attention-span, memory and social skills. Two significant features of this application, modularity and remoteness, are highlighted and we assess its value and how it may be embedded with current NDD interventions. In this paper we evaluate the current state of art, present our solution and finally we offer some suggestions for how this project can be taken further.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129519767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating body scanning solutions into virtual dressing rooms","authors":"Francesco Sapio, Andrea Marrella, T. Catarci","doi":"10.1145/3206505.3206589","DOIUrl":"https://doi.org/10.1145/3206505.3206589","url":null,"abstract":"The world is entering its 4th Industrial Revolution, a new era of manufacturing characterized by ubiquitous digitization and computing. One industry to benefit and grow from this revolution is the fashion industry, in which Europe (and Italy in particular) has long maintained a global lead. To evolve with the changes in technology, we developed the IT-SHIRT project. In the context of this project, a key challenge relies on developing a virtual dressing room in which the final users (customers) can virtually try different clothes on their bodies. In this paper, we tackle the aforementioned issue by providing a critical analysis of the existing body scanning solutions, identifying their strengths and weaknesses towards their integration within the pipeline of virtual dressing rooms.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131164462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile devices for interaction in immersive virtual environments","authors":"Paulo Dias, Luis Afonso, Sérgio Eliseu, B. Santos","doi":"10.1145/3206505.3206526","DOIUrl":"https://doi.org/10.1145/3206505.3206526","url":null,"abstract":"Gamepads and 3D controllers are the main controllers used in most Virtual Environments. Despite being simple to use, these input devices have a number of limitations as fixed layout and difficulty to remember the mapping between buttons and functions. Mobile devices present interesting characteristics that might be valuable in immersive environments: more flexible interfaces, touchscreen combined with onboard sensors that allow new interaction and easy acceptance since these devices are used daily by most users. The work described in this article proposes a solution that uses mobile devices to interact with Immersive Virtual Environments for selection and navigation tasks. The proposed solution uses the mobile device camera to track the Head-Mounted-Display position and present a virtual representation of the mobile device screen; it was tested using an Immersive Virtual Museum as use case. Based on this prototype, a study was performed to compare controller based and mobile based interaction for navigation and selection showing that using mobile devices is viable in this context and offers interesting interaction opportunities.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114014098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video coursework: opportunity and challenge for HCI education","authors":"A. Vasilchenko, Adriana Wilde, Stephen Snow, Madeline Balaam, Marie Devlin","doi":"10.1145/3206505.3206596","DOIUrl":"https://doi.org/10.1145/3206505.3206596","url":null,"abstract":"Human-Computer Interaction (HCI) is a challenging subject to study due to its highly multidisciplinary nature and the fast change of advancing technology. Keeping pace with these changes requires innovation in pedagogical approach, such as student-authored video, which is presented here. In case studies from two UK universities, students were assessed on video making. The results suggest increased student engagement and satisfaction, as well as acquisition of design skills taught in HCI, not typically taught elsewhere in computer science. Here we share our experiences of using this practice along with key challenges and some preliminary findings from analysis of the student artefact-creation process. We also outline future research directions in this space.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124828915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}