{"title":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","authors":"","doi":"10.1145/3206505","DOIUrl":"https://doi.org/10.1145/3206505","url":null,"abstract":"","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121924877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving revisitation in long documents with two-level artificial-landmark scrollbars","authors":"Ehsan Sotoodeh Mollashahi, Md. Sami Uddin, C. Gutwin","doi":"10.1145/3206505.3206554","DOIUrl":"https://doi.org/10.1145/3206505.3206554","url":null,"abstract":"Document readers with linear navigation controls do not work well when users need to navigate to previously-visited locations, particularly when documents are long. Existing solutions - bookmarks, search, history, and read wear - are valuable but limited in terms of effort, clutter, and interpretability. In this paper, we investigate artificial landmarks as a way to improve support for revisitation in long documents - inspired by visual augmentations seen in physical books such as coloring on page edges or indents cut into pages. We developed several artificial-landmark visualizations that can represent locations even in documents that are many hundreds of pages long, and tested them in studies where participants visited multiple locations in long documents. Results show that providing two columns of landmark icons led to significantly better performance and user preference. Artificial landmarks provide a new mechanism to build spatial memory of long documents - and can be used either alone or with existing techniques like bookmarks, read wear, and search.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130446832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behind-the-mask: a face-through head-mounted display","authors":"J. Rekimoto, Keishiro Uragaki, K. Yamada","doi":"10.1145/3206505.3206544","DOIUrl":"https://doi.org/10.1145/3206505.3206544","url":null,"abstract":"A head-mounted display (HMD), which is common in virtual reality (VR) systems, normally hides the user's face. This prevents face-to-face communication when two or more users share the same virtual space, and prevents showing a participant's face on a surrogate robot's face when the user connects to the robot remotely through an HMD for tele-immersion. Considering that face-to-face communication is one of the fundamental requirements of real-time communication, and is widely realized by many non-VR telecommunication systems, an HMD's face-hiding property is a serious problem that limits the possibilities of VR. To address this issue, we propose the notion of \"Face-through HMD\" and present a face-capturing HMD configuration called \"Behind-the-Mask\", with infrared (IR) cut filters and side cameras, that can be attached to existing HMDs. Because an IR cut filter reflects infrared light while transmitting visible light, it is transparent to the user's eyes but reflects the user's face under infrared illumination. By merging a pre-scanned 3D face model of the user with the face images obtained from our HMD, a 3D face model of the user with eye and mouth movement can be reconstructed. We consider that our proposed HMD can be used in many VR applications.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"81 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130997441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ITA.WA.: 1st Italian Visualization & Visual Analytics Workshop","authors":"M. Angelini, G. Santucci","doi":"10.1145/3206505.3206601","DOIUrl":"https://doi.org/10.1145/3206505.3206601","url":null,"abstract":"Data-driven approaches to problem solving and data analysis are becoming increasingly important, both as problems to consider and as sources of research ideas. In this respect, the ability to explore data, understand how algorithmic approaches work, and steer them toward desired goals makes Visualization and Visual Analytics strong research fields in which to invest effort. While several countries (e.g., the USA, Germany, and France) have recognized this importance, in Italy the research efforts in these fields are still disjointed. With the first edition of the ITA.WA. (Italian Visualization & Visual Analytics) workshop, our goal is to take a step toward the creation of an Italian research community on these topics, enabling the identification of research directions, the joining of forces in pursuing them, and the development of common guidelines and programs for teaching activities.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134249556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual insight of spatiotemporal IoT-generated contents","authors":"Jun Lee, Kyoung-Sook Kim, Ryong Lee, Sanghwan Lee","doi":"10.1145/3206505.3206575","DOIUrl":"https://doi.org/10.1145/3206505.3206575","url":null,"abstract":"The rapid evolution of the Internet of Things (IoT) and Big Data technology has been generating a large amount and variety of sensing contents, including numeric measurements (e.g., timestamps, geolocations, and sensor logs) and multimedia (e.g., images, audio, and video). To better analyze and understand these heterogeneous types of IoT-generated contents, data visualization is an essential component of exploratory data analysis, facilitating information perception and knowledge extraction. This study introduces a holistic approach to storing, processing, and visualizing IoT-generated contents that supports context-aware spatiotemporal insight by combining deep learning techniques with a geographical map interface. Visualization is provided through an interactive web-based user interface that supports efficient visual exploration over both time and geolocation via a simple spatiotemporal query interface.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116886092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VisBIA 2018: workshop on Visual Interfaces for Big Data Environments in Industrial Applications","authors":"D. Kammer, Mandy Keck, A. Both, Giulio Jacucci, Rainer Groh","doi":"10.1145/3206505.3206603","DOIUrl":"https://doi.org/10.1145/3206505.3206603","url":null,"abstract":"Industrial applications can benefit considerably from the overwhelming amount of still-growing resources such as websites, images, texts, and videos that the internet offers today. The resulting Big Data problem does not only consist of handling this immense volume of data; data also needs to be processed, cleaned, and presented in a user-friendly, intuitive, and interactive way. This workshop addresses visualization and user interaction challenges posed by the four V's: Volume (huge data amounts in the range of tera- and petabytes), Velocity (the speed at which data is created, processed, and analyzed), Variety (the different heterogeneous data types, sources, and formats), and Veracity (authenticity and validity of data). Big Data driven interfaces combine suitable backend and frontend technologies as well as automatic and semi-automatic approaches in order to analyze data in various business contexts. An important aspect is human intervention in developing and training data-driven applications (human in the loop). Our focus is on Visual Big Data Interfaces in industrial contexts such as e-commerce, e-learning, and business intelligence. We address interfaces for three important user groups: data scientists, data workers, and end users.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"495 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117121159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AVI-CH 2018: Advanced Visual Interfaces for Cultural Heritage","authors":"B. D. Carolis, Cristina Gena, T. Kuflik, A. Origlia, G. Raptis","doi":"10.1145/3206505.3206597","DOIUrl":"https://doi.org/10.1145/3206505.3206597","url":null,"abstract":"Cultural Heritage (CH) is a challenging domain of application for novel Information and Communication Technologies (ICT), where visualization plays a major role in enhancing visitors' experience, either onsite or online. Technology-supported natural human-computer interaction is a key factor in enabling access to CH assets. Advances in ICT make it easier for visitors to access collections online and to better experience CH onsite. The range of visualization devices - from tiny smart watch screens and wall-size situated public displays to the latest generation of immersive head-mounted displays - together with the increasing availability of real-time 3D rendering technologies for online and mobile devices and, recently, Internet of Things (IoT) approaches, requires exploring how they can be applied successfully in CH. Following the successful workshop at AVI 2016 and the large number of recent events and projects focusing on CH, and considering that 2018 has been declared the European Year of Cultural Heritage, the goal of the workshop is to bring together researchers and practitioners interested in presenting and discussing the potential use of state-of-the-art advanced visual interfaces in enhancing our daily CH experience.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125832603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A real-time image processing framework with an aerial overhead camera for sports","authors":"Kyosuke Tanaka, Naoya Tochihara, Toshiki Sato, H. Koike","doi":"10.1145/3206505.3206520","DOIUrl":"https://doi.org/10.1145/3206505.3206520","url":null,"abstract":"Recently, large horizontal interactive surfaces have begun to be developed. In these systems, an overhead camera is often used to detect the positions of objects on the surface even when they are not in contact with it. However, such systems are expensive, and in some situations a camera cannot easily be mounted above the surface. This paper proposes a framework that uses the camera on a drone (UAV) as an overhead camera unit to convert arbitrary horizontal rectangular regions into interactive surfaces. Although commercially available camera-equipped drones have high latencies and are difficult to use in real-time interactive systems, we solved this latency issue with a small PC that performs primitive image processing tasks onboard. First, we describe a drone unit with an infrared camera and a small PC for real-time image processing, such as surface detection and object detection. Second, we describe novel infrared markers for robust detection of the four corners of a rectangular region and of the objects within that region. Finally, we describe an interactive sports coaching application in which a drone unit serves as an overhead camera both for a large playing field and for a small tabletop.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127359319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transparency-based information filtering on 2D/3D geographical maps","authors":"L. Ardissono, Matteo Delsanto, M. Lucenteforte, Noemi Mauro, Adriano Savoca, Daniele Scanu","doi":"10.1145/3206505.3206566","DOIUrl":"https://doi.org/10.1145/3206505.3206566","url":null,"abstract":"The presentation of search results in GIS can expose the user to cluttered geographical maps, making it difficult to identify relevant information. In order to address this issue, we propose a visualization model supporting interactive information filtering on 2D/3D maps. Our model introduces transparency sliders that let the user tune the opacity, and thus the emphasis, of data categories in the map. In this way, users can focus the map on the types of information most relevant to the task at hand. A test with users provided positive results concerning the efficacy of our model.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126383855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A virtual reality map interface for geographical information systems","authors":"Andrés Santos, Telmo Zarraonandia, P. Díaz, I. Aedo","doi":"10.1145/3206505.3206580","DOIUrl":"https://doi.org/10.1145/3206505.3206580","url":null,"abstract":"Virtual Reality (VR) technology offers new possibilities for implementing Geographical Information Systems (GIS), allowing users to visualize and interact with map interfaces in a more natural and immersive way. Moreover, VR can provide the user with a wide field of view similar to the one obtained when using large displays. In this paper, we present a VR application that allows the user to display and interact with a GIS map using different types of interaction modalities.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116585695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}