{"title":"Simple is Good: Observations of Visualization Use Amongst the Big Data Digerati","authors":"D. Russell","doi":"10.1145/2909132.2933287","DOIUrl":"https://doi.org/10.1145/2909132.2933287","url":null,"abstract":"While modern information visualization (IV) has been around for several decades, the inventions of IV seem to be peripheral to the everyday work in companies that would seem to be the most likely to use these inventions. In this case study, Google uses very few IV tools, relying mostly on more traditional ways of looking at data and data relationships. What has brought about this state of affairs? An analysis shows that the basic causes of low adoption are (a) difficulty of data wrangling and sharing the work products of analysis, (b) the need to share a common visual language literacy across different parts of the organization, (c) problems in using IV tools to communicate and present complex data analyses. At the same time, IV technology is found to be more useful in the investigation phase of research, rather than for communication and presentation reasons.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128740435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NetFork: Mapping Time to Space in Network Visualization","authors":"V. Donato, M. Patrignani, Claudio Squarcella","doi":"10.1145/2909132.2909245","DOIUrl":"https://doi.org/10.1145/2909132.2909245","url":null,"abstract":"Dynamic network visualization aims at representing the evolution of relational information in a readable, scalable, and effective way. A natural approach, called 'time-to-time mapping', consists of computing a representation of the network at each time step and animating the transition between subsequent time steps. However, recent literature recommends to represent time-related events by means of static graphic counterparts, realizing the so called 'time-to-space mapping'. This paradigm has been successfully applied to networks where nodes and edges are subject to a restricted set of events: appearances, disappearances, and attribute changes. In this paper we describe NetFork, a system that conveys the timings and the impact of path changes that occur in a routing network by suitable time-to-space metaphors, without relying on the time-to-time mapping adopted by the play-back interfaces of alternative network monitoring tools. A user study and a comparison with the state of the art show that users can leverage on high level static representations to quickly assess the quantity and quality of the path dynamics that took place in the network.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128774351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AmI@Home: A Game-Based Collaborative System for Smart Home Configuration","authors":"D. Fogli, R. Lanzilotti, A. Piccinno, Paolo Tosi","doi":"10.1145/2909132.2926083","DOIUrl":"https://doi.org/10.1145/2909132.2926083","url":null,"abstract":"This paper describes AmI@Home, a collaborative system prototype for smart home management and configuration. In particular, the system is based on event-condition-action rules. Rule construction and manipulation occur through gamification mechanisms supporting social interaction, collaboration and competition, in order to engage all family members in shaping their smart home.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126920955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bridging Physical Space and Digital Landscape to Drive Retail Innovation","authors":"Antonella Di Rienzo, Paolo Tagliaferri, Francesco Arenella, F. Garzotto, C. Frà, P. Cremonesi, M. Valla","doi":"10.1145/2909132.2926087","DOIUrl":"https://doi.org/10.1145/2909132.2926087","url":null,"abstract":"This paper describes a contemporary concept store, offering a technology-rich blend of entertainment and interactivity targeted to help customers in their shopping experiences while shortening the time they waste queuing.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121544761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Natural User Interfaces to Support and Enhance Real-Time Music Performance","authors":"R. Prisco, Delfina Malandrino, Gianluca Zaccagnino, R. Zaccagnino","doi":"10.1145/2909132.2909249","DOIUrl":"https://doi.org/10.1145/2909132.2909249","url":null,"abstract":"Today's technology is redefining the way individuals can work, communicate, share experiences, constructively debate, and actively participate to any aspect of the daily life, ranging from business to education, from political and intellectual to social, and so on. Enabling access to technology by any individual, reducing obstacles, avoiding discrimination, and making the overall experience easier and enjoyable is an important objective of both research and industry. Exploiting natural user interfaces, initially conceived for the game market, it is possible to enhance the traditional modalities of interaction when accessing to technology, build new forms of interactions by transporting users in a virtual dimension, but that fully reflects the reality, and finally, improve the overall perceived experience. The increasing popularity of these innovative interfaces involved their adoption in other fields, including Computer Music. This paper presents MarcoSmiles, a system designed to allow individuals to perform music in a easy, innovative, and personalized way. The idea is to design new interaction modalities during music performances by using hands without the support of a real musical instrument. We exploited Artificial Neural Networks to customize the virtual musical instrument, to provide the information for the mapping of the hands configurations into musical notes and, finally, to train and test these configurations. We studied the behavior of the system and its efficacy in terms of learning capabilities. We also report results about a preliminary evaluation study aimed at analyze general users' perceptions about the system and their overall satisfaction.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122840903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Study on Developing the Interface of Mobile E-learning Application for Children's Foreign Language Education","authors":"Dongwann Kang, K. Yoon","doi":"10.1145/2909132.2926074","DOIUrl":"https://doi.org/10.1145/2909132.2926074","url":null,"abstract":"In this paper, we present a smartphone based e-book application with interactive illustration authoring tool. The target readers of our e-book are children who study foreign language. Our application aims to educate preschool age children's foreign language by telling folktales with illustrations. To encourage effective learning, our application provides a tool that enables the user to create collage-based illustrations on the application by hand. In this function, the reader can make one's own illustration by employing colored paper collage style. We implement the application on Android-based smartphone.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134639592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flexible Trees: Sketching Tree Layouts","authors":"Javad Sadeghi, Charles Perin, Tamara Flemisch, Mark S. Hancock, Sheelagh Carpendale","doi":"10.1145/2909132.2909274","DOIUrl":"https://doi.org/10.1145/2909132.2909274","url":null,"abstract":"We introduce Flexible Trees, a sketch-based layout adjustment technique. Although numerous tree layout algorithms exist, these algorithms are usually bound to fit within standard shapes such as rectangles, circles and triangles. In order to provide the possibility of interactively customizing a tree layout, we offer a free-form sketch-based interaction through which one can re-define the boundary constraints for the tree layouts by combining ray-line intersection and line segment intersection. Flexible Trees offer topology preserving adjustments; can be used with a variety of tree layouts; and offer a simple way of authoring tree layouts for infographic purposes.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121072042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Novel Indirect Touch Input Techniques Applied to Finger-Forming 3D Models","authors":"H. Palleis, Julie Wagner, H. Hussmann","doi":"10.1145/2909132.2909257","DOIUrl":"https://doi.org/10.1145/2909132.2909257","url":null,"abstract":"We address novel two-handed interaction techniques in dual display interactive workspaces combining direct and indirect touch input. In particular, we introduce the notion of a horizontal tool space with task-dependent graphical input areas. These input areas are designed as single purpose control elements for specific functions and allow users to manipulate objects displayed on a vertical screen using simple one- and two-finger touch gestures and both hands. For demonstrating this concept, we use 3D modeling tasks as a specific application area. Initial feedback of six expert users indicates that our techniques are easy to use and stimulate exploration rather than precise modeling. Further, we gathered qualitative feedback during a multi-session observational study with five novices who learned to use our tool and were interviewed several times. Preliminary results indicate that working with our setup is easy to learn and remember. Participants liked the partitioning character of the dual-surface setup and agreed on the benefiting quality of touch input, giving them a 'hands-on feeling'.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128020108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Target Expansion Lens: It is Not the More Visual Feedback the Better!","authors":"Maxime Guillon, F. Leitner, L. Nigay","doi":"10.1145/2909132.2909265","DOIUrl":"https://doi.org/10.1145/2909132.2909265","url":null,"abstract":"To enhance pointing tasks, target expansion techniques allocate larger activation areas to targets. We distinguish two basic elements of a target expansion technique: the expansion algorithm and the visual aid on the effective expanded targets. We present a systematic analysis of the relevance of the visual aid provided by (1) existing target expansion techniques and (2) Expansion Lens. The latter is a new continuous technique for acquiring targets. Expansion Lens namely, uses a round area centered on the cursor: the lens. The users can see in the lens the target expanded area boundaries that the lens is hovering over. Expansion Lens serves as a magic lens revealing the underlying expansion algorithm. The design rationale of Expansion Lens is based on a systematic analysis of the relevance of the visual aid according to the three goal-oriented phases of a pointing task namely the starting, transfer and validation phases. Expansion Lens optimizes (1) the transfer phase by providing a simple-shaped visual aid centered on the cursor, and (2) the validation phase regarding error rates, by displaying the target expanded area boundaries. The results of our controlled experiment comparing Expansion Lens with four existing target expansion techniques show that Expansion Lens highlights a good trade-off for performance by being the less-error prone technique and the second fastest technique. The experimental data for each phase of the pointing task also confirm our design approach based on the relevance of the visual aid according to the phase of the pointing task.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126965610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting Singers with Tangible and Visual Feedback","authors":"Assunta Matassa, Fabio Morreale","doi":"10.1145/2909132.2926081","DOIUrl":"https://doi.org/10.1145/2909132.2926081","url":null,"abstract":"Most of musicians can control their performance by relying on different sensorial modalities that complement the auditory cue. Vision, in particular, offers most instrumentalists an essential support: it helps them developing techniques, identifying errors, correcting expressiveness, and memorise complex passages. By contrast, when performing a piece, singers can almost exclusively rely on the auditory feedback coming from their voice to adjust their singing. This paper frames this issue and proposes possible alternatives to improve singers' awareness by adding visual and tangible feedback to their performance.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116043433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}