{"title":"The SeismoCloud App: Your Smartphone as a Seismometer","authors":"Emanuele Panizzi","doi":"10.1145/2909132.2926070","DOIUrl":"https://doi.org/10.1145/2909132.2926070","url":null,"abstract":"We designed and developed an app, for iOS and Android, which uses internal device accelerometer to detect earthquakes that may occur while the smartphone is stable on a at surface and to deliver a crowdsourced early warning to users in the region where the earthquake might be dangerous. We describe our interface for the Android operating system and we compare our system to the other main research work.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126571064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Animations 25 Years Later: New Roles and Opportunities","authors":"Fanny Chevalier, N. Riche, C. Plaisant, Amira Chalbi, C. Hurter","doi":"10.1145/2909132.2909255","DOIUrl":"https://doi.org/10.1145/2909132.2909255","url":null,"abstract":"Animations are commonplace in today's user interfaces. From bouncing icons that catch attention, to transitions helping with orientation, to tutorials, animations can serve numerous purposes. We revisit Baecker and Small's pioneering work Animation at the Interface, 25 years later. We reviewed academic publications and commercial systems, and interviewed 20 professionals of various backgrounds. Our insights led to an expanded set of roles played by animation in interfaces today for keeping in context, teaching, improving user experience, data encoding and visual discourse. We illustrate each role with examples from practice and research, discuss evaluation methods and point to opportunities for future research. This expanded description of roles aims at inspiring the HCI research community to find novel uses of animation, guide them towards evaluation and spark further research.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124845438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing Topic Representations for Gist-Forming","authors":"E. Alexander, Michael Gleicher","doi":"10.1145/2909132.2909252","DOIUrl":"https://doi.org/10.1145/2909132.2909252","url":null,"abstract":"As topic modeling has grown in popularity, tools for visualizing the process have become increasingly common. Though these tools support a variety of different tasks, they generally have a view or module that conveys the contents of an individual topic. These views support the important task of gist-forming: helping the user build a cohesive overall sense of the topic's semantic content that can be generalized outside the specific subset of words that are shown. There are a number of factors that affect these views, including the visual encoding used, the number of topic words included, and the quality of the topics themselves. To our knowledge, there has been no formal evaluation comparing the ways in which these factors might change users' interpretations. In a series of crowdsourced experiments, we sought to compare features of visual topic representations in their suitability for gist-forming. We found that gist-forming ability is remarkably resistant to changes in visual representation, though it deteriorates with topics of lower quality.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120952534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"YouTouch! Low-Cost User Identification at an Interactive Display Wall","authors":"Ulrich von Zadow, Patrick Reipschläger, Daniel Bösel, A. Sellent, Raimund Dachselt","doi":"10.1145/2909132.2909258","DOIUrl":"https://doi.org/10.1145/2909132.2909258","url":null,"abstract":"We present YouTouch!, a system that tracks users in front of an interactive display wall and associates touches with users. With their large size, display walls are inherently suitable for multi-user interaction. However, current touch recognition technology does not distinguish between users, making it hard to provide personalized user interfaces or access to private data. In our system we place a commodity RGB + depth camera in front of the wall, allowing us to track users and correlate them with touch events. While the camera's driver is able to track people, it loses the user's ID whenever she is occluded or leaves the scene. In these cases, we re-identify the person by means of a descriptor comprised of color histograms of body parts and skeleton-based biometric measurements. Additional processing reliably handles short-term occlusion as well as assignment of touches to occluded users. YouTouch! requires no user instrumentation nor custom hardware, and there is no registration nor learning phase. Our system was thoroughly tested with data sets comprising 81 people, demonstrating its ability to re-identify users and correlate them to touches even under adverse conditions.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130300137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dolphin Sam: A Smart Pet for Children with Intellectual Disability","authors":"S. Colombo, F. Garzotto, M. Gelsomini, Mattia Melli, Francesco Clasadonte","doi":"10.1145/2909132.2926090","DOIUrl":"https://doi.org/10.1145/2909132.2926090","url":null,"abstract":"Our research aims at helping children with intellectual disability (ID) to \"learn through play\" by interacting with digitally enriched physical toys. Inspired by the practice of Dolphin Therapy (a special form of Pet Therapy) and, specifically, by the activities that ID children perform at Dolphinariums, we have developed a \"smart\" stuffed dolphin called SAM that engages children in a variety of play tasks. SAM emits different stimuli (sound, vibration, and light) with its body in response to children's manipulation. Its behavior is integrated with lights and multimedia animations or video displayed in the ambient and can be customized by therapists to address the specific needs of each child.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"1241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130734651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting Visual Notations for Data Exploration: The EFESTO Platform","authors":"C. Ardito, Giuseppe Desolda, M. Matera, M. Costabile","doi":"10.1145/2909132.2926082","DOIUrl":"https://doi.org/10.1145/2909132.2926082","url":null,"abstract":"The EFESTO platform allows the creation of interactive workspaces supporting end users in the exploration and seamless composition of heterogeneous data sources. By means of a visual paradigm implemented within a Web composition environment, the end users dynamically create \"live\" mashups where relevant information, extracted from different types of data sources including the Linked Open Data, and functions that can be performed on it can be flexibly shaped-up at runtime.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133852288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Annotating Live Video with Tablet Computers: A Preliminary User Study","authors":"Diogo Cabral, João M. F. Silva, Carla Fernandes, N. Correia","doi":"10.1145/2909132.2926069","DOIUrl":"https://doi.org/10.1145/2909132.2926069","url":null,"abstract":"Tablet computers provide a natural support for digital annotation associated to any type of media, including live video. However, annotations of live video require real-time motion tracking, maintaining the association between annotations and moving objects, as well as interfaces that facilitate the annotation task of a live event. This work presents the initial users' feedback on a video annotator that includes two real-time trackers, Kinect and TLD, and on two annotation methods that aim to help annotating live video, \"Hold and Overlay\" and \"Hold and Speed Up\".","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130165137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Treemaps and the Visual Comparison of Hierarchical Multi-Attribute Data","authors":"K. Wittenburg, Tommaso Turchi","doi":"10.1145/2909132.2909286","DOIUrl":"https://doi.org/10.1145/2909132.2909286","url":null,"abstract":"Treemaps have the desirable property of presenting overviews along with details of data and thus are of interest in visualizations of multi-attribute tabular data with attribute hierarchies. However, the original treemap algorithms and most subsequent variations are hampered in making parallel structures in a hierarchical data structure visually comparable. Structurally parallel elements are not aligned, making it difficult to compare them visually. We propose a method that allows for proportional and non-proportional subdivisions of subtrees while preserving visual alignment of parallel structures. We extend the framework so that other types of data visualizations can be placed within the graphical areas of a treemap to allow for the visual comparison of a broad collection of data types including temporal data.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115051957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Synchronization of Music Notation in Ancient Choir-Books","authors":"A. Celentano, Luigi Collarile","doi":"10.1145/2909132.2926077","DOIUrl":"https://doi.org/10.1145/2909132.2926077","url":null,"abstract":"This paper explores some techniques for visually synchronizing the music notation in ancient choir-books in which, differently from modern music scores, the different voices are independently placed in the four quadrants of an open volume large page. We propose three visual techniques to show the synchronization between the different voices in comparison with a modern music notation.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122127918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multitouch Radial Menu Integrating Command Selection and Control of Arguments with up to 4 Degrees of Freedom","authors":"Shrey Gupta, Michael J. McGuffin","doi":"10.1145/2909132.2909266","DOIUrl":"https://doi.org/10.1145/2909132.2909266","url":null,"abstract":"We design and evaluate a multitouch radial menu for large screens with two desirable properties. First, it allows a single gesture to select a command and then continuously control arguments for that command with unbroken kinesthetic tension. Second, arguments are controlled with 1 or 2 fingers for up to 4 degrees of freedom (DoF). For example, the user may select one command for 4 DoF direct manipulation (translation + scaling + rotation), or another command for 3 DoF camera operations (pan + zoom), using the same two-finger pinch gesture, but with different initial orientations of the gesture to disambiguate. We present a taxonomy to classify previous menuing techniques sharing the first property, and discuss how very few techniques have both of these properties. Our work also extends previous work by Banovic et al. in the following ways: our menu supports submenus and a fast default command, and we experimentally evaluate the effect of varying the number of rings in the menu, the symmetry of the menu, and the use of one hand vs. two hands vs. a stylus and hand.","PeriodicalId":250565,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126021801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}