{"title":"Stackables","authors":"Stefanie Klum, Petra Isenberg, R. Langner, Jean-Daniel Fekete, Raimund Dachselt","doi":"10.1145/2212776.2212391","DOIUrl":"https://doi.org/10.1145/2212776.2212391","url":null,"abstract":"We demonstrate Stackables, tangible widgets designed for individual and collaborative faceted browsing. Current interfaces for browsing and search in large data spaces largely focus on supporting either individual or collaborative activities; Stackables, in contrast, supports both. Each stackable facet token represents search parameters that can be shared amongst collaborators, modified, and stored. We show how individuals or multiple people can interact with Stackables and combine them to formulate queries on realistic datasets. We have successfully used and evaluated Stackables in a user study with a dataset of over 1500 books and 12 facets spanning thousands of facet values.","PeriodicalId":216901,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces - AVI '12","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123557813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SpeeG","authors":"Lode Hoste, Bruno Dumas, B. Signer","doi":"10.1145/2254556.2254585","DOIUrl":"https://doi.org/10.1145/2254556.2254585","url":null,"abstract":"We present SpeeG, a multimodal speech- and body gesture-based text input system targeting media centres, set-top boxes and game consoles. Our controller-free zoomable user interface combines speech input with a gesture-based real-time correction of the recognised voice input. While the open source CMU Sphinx voice recogniser transforms speech input into written text, Microsoft's Kinect sensor is used for the hand gesture tracking. A modified version of the zoomable Dasher interface combines the input from Sphinx and the Kinect sensor. In contrast to existing speech error correction solutions with a clear distinction between a detection and correction phase, our innovative SpeeG text input system enables continuous real-time error correction. An evaluation of the SpeeG prototype has revealed that low error rates for a text input speed of about six words per minute can be achieved after a minimal learning phase. Moreover, in a user study SpeeG has been perceived as the fastest of all evaluated user interfaces and therefore represents a promising candidate for future controller-free text input.","PeriodicalId":216901,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces - AVI '12","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126911044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spalendar","authors":"X. Chen, Sebastian Boring, Sheelagh Carpendale, Anthony Tang, S. Greenberg","doi":"10.1145/2254556.2254686","DOIUrl":"https://doi.org/10.1145/2254556.2254686","url":null,"abstract":"Portable paper calendars (i.e., day planners and organizers) have greatly influenced the design of group electronic calendars. Both use time units (hours/days/weeks/etc.) to organize visuals, with useful information (e.g., event types, locations, attendees) usually presented as - perhaps abbreviated or even hidden - text fields within those time units. The problem is that, for a group, this visual sorting of individual events into time buckets conveys only limited information about the social network of people. For example, people's whereabouts cannot be read 'at a glance' but require examining the text. Our goal is to explore an alternate visualization that can reflect and illustrate group members' calendar events. Our main idea is to display the group's calendar events as spatiotemporal activities occurring over a geographic space animated over time, all presented on a highly interactive public display. In particular, our Spalendar (Spatial Calendar) design animates people's past, present and forthcoming movements between event locations as well as their static locations. Details of people's events, movements and locations are progressively revealed and controlled by the viewer's proximity to the display, their identity, and their gestural interactions with it, all of which are tracked by the public display.","PeriodicalId":216901,"journal":{"name":"Proceedings of the International Working Conference on Advanced Visual Interfaces - AVI '12","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131107919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}