{"title":"Pre-design empiricism for information visualization: scenarios, methods, and challenges","authors":"M. Brehmer, Sheelagh Carpendale, Bongshin Lee, Melanie Tory","doi":"10.1145/2669557.2669564","DOIUrl":"https://doi.org/10.1145/2669557.2669564","url":null,"abstract":"Empirical study can inform visualization design, both directly and indirectly. Pre-design empirical methods can be used to characterize work practices and their associated problems in a specific domain, directly motivating design choices during the subsequent development of a specific application or technique. They can also be used to understand how individuals, existing tools, data, and contextual factors interact, indirectly informing later research in our community. Contexts for empirical study vary, and practitioners should carefully select the methods most appropriate for a given situation. This paper discusses some of the challenges associated with conducting pre-design studies by way of four illustrative scenarios, highlighting the methods as well as the challenges unique to the visualization domain. We encourage researchers and practitioners to conduct more pre-design empirical studies and to describe their use of empirical methods for informing design in greater detail.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134192244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crowdster: enabling social navigation in web-based visualization using crowdsourced evaluation","authors":"Yuet Ling Wong, N. Elmqvist","doi":"10.1145/2669557.2669567","DOIUrl":"https://doi.org/10.1145/2669557.2669567","url":null,"abstract":"Evaluation is typically seen as a validation tool for visualization, but the proliferation of web-based visualization enables a radically new approach that uses crowdsourced evaluation for emergent collaboration, where one user's efforts facilitate a crowd of future users. The idea is simple: instead of using clickstreams, keyboard input, and interaction logs merely to collect performance metrics for individual participants in a user study, interaction data is aggregated from the running visualization, integrated back into the visual representation, and then collected and evaluated together with the existing data. Known as social navigation, this enables users to build on the work of previous users, for example by seeing collective annotations, the most commonly selected data points, and the most popular locations in the visual space. However, while web-based visualizations are by definition distributed using a web server, most do not maintain the server-side database connections and aggregation mechanisms needed to achieve this. To bridge this gap between social navigation, its evaluation, and visualization, we present Crowdster, a framework that supports capturing, aggregating, and visualizing user interaction data. We give three examples to showcase the Crowdster framework: a Google Maps app that shows the navigation trails of previous users, a scatterplot matrix that visualizes a density distribution of the most selected data points, and a node-link visualization that supports collective graph layout.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121223425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
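The Crowdster abstract describes a pipeline: collect individual interaction events, aggregate them server-side, and feed the aggregate back into the visual representation as social-navigation cues (e.g., a density of the most selected data points). The paper does not expose its implementation here, so the following is only a minimal illustrative sketch of that aggregation step; the class and method names are invented for the example, not Crowdster's API.

```python
from collections import Counter

class SelectionAggregator:
    """Hypothetical sketch of Crowdster-style aggregation: fold
    selection events from many users into a density map that a
    front end could render as color or opacity cues."""

    def __init__(self):
        self.counts = Counter()

    def record(self, point_id):
        # One user selected this data point; add it to the shared history.
        self.counts[point_id] += 1

    def density(self):
        # Normalize counts to [0, 1] so the visualization can map
        # them directly to a visual channel (e.g., fill opacity).
        if not self.counts:
            return {}
        peak = max(self.counts.values())
        return {p: c / peak for p, c in self.counts.items()}

# Simulated clickstream from several users of the same visualization.
agg = SelectionAggregator()
for event in ["p1", "p2", "p1", "p3", "p1"]:
    agg.record(event)
print(agg.density())  # "p1", the most selected point, gets density 1.0
```

A real deployment would persist these counts in the server-side database the abstract mentions, so that each new user's session starts from the accumulated history of all previous users.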
{"title":"Benchmark data for evaluating visualization and analysis techniques for eye tracking for video stimuli","authors":"K. Kurzhals, Cyrill Fabian Bopp, Jochen Bässler, Felix Ebinger, D. Weiskopf","doi":"10.1145/2669557.2669558","DOIUrl":"https://doi.org/10.1145/2669557.2669558","url":null,"abstract":"For the analysis of eye movement data, an increasing number of methods have emerged to examine different aspects of the data. In particular, due to the complex spatio-temporal nature of gaze data for dynamic stimuli, there has been a need and recent trend toward the development of visualization and visual analytics techniques for such data. With this paper, we provide benchmark data to test visualization and visual analytics methods, as well as other analysis techniques for gaze processing. For eye tracking data from video stimuli in particular, existing datasets often provide little information about the recorded eye movement patterns and are therefore not comprehensive enough to allow for a faithful assessment of the analysis methods. Our benchmark data consists of three ingredients: the dynamic stimuli in the form of video, the eye tracking data, and annotated areas of interest. We designed the video stimuli and the tasks for the participants of the eye tracking experiments to trigger typical viewing patterns, including attentional synchrony, smooth pursuit, and switching of the focus of attention. In total, we created 11 videos with eye tracking data acquired from 25 participants.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122958618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
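The benchmark's three ingredients (video stimuli, eye tracking data, and annotated areas of interest) suggest a natural record layout for anyone consuming such data. The sketch below is an assumed schema for illustration only; the dataset's actual file formats and field names are not given in the abstract, and all identifiers here are invented.

```python
from dataclasses import dataclass, field

@dataclass
class GazeSample:
    # One eye tracking sample: when it was recorded, and where on
    # the video frame the participant looked (pixel coordinates).
    timestamp_ms: int
    x: float
    y: float

@dataclass
class AreaOfInterest:
    # An annotated region, valid over a time span of the video.
    label: str
    start_ms: int
    end_ms: int
    bbox: tuple  # (x, y, width, height) in frame pixels

@dataclass
class BenchmarkRecording:
    # One participant's recording for one video stimulus, tying the
    # three ingredients together.
    video_file: str
    participant_id: str
    samples: list = field(default_factory=list)
    aois: list = field(default_factory=list)

    def samples_in_aoi(self, aoi):
        # Gaze samples that fall inside the AOI both temporally and
        # spatially -- a basic building block when checking an
        # analysis technique against the ground-truth annotations.
        x, y, w, h = aoi.bbox
        return [s for s in self.samples
                if aoi.start_ms <= s.timestamp_ms <= aoi.end_ms
                and x <= s.x <= x + w and y <= s.y <= y + h]

# Tiny usage example with two samples and one annotated region.
rec = BenchmarkRecording("stimulus01.mp4", "participant01")
rec.samples.append(GazeSample(100, 50.0, 50.0))
rec.samples.append(GazeSample(900, 300.0, 300.0))
rec.aois.append(AreaOfInterest("face", 0, 500, (0, 0, 100, 100)))
hits = rec.samples_in_aoi(rec.aois[0])
print(len(hits))  # only the first sample falls inside the AOI
```

Annotated AOIs serve as ground truth in this layout: an analysis method's detected fixation targets can be scored against `samples_in_aoi` style queries per video and participant.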
{"title":"Gamification as a paradigm for the evaluation of visual analytics systems","authors":"Nafees Ahmed, K. Mueller","doi":"10.1145/2669557.2669574","DOIUrl":"https://doi.org/10.1145/2669557.2669574","url":null,"abstract":"The widespread web-based connectivity of people all over the world has yielded new opportunities to recruit humans for visual analytics evaluation and for an abundance of other tasks. In this approach, known as crowdsourcing, participants typically receive monetary incentives. However, while these payments are small per evaluation, the cost can add up for realistically sized studies. Furthermore, since the reward is money, the quality of the evaluation can suffer. Our approach uses radically different incentives, namely entertainment, pleasure, and the feeling of success. We propose a theory, methodology, and framework that allow any visual analytics researcher to turn their evaluation task into an entertaining online game. First experiences with a prototype have shown that such an approach allows tens of thousands of evaluations to be completed in a matter of days at no cost, which is unthinkable with conventional methods.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126369445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}