{"title":"Comparing benchmark task and insight evaluation methods on timeseries graph visualizations","authors":"Purvi Saraiya, Chris North, K. Duca","doi":"10.1145/2110192.2110201","DOIUrl":"https://doi.org/10.1145/2110192.2110201","url":null,"abstract":"A study to compare two different empirical research methods for evaluating visualization tools is described: the traditional benchmark-task method and the insight method. The methods were compared using different criteria such as: the conclusions about the visualization tools provided by each method, the time participants spent during the study, the time and effort required to analyze the resulting empirical data, and the effect of individual differences between participants on the results. The studies used three graph visualization alternatives to associate bioinformatics microarray timeseries data to pathway graph vertices, based on popular approaches used in existing bioinformatics software.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134328858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards information-theoretic visualization evaluation measure: a practical example for Bertin's matrices","authors":"I. Liiv","doi":"10.1145/2110192.2110196","DOIUrl":"https://doi.org/10.1145/2110192.2110196","url":null,"abstract":"This paper presents a discussion about matrix-based representation evaluation measures, including a review of related evaluation measures from different scientific disciplines and a proposal for promising approaches. The paper advocates linking or replacing a large portion of indefinable aesthetics with a mathematical framework and theory backed up by an incomputable function -- Kolmogorov complexity. A suitable information-theoretic evaluation measure is proposed together with a practical approximating implementation example for Bertin's Matrices.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116952077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing information graphics: a critical look at eye tracking","authors":"J. Goldberg, J. Helfman","doi":"10.1145/2110192.2110203","DOIUrl":"https://doi.org/10.1145/2110192.2110203","url":null,"abstract":"Effective graphics are essential for understanding complex information and completing tasks. To assess graphic effectiveness, eye tracking methods can help provide a deeper understanding of scanning strategies that underlie more traditional, high-level accuracy and task completion time results. Eye tracking methods entail many challenges, such as defining fixations, assigning fixations to areas of interest, choosing appropriate metrics, addressing potential errors in gaze location, and handling scanning interruptions. Special considerations are also required designing, preparing, and conducting eye tracking studies. An illustrative eye tracking study was conducted to assess the differences in scanning within and between bar, line, and spider graphs, to determine which graphs best support relative comparisons along several dimensions. There was excessive scanning to locate the correct bar graph in easier tasks. Scanning across bar and line graph dimensions before comparing across graphs was evident in harder tasks. There was repeated scanning between the same dimension of two spider graphs, implying a greater cognitive demand from scanning in a circle that contains multiple linear dimensions, than from scanning the linear axes of bar and line graphs. With appropriate task design and targeted analysis metrics, eye tracking techniques can illuminate visual scanning patterns hidden by more traditional time and accuracy results.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130426080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring information visualization: describing different interaction patterns","authors":"M. Pohl, Sylvia Wiltner, S. Miksch","doi":"10.1145/2110192.2110195","DOIUrl":"https://doi.org/10.1145/2110192.2110195","url":null,"abstract":"Interactive Information Visualization methods engage users in exploratory behavior. Detailed information about such processes can help developers to improve the design of such methods. The following study which is based on software logging describes patterns of such behavior in more detail. Subjects in our study engaged in some activities (e.g. adding data, changing form of visualization) significantly more than in others. They adapted their activity patterns to different tasks, but not fundamentally so. In addition, subjects adopted very systematic sequences of actions. These sequences were quite similar across the whole sample, thus indicating that such sequences might reflect specific problem solving behavior. Davidson's [7] framework of problem solving behavior is used to interpret the results. More research is necessary to show whether similar interaction patterns can be found for the usage of other InfoVis methodologies as well.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129505105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating a synthetic video dataset","authors":"M. Whiting, J. Haack, Carrie Varley","doi":"10.1145/2110192.2110199","DOIUrl":"https://doi.org/10.1145/2110192.2110199","url":null,"abstract":"A synthetic video dataset, scenario, and task were included in the 2009 VAST Challenge, to allow participants an opportunity to demonstrate visual analytic tool use on video data. This is the first time a video challenge had been presented as part of the VAST contest and provided interesting challenges in task and dataset development, video analytic tool development, and metrics for judging entries. We describe the considerations and requirements for generation of a usable challenge, the video creation itself, and some submissions and assessments from that mini-challenge.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123597136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shakespeare's complete works as a benchmark for evaluating multiscale document navigation techniques","authors":"Y. Guiard, M. Beaudouin-Lafon, Yangzhou Du, Caroline Appert, Jean-Daniel Fekete, O. Chapuis","doi":"10.1145/1168149.1168165","DOIUrl":"https://doi.org/10.1145/1168149.1168165","url":null,"abstract":"In this paper, we describe an experimental platform dedicated to the comparative evaluation of multiscale electronic-document navigation techniques. One noteworthy characteristic of our platform is that it allows the user not only to translate the document (for example, to pan and zoom) but also to tilt the virtual camera to obtain freely chosen perspective views of the document. Second, the platform makes it possible to explore, with semantic zooming, the 150,000 verses that comprise the complete works of William Shakespeare. We argue that reaching and selecting one specific verse in this very large text corpus amounts to a perfectly well defined Fitts task, leading to rigorous assessments of target acquisition performance. For lack of a standard, the various multiscale techniques that have been reported recently in the literature are difficult to compare. We recommend that Shakespeare's complete works, converted into a single document that can be zoomed both geometrically and semantically, be used as a benchmark to facilitate systematic experimental comparisons, using Fitts' target acquisition paradigm.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"227 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123270784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A taxonomy of tasks for guiding the evaluation of multidimensional visualizations","authors":"Eliane Regina de Almeida Valiati, M. Pimenta, C. Freitas","doi":"10.1145/1168149.1168169","DOIUrl":"https://doi.org/10.1145/1168149.1168169","url":null,"abstract":"The design of multidimensional visualization techniques is based on the assumption that a graphical representation of a large dataset can give more insight to a user, by providing him/her a more intuitive support in the process of exploiting data. When developing a visualization technique, the analytic and exploratory tasks that a user might need or want to perform on the data should guide the choice of the visual and interaction metaphors implemented by the technique. Usability testing of visualization techniques also needs the definition of users' tasks. The identification and understanding of the nature of the users' tasks in the process of acquiring knowledge from visual representations of data is a recent branch in information visualization research. Some works have proposed taxonomies to organize tasks that a visualization technique should support. This paper proposes a taxonomy of visualization tasks, based on existing taxonomies as well as on the observation of users performing exploratory tasks in a multidimensional data set using two different visualization techniques, Parallel Coordinates and RadViz. Different scenarios involving low-level tasks were estimated for the completion of some high-level tasks, and they were compared to the scenarios observed during the users' experiments.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129702357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methods for the evaluation of an interactive InfoVis tool supporting exploratory reasoning processes","authors":"Markus Rester, M. Pohl","doi":"10.1145/1168149.1168156","DOIUrl":"https://doi.org/10.1145/1168149.1168156","url":null,"abstract":"Developing Information Visualization (InfoVis) techniques for complex knowledge domains makes it necessary to apply alternative methods of evaluation. In the evaluation of Gravi++ we used several methods and studied different user groups. We developed a reporting system yielding data about the insights the subjects gained during the exploration. It provides complex information about subjects' reasoning processes. Log files are valuable for time-dependent analysis of cognitive strategies. Focus groups provide a different view on the process of gaining insights. We assume that our experiences with all these methods can also be applied in similar evaluation studies on InfoVis techniques for complex data.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116181849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating information visualisations","authors":"K. Andrews","doi":"10.1145/1168149.1168151","DOIUrl":"https://doi.org/10.1145/1168149.1168151","url":null,"abstract":"As more experience is being gained with the evaluation of information visualisation interfaces, weaknesses in current evaluation practice are coming to the fore.This position paper presents an overview of currently used evaluation methods, followed by a discussion of my experiences and lessons learned from a series of studies comparing hierarchy browsers.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116813941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating information visualization applications with focus groups: the CourseVis experience","authors":"Riccardo Mazza","doi":"10.1145/1168149.1168155","DOIUrl":"https://doi.org/10.1145/1168149.1168155","url":null,"abstract":"This paper reports our experience of evaluating an application that uses visualization approaches to support instructors in Web based distance education. The evaluation took place in three stages: a focus group, an experimental study, and a semi-structured interview. In this paper we focus our attention on the focus group, and we will show how this evaluation approach can be very effective in uncovering unexpected problems that cannot be identified with analytic evaluations or controlled experiments.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"61 24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123352622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}