{"title":"More bang for your research buck: toward recommender systems for visual analytics","authors":"L. Blaha, Dustin L. Arendt, Fairul Mohd-Zaid","doi":"10.1145/2669557.2669566","DOIUrl":null,"url":null,"abstract":"We propose a set of common sense steps required to develop a recommender system for visual analytics. Such a system is an essential way to get additional mileage out of costly user studies, which are typically archived post publication. Crucially, we propose conducting user studies in a manner that allows machine learning techniques to elucidate relationships between experimental data (i.e., user performance) and metrics about the data being visualized and candidate visual representations. We execute a case study within our framework to extract simple rules of thumb that relate different data metrics and visualization characteristics to patterns of user errors on several network analysis tasks. Our case study suggests a research agenda supporting the development of general, robust visualization recommender systems.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2669557.2669566","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
More bang for your research buck: toward recommender systems for visual analytics
We propose a set of common-sense steps required to develop a recommender system for visual analytics. Such a system is an essential way to get additional mileage out of costly user studies, which are typically archived after publication. Crucially, we propose conducting user studies in a manner that allows machine learning techniques to elucidate relationships between experimental data (i.e., user performance) and metrics describing both the data being visualized and the candidate visual representations. We execute a case study within our framework to extract simple rules of thumb that relate different data metrics and visualization characteristics to patterns of user errors on several network analysis tasks. Our case study suggests a research agenda supporting the development of general, robust visualization recommender systems.