Evaluating Alignment Approaches in Superimposed Time-Series and Temporal Event-Sequence Visualizations
Yixuan Zhang, Sara Di Bartolomeo, Fangfang Sheng, H. Jimison, Cody Dunne
2019 IEEE Visualization Conference (VIS), published 2019-08-20
DOI: 10.1109/VISUAL.2019.8933584 (https://doi.org/10.1109/VISUAL.2019.8933584)
Citations: 6
Abstract
Composite temporal event sequence visualizations have included sentinel event alignment techniques to cope with data volume and variety. Prior work has demonstrated the utility of single-event alignment for understanding the precursor, co-occurring, and aftereffect events surrounding a sentinel event. However, the usefulness of single-event alignment has not been sufficiently evaluated in composite visualizations. Furthermore, recently proposed dual-event alignment techniques have not been empirically evaluated. In this work, we designed tasks around temporal event sequence and timing analysis and conducted a controlled experiment on Amazon Mechanical Turk to examine four sentinel event alignment approaches: no sentinel event alignment (NoAlign), single-event alignment (SingleAlign), dual-event alignment with left justification (DualLeft), and dual-event alignment with stretch justification (DualStretch). Differences between approaches were most pronounced with more rows of data. For understanding intermediate events between two sentinel events, dual-event alignment was the clear winner for correctness: 71% vs. 18% for NoAlign and SingleAlign. For understanding the duration between two sentinel events, NoAlign was the clear winner in correctness (88% vs. 36% for DualStretch), completion time (55 seconds vs. 101 seconds for DualLeft), and error (1.5% vs. 8.4% for DualStretch). For understanding precursor and aftereffect events, there was no significant difference among approaches. A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/78fs5.
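The abstract names the four alignment conditions but does not specify how the alignment itself is computed. The sketch below is a minimal illustration, assuming each data row is a list of (timestamp, label) events and that sentinel events are matched by label; the function names, the example row, and the linear stretch normalization are illustrative assumptions, not the authors' implementation or stimuli.

```python
# Illustrative sketch of sentinel-event alignment (not the paper's code).
# Assumptions: each row is a list of (timestamp, label) tuples with numeric
# timestamps, and sentinel events are identified by their labels. A row
# missing the requested sentinel would raise StopIteration in this sketch.

def single_align(row, sentinel):
    """SingleAlign: shift the row so the sentinel event sits at time 0."""
    t0 = next(t for t, label in row if label == sentinel)
    return [(t - t0, label) for t, label in row]

def dual_align_stretch(row, sentinel_a, sentinel_b):
    """DualStretch (illustrative): place sentinel A at 0 and sentinel B at 1,
    linearly rescaling all timestamps by the interval between them."""
    ta = next(t for t, label in row if label == sentinel_a)
    tb = next(t for t, label in row if label == sentinel_b)
    span = (tb - ta) or 1.0  # avoid division by zero if the sentinels coincide
    return [((t - ta) / span, label) for t, label in row]

# Hypothetical example: one patient's events with admission/discharge sentinels.
row = [(0, "symptom"), (2, "admission"), (5, "treatment"), (9, "discharge")]
print(single_align(row, "admission"))                     # admission at t = 0
print(dual_align_stretch(row, "admission", "discharge"))  # admission at 0, discharge at 1
```

Under these assumptions, DualLeft would shift each row by its first sentinel (as in single_align) while keeping the original time scale, whereas DualStretch rescales rows so both sentinels coincide across rows; how the paper's stimuli treat events outside the sentinel interval is not described in the abstract.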