Luis A. Leiva, Daniel Martín-Albo, Radu-Daniel Vatavu
GATO
DOI: 10.1145/3229434.3229478
Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services
Published: 2018-09-03
Citations: 3
Abstract
We introduce GATO, a human performance analysis technique grounded in the Kinematic Theory that delivers accurate predictions of the expected user production time for stroke gestures of all kinds: unistrokes, multistrokes, multitouch, or combinations thereof. Our experimental results on several public datasets (82 distinct gesture types, 123 participants, ≈36k gesture samples) show that GATO predicts user-independent gesture production times that correlate rs > .9 with ground truth, while delivering an average relative error of less than 10% with respect to actual measured times. With its accurate a priori estimations of users' time performance with stroke gesture input, GATO will help researchers better understand users' gesture articulation patterns on touchscreen devices of all kinds. GATO will also help practitioners design highly effective gesture sets.
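The two evaluation metrics the abstract reports, Spearman rank correlation (rs) between predicted and measured production times, and average relative error with respect to the measured times, can be computed as sketched below. This is an illustrative sketch only, not code from the paper; the predicted/measured values are made up.

```python
# Illustrative sketch (not from the paper): evaluating predicted gesture
# production times against measured ones with the two metrics the abstract
# reports. All data values below are hypothetical.

def rank(values):
    """Return 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Group tied values and assign them their average rank.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rs: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def mean_relative_error(pred, actual):
    """Average of |predicted - measured| / measured."""
    return sum(abs(p - a) / a for p, a in zip(pred, actual)) / len(pred)

# Hypothetical predicted vs. measured production times (seconds).
predicted = [0.42, 0.95, 1.30, 0.60, 1.10]
measured  = [0.45, 1.00, 1.25, 0.58, 1.20]

print(spearman(predicted, measured))            # rank agreement
print(mean_relative_error(predicted, measured))  # average relative error
```

Spearman's rs only cares about monotonic agreement, so a predictor can rank gestures perfectly (rs = 1) while still being off in absolute terms; the relative-error metric captures that second aspect, which is why the abstract reports both.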