{"title":"G3:用合成样本和内置识别器引导笔画手势设计","authors":"Daniel Martín-Albo, Luis A. Leiva","doi":"10.1145/2957265.2961833","DOIUrl":null,"url":null,"abstract":"Stroke gestures are becoming increasingly important with the ongoing success of touchscreen-capable devices. However, training a high-quality gesture recognizer requires providing a large number of examples to enable good performance on unseen, future data. Furthermore, recruiting participants, data collection and labeling, etc. necessary for achieving this goal are usually time-consuming and expensive. In response to this need, we introduce G3, a mobile-first web application for bootstrapping unistroke, multistroke, or multitouch gestures. The user only has to provide a gesture example once, and G3 will create a kinematic model of that gesture. Then, by introducing local and global perturbations to the model parameters, G3 will generate any number of synthetic human-like samples. In addition, the user can get a gesture recognizer together with the synthesized data. As such, the outcome of G3 can be directly incorporated into production-ready applications.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"G3: bootstrapping stroke gestures design with synthetic samples and built-in recognizers\",\"authors\":\"Daniel Martín-Albo, Luis A. Leiva\",\"doi\":\"10.1145/2957265.2961833\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Stroke gestures are becoming increasingly important with the ongoing success of touchscreen-capable devices. However, training a high-quality gesture recognizer requires providing a large number of examples to enable good performance on unseen, future data. Furthermore, recruiting participants, data collection and labeling, etc. necessary for achieving this goal are usually time-consuming and expensive. In response to this need, we introduce G3, a mobile-first web application for bootstrapping unistroke, multistroke, or multitouch gestures. The user only has to provide a gesture example once, and G3 will create a kinematic model of that gesture. Then, by introducing local and global perturbations to the model parameters, G3 will generate any number of synthetic human-like samples. In addition, the user can get a gesture recognizer together with the synthesized data. 
As such, the outcome of G3 can be directly incorporated into production-ready applications.\",\"PeriodicalId\":131157,\"journal\":{\"name\":\"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2957265.2961833\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2957265.2961833","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
G3: bootstrapping stroke gestures design with synthetic samples and built-in recognizers
Stroke gestures are becoming increasingly important with the ongoing success of touchscreen-capable devices. However, training a high-quality gesture recognizer requires a large number of examples to achieve good performance on unseen, future data, and the work this entails (recruiting participants, collecting and labeling data, and so on) is usually time-consuming and expensive. In response to this need, we introduce G3, a mobile-first web application for bootstrapping unistroke, multistroke, or multitouch gestures. The user has to provide a gesture example only once, and G3 will create a kinematic model of that gesture. Then, by introducing local and global perturbations to the model parameters, G3 will generate any number of synthetic human-like samples. In addition, the user can obtain a gesture recognizer together with the synthesized data. As such, the output of G3 can be incorporated directly into production-ready applications.
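To make the perturbation idea concrete, here is a minimal sketch of synthesizing gesture variants from a single recorded example. This is not the authors' implementation: G3 fits a kinematic model and perturbs its parameters, whereas the sketch below applies simple geometric perturbations (a global rotation and scale per sample, plus local per-point jitter) directly to the recorded trajectory. All names and parameters (synthesize, local_noise, max_rotation, max_scale) are hypothetical.

```python
import math
import random

# A gesture is a list of (x, y, t) points, e.g. one recorded stroke.
Gesture = list[tuple[float, float, float]]

def synthesize(gesture: Gesture, n_samples: int = 10,
               local_noise: float = 1.5, max_rotation: float = 0.05,
               max_scale: float = 0.08) -> list[Gesture]:
    """Generate n_samples human-like variants of a recorded gesture.

    Hypothetical sketch: approximates G3's model-parameter perturbation
    with geometric perturbations applied to the trajectory itself.
    """
    # Centroid, used as the pivot for the global rotation/scaling.
    cx = sum(p[0] for p in gesture) / len(gesture)
    cy = sum(p[1] for p in gesture) / len(gesture)

    samples = []
    for _ in range(n_samples):
        # Global perturbation: one rotation angle and scale per sample.
        theta = random.uniform(-max_rotation, max_rotation)
        scale = 1.0 + random.uniform(-max_scale, max_scale)
        cos_t, sin_t = math.cos(theta), math.sin(theta)

        variant = []
        for x, y, t in gesture:
            # Rotate and scale around the centroid.
            dx, dy = x - cx, y - cy
            rx = scale * (dx * cos_t - dy * sin_t) + cx
            ry = scale * (dx * sin_t + dy * cos_t) + cy
            # Local perturbation: small Gaussian jitter per point.
            rx += random.gauss(0.0, local_noise)
            ry += random.gauss(0.0, local_noise)
            variant.append((rx, ry, t))
        samples.append(variant)
    return samples

# Usage: record one stroke, then synthesize training data for a recognizer.
circle = [(50 + 40 * math.cos(a), 50 + 40 * math.sin(a), i * 0.01)
          for i, a in enumerate(x * math.pi / 16 for x in range(33))]
variants = synthesize(circle, n_samples=100)
```

The local/global split mirrors the abstract's description: per-point jitter stands in for motor noise within a single execution, while the per-sample rotation and scale stand in for variation across repetitions and users. The resulting samples could then be used to train a stroke gesture recognizer in place of manually collected examples.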