G3: bootstrapping stroke gestures design with synthetic samples and built-in recognizers

Daniel Martín-Albo, Luis A. Leiva
DOI: 10.1145/2957265.2961833
Published in: Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct
Publication date: 2016-09-06
Citations: 9

Abstract

Stroke gestures are becoming increasingly important with the ongoing success of touchscreen-capable devices. However, training a high-quality gesture recognizer requires providing a large number of examples to enable good performance on unseen, future data. Furthermore, recruiting participants, data collection and labeling, etc. necessary for achieving this goal are usually time-consuming and expensive. In response to this need, we introduce G3, a mobile-first web application for bootstrapping unistroke, multistroke, or multitouch gestures. The user only has to provide a gesture example once, and G3 will create a kinematic model of that gesture. Then, by introducing local and global perturbations to the model parameters, G3 will generate any number of synthetic human-like samples. In addition, the user can get a gesture recognizer together with the synthesized data. As such, the outcome of G3 can be directly incorporated into production-ready applications.
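The abstract describes generating any number of human-like samples by applying local and global perturbations to a model fitted from a single example. As a rough illustration of that idea only (not G3's actual kinematic model, whose parameters are not given here), one could perturb a recorded point sequence directly, with a global scale/rotation per sample and per-point Gaussian jitter:

```python
import math
import random

def synthesize(gesture, n_samples=10, global_sigma=0.05, local_sigma=1.5, seed=0):
    """Generate synthetic variants of a single stroke gesture.

    `gesture` is a list of (x, y) points from one user-provided example.
    A global perturbation (random scale and rotation about the centroid)
    varies the overall shape; a local perturbation (per-point Gaussian
    jitter) mimics motor noise. This is a simplified stand-in for G3's
    kinematic-model approach, not its actual algorithm.
    """
    rng = random.Random(seed)
    # Centre the gesture so scaling and rotation act about the centroid.
    cx = sum(x for x, _ in gesture) / len(gesture)
    cy = sum(y for _, y in gesture) / len(gesture)
    samples = []
    for _ in range(n_samples):
        scale = rng.gauss(1.0, global_sigma)   # global: size variation
        theta = rng.gauss(0.0, global_sigma)   # global: orientation variation
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        sample = []
        for x, y in gesture:
            dx, dy = (x - cx) * scale, (y - cy) * scale
            rx = dx * cos_t - dy * sin_t
            ry = dx * sin_t + dy * cos_t
            # Local perturbation: independent jitter on every point.
            sample.append((cx + rx + rng.gauss(0.0, local_sigma),
                           cy + ry + rng.gauss(0.0, local_sigma)))
        samples.append(sample)
    return samples

# One example in, many synthetic variants out.
square = [(0, 0), (100, 0), (100, 100), (0, 100), (0, 0)]
variants = synthesize(square, n_samples=5)
print(len(variants), len(variants[0]))  # 5 5
```

The synthesized variants could then be fed, together with the original example, to an off-the-shelf stroke recognizer as training data, which is the workflow the abstract describes.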