EverTutor: automatically creating interactive guided tutorials on smartphones by user demonstration
Cheng-Yao Wang, Wei-Chen Chu, Hou-Ren Chen, Chun-Yen Hsu, Mike Y. Chen
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, April 26, 2014. DOI: 10.1145/2556288.2557407
We present EverTutor, a system that automatically generates interactive tutorials on smartphones from user demonstration. For tutorial authors, it simplifies tutorial creation. For tutorial users, it provides contextual step-by-step guidance and avoids frequent context switching between the tutorial and the user's primary task. To generate tutorials automatically, EverTutor records low-level touch events to detect gestures and identify on-screen targets. When a tutorial is browsed, the system uses vision-based techniques to locate the target regions on the current screen and overlays the corresponding input prompt in context. It also verifies the correctness of each user interaction to guide users step by step. We conducted a 6-person user study on creating tutorials and a 12-person user study on browsing tutorials, comparing EverTutor's interactive tutorials to static and video tutorials. The results show that creating tutorials with EverTutor is simpler and faster than producing static or video tutorials. When following tutorials, users completed tasks 3-6 times faster with interactive tutorials than with static or video tutorials, regardless of age group. In terms of user preference, 83% of users chose the interactive tutorial as their preferred type and rated it the easiest to follow and understand.
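The abstract states that EverTutor detects gestures from recorded low-level touch events. The paper's own recording pipeline is not shown here; the following is a minimal sketch, under assumed event formats and thresholds, of how a finger-down-to-finger-up touch sequence could be classified into a tap, long press, or swipe.

```python
# Minimal sketch (not EverTutor's actual code): classify a recorded
# touch sequence into tap / long press / swipe from low-level events.
# The TouchEvent format and the thresholds below are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class TouchEvent:
    t: float   # timestamp in seconds
    x: float   # screen x coordinate in pixels
    y: float   # screen y coordinate in pixels

LONG_PRESS_SECONDS = 0.5   # assumed long-press duration threshold
SWIPE_MIN_PIXELS = 50.0    # assumed minimum travel distance for a swipe

def classify_gesture(events: list[TouchEvent]) -> str:
    """Classify one finger-down..finger-up event sequence."""
    if not events:
        return "none"
    start, end = events[0], events[-1]
    duration = end.t - start.t
    distance = math.hypot(end.x - start.x, end.y - start.y)
    if distance >= SWIPE_MIN_PIXELS:
        return "swipe"
    if duration >= LONG_PRESS_SECONDS:
        return "long_press"
    return "tap"

if __name__ == "__main__":
    drag = [TouchEvent(0.00, 100, 800), TouchEvent(0.15, 100, 500)]
    print(classify_gesture(drag))  # -> "swipe"
```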
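The abstract also mentions vision-based techniques for locating target regions on the current screen before overlaying an input prompt. One common way to do this is template matching on screenshots; the sketch below uses OpenCV for that purpose, with placeholder file names, and is an assumption rather than the paper's documented implementation.

```python
# Minimal sketch (assumption, not the paper's implementation): locate a
# recorded target region in the current screenshot via template matching,
# so a prompt overlay can be drawn at the matched location.
# "screenshot.png" and "target_crop.png" are placeholder file names.
import cv2

def locate_target(screenshot_path: str, template_path: str, threshold: float = 0.8):
    """Return (x, y, w, h) of the best match, or None if below threshold."""
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # target not visible yet; the guide can wait and re-check
    h, w = template.shape
    return (max_loc[0], max_loc[1], w, h)

if __name__ == "__main__":
    region = locate_target("screenshot.png", "target_crop.png")
    print("target region:", region)
```

A step would be considered satisfied when the user's detected gesture lands inside the matched region, which is one plausible way to check interaction correctness as described in the abstract.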