{"title":"使用自动测试进行用户界面评估实验","authors":"Kania Katherina , Dany Eka Saputra","doi":"10.1016/j.procs.2024.10.233","DOIUrl":null,"url":null,"abstract":"<div><div>Learnability is one important aspect of user interaction that measures how long a user needs to familiarize themselves with the software. The evaluation method using expert analysis or user questionnaire cannot fully capture the learnability aspect of a software. Automated testing can record the user performance data and provide an objective evaluation of learnability. However, embedding recording code to conduct automated test can be expensive. This work proposes a novel method of automatic testing to evaluate the learnability of an existing software. By using Figma and Maze apps, a replica of evaluated software is made and injected with users’ performance recording module with much ease. The result of the experiment shows that learnability data can be acquired objectively. In the experiment, the user of evaluated software requires an average learning rate of 3 iterations. While the average completion time is around 2.37 seconds per action for trained respondents and 1.86 seconds for untrained respondents.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"245 ","pages":"Pages 100-108"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Experiment on UI evaluation using automated test\",\"authors\":\"Kania Katherina , Dany Eka Saputra\",\"doi\":\"10.1016/j.procs.2024.10.233\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Learnability is one important aspect of user interaction that measures how long a user needs to familiarize themselves with the software. The evaluation method using expert analysis or user questionnaire cannot fully capture the learnability aspect of a software. 
Automated testing can record the user performance data and provide an objective evaluation of learnability. However, embedding recording code to conduct automated test can be expensive. This work proposes a novel method of automatic testing to evaluate the learnability of an existing software. By using Figma and Maze apps, a replica of evaluated software is made and injected with users’ performance recording module with much ease. The result of the experiment shows that learnability data can be acquired objectively. In the experiment, the user of evaluated software requires an average learning rate of 3 iterations. While the average completion time is around 2.37 seconds per action for trained respondents and 1.86 seconds for untrained respondents.</div></div>\",\"PeriodicalId\":20465,\"journal\":{\"name\":\"Procedia Computer Science\",\"volume\":\"245 \",\"pages\":\"Pages 100-108\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Procedia Computer Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1877050924030412\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Procedia Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1877050924030412","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
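The abstract reports two learnability metrics: a learning rate (the number of practice iterations until performance stabilizes) and an average completion time per action. A minimal sketch of how such metrics might be computed from recorded attempt data is shown below; the data format, the stabilization threshold, and both function names are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch: computing learnability metrics of the kind the
# abstract reports from recorded task-attempt timings. The 10% stability
# threshold is an assumed convention, not the paper's definition.

def learning_rate(attempt_times, threshold=1.10):
    """Iterations until completion time stabilizes: the first attempt
    whose time is within `threshold` (here 10%) of the next attempt's
    time is taken as the learning point."""
    for i in range(len(attempt_times) - 1):
        if attempt_times[i] <= attempt_times[i + 1] * threshold:
            return i + 1
    return len(attempt_times)

def avg_time_per_action(total_time_s, n_actions):
    """Average completion time per action in seconds."""
    return total_time_s / n_actions

# Example: one user's task completion times over 4 practice iterations.
times = [12.4, 8.1, 5.9, 5.8]
print(learning_rate(times))          # iterations needed to stabilize
print(avg_time_per_action(5.8, 3))   # seconds per action in final run
```

Under this assumed definition, the example user stabilizes after 3 iterations, which is the same granularity of result (an iteration count and a per-action time) that the experiment reports for its respondents.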