{"title":"CrowdStudy:用于网络界面众包评估的通用工具包","authors":"Michael Nebeling, Maximilian Speicher, M. Norrie","doi":"10.1145/2494603.2480303","DOIUrl":null,"url":null,"abstract":"While traditional usability testing methods can be both time consuming and expensive, tools for automated usability evaluation tend to oversimplify the problem by limiting themselves to supporting only certain evaluation criteria, settings, tasks and scenarios. We present CrowdStudy, a general web toolkit that combines support for automated usability testing with crowdsourcing to facilitate large-scale online user testing. CrowdStudy is based on existing crowdsourcing techniques for recruiting workers and guiding them through complex tasks, but implements mechanisms specifically designed for usability studies, allowing testers to control user sampling and conduct evaluations for particular contexts of use. Our toolkit provides support for context-aware data collection and analysis based on an extensible set of metrics, as well as tools for managing, reviewing and analysing any collected data. The paper demonstrates several useful features of CrowdStudy for two different scenarios, and discusses the benefits and tradeoffs of using crowdsourced evaluation.","PeriodicalId":163033,"journal":{"name":"Engineering Interactive Computing System","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"44","resultStr":"{\"title\":\"CrowdStudy: general toolkit for crowdsourced evaluation of web interfaces\",\"authors\":\"Michael Nebeling, Maximilian Speicher, M. Norrie\",\"doi\":\"10.1145/2494603.2480303\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"While traditional usability testing methods can be both time consuming and expensive, tools for automated usability evaluation tend to oversimplify the problem by limiting themselves to supporting only certain evaluation criteria, settings, tasks and scenarios. We present CrowdStudy, a general web toolkit that combines support for automated usability testing with crowdsourcing to facilitate large-scale online user testing. CrowdStudy is based on existing crowdsourcing techniques for recruiting workers and guiding them through complex tasks, but implements mechanisms specifically designed for usability studies, allowing testers to control user sampling and conduct evaluations for particular contexts of use. Our toolkit provides support for context-aware data collection and analysis based on an extensible set of metrics, as well as tools for managing, reviewing and analysing any collected data. 
The paper demonstrates several useful features of CrowdStudy for two different scenarios, and discusses the benefits and tradeoffs of using crowdsourced evaluation.\",\"PeriodicalId\":163033,\"journal\":{\"name\":\"Engineering Interactive Computing System\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-06-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"44\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Interactive Computing System\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2494603.2480303\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Interactive Computing System","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2494603.2480303","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
CrowdStudy: general toolkit for crowdsourced evaluation of web interfaces
Abstract: While traditional usability testing methods can be both time-consuming and expensive, tools for automated usability evaluation tend to oversimplify the problem by limiting themselves to supporting only certain evaluation criteria, settings, tasks and scenarios. We present CrowdStudy, a general web toolkit that combines support for automated usability testing with crowdsourcing to facilitate large-scale online user testing. CrowdStudy is based on existing crowdsourcing techniques for recruiting workers and guiding them through complex tasks, but implements mechanisms specifically designed for usability studies, allowing testers to control user sampling and conduct evaluations for particular contexts of use. Our toolkit provides support for context-aware data collection and analysis based on an extensible set of metrics, as well as tools for managing, reviewing and analysing any collected data. The paper demonstrates several useful features of CrowdStudy for two different scenarios, and discusses the benefits and tradeoffs of using crowdsourced evaluation.
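The abstract itself contains no code, and CrowdStudy's actual API is not shown here. Purely as an illustrative sketch, with every name hypothetical, an "extensible set of metrics" with "context-aware data collection" in a web toolkit of this kind might be modelled as a small plugin interface:

```typescript
// Hypothetical sketch only -- not CrowdStudy's actual API.
// Shows one way to model extensible, context-aware usability metrics.

// Context of use captured per participant (e.g. device, viewport).
interface UseContext {
  userAgent: string;
  viewport: { width: number; height: number };
  inputMethod: "mouse" | "touch";
}

// A metric plugin: collects raw samples, then reduces them to a score.
interface UsabilityMetric<T> {
  name: string;
  attach(target: Document, record: (sample: T) => void): void;
  summarize(samples: T[], context: UseContext): number;
}

// Example metric: time from attachment to first click, in milliseconds.
const timeToFirstClick: UsabilityMetric<number> = {
  name: "time-to-first-click",
  attach(target, record) {
    const start = performance.now();
    target.addEventListener(
      "click",
      () => record(performance.now() - start),
      { once: true }
    );
  },
  summarize(samples) {
    // Mean over all recorded samples for this participant.
    return samples.reduce((a, b) => a + b, 0) / Math.max(samples.length, 1);
  },
};
```

Under this design, a study server could ship any number of such plugins to crowd workers' browsers and aggregate `summarize` results per context of use; how CrowdStudy actually structures this is described in the paper itself.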