Competitive testing: issues and methodology

K. Greenwood, Kelly Braun, S. Czarkowski
{"title":"Competitive testing: issues and methodology","authors":"K. Greenwood, Kelly Braun, S. Czarkowski","doi":"10.1145/286498.286834","DOIUrl":null,"url":null,"abstract":"The purpose of this Special Interest Group is to provide a forum for Usability professionals with an interest in performing Competitive Tests to discuss issues and exchange advice. There is very little information regarding appropriate methodology or guidelines for performing Competitive Tests published on this topic. This Special Interest Group will provide an opportunity for individuals whose work involves the performance or review of competitive tests to share tips and techniques and will serve as an avenue for those interested in competitive testing to gain insight on the differences between competitive and diagnostic usability tests. In addition, it will allow the members of CHI to discuss the option of adopting standardized methodologies and metrics for performing competitive usability tests. The organizers of this Special Interest Group are three Usability Professionals with experience planning and conducting Competitive Tests. The majority of the meeting will be devoted to an informal discussion of the Competitive Testing experiences of both the organizers and other meeting participants. Issues that we intend to discuss include: goals of competitive testing, how competitive testing differs from other forms of usability testing, legal and licensing issues, who performs the various tasks, and when to hire a consultant. Emphasis will be placed upon identifying useful techniques, potential problems and solutions to these problems. The final portion of the meeting will be reserved for a brainstorming session on a set of basic and commonly agreed upon techniques and standards for organizing, designing, and performing competitive tests and reporting test results. Guidelines for competitive usability testing are necessary to assist individuals who are unfamiliar with competitive testing requirements, and as a resource to ensure lack of bias when designing and running a Competitive Test. It is also hoped that the creation of this rough set of guidelines would encourage further discussion of the creation of a standardized Competitive Testing Methodology that members of CHI and other Human Factors Professionals could refer to. The guidelines produced by this brainstorming session will be distributed to SIG participants after CHI via E-mail or regular mail. Used with permission of Oracle Corporation and American Institutes for Research","PeriodicalId":153619,"journal":{"name":"CHI 98 Conference Summary on Human Factors in Computing Systems","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"CHI 98 Conference Summary on Human Factors in Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/286498.286834","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The purpose of this Special Interest Group is to provide a forum for usability professionals with an interest in performing competitive tests to discuss issues and exchange advice. Very little information regarding appropriate methodology or guidelines for performing competitive tests has been published. This Special Interest Group will provide an opportunity for individuals whose work involves performing or reviewing competitive tests to share tips and techniques, and will serve as an avenue for those interested in competitive testing to gain insight into the differences between competitive and diagnostic usability tests. In addition, it will allow the members of CHI to discuss the option of adopting standardized methodologies and metrics for performing competitive usability tests. The organizers of this Special Interest Group are three usability professionals with experience planning and conducting competitive tests. The majority of the meeting will be devoted to an informal discussion of the competitive testing experiences of both the organizers and other meeting participants. Issues that we intend to discuss include: the goals of competitive testing, how competitive testing differs from other forms of usability testing, legal and licensing issues, who performs the various tasks, and when to hire a consultant. Emphasis will be placed on identifying useful techniques, potential problems, and solutions to these problems. The final portion of the meeting will be reserved for a brainstorming session on a set of basic, commonly agreed-upon techniques and standards for organizing, designing, and performing competitive tests and reporting test results. Guidelines for competitive usability testing are necessary to assist individuals who are unfamiliar with competitive testing requirements and to serve as a resource for avoiding bias when designing and running a competitive test. It is also hoped that the creation of this rough set of guidelines will encourage further discussion of a standardized competitive testing methodology that members of CHI and other human factors professionals could refer to. The guidelines produced by this brainstorming session will be distributed to SIG participants after CHI via e-mail or regular mail. Used with permission of Oracle Corporation and American Institutes for Research.