Empirical examination of a collaborative web application

Christopher Stewart, Matthew Leventi, Kai Shen
{"title":"协作式web应用程序的实证检验","authors":"Christopher Stewart, Matthew Leventi, Kai Shen","doi":"10.1109/IISWC.2008.4636094","DOIUrl":null,"url":null,"abstract":"Online instructional applications, social networking sites, Wiki-based Web sites, and other emerging Web applications that rely on end users for the generation of web content are increasingly popular. However, these collaborative Web applications are still absent from the benchmark suites commonly used in the evaluation of online systems. This paper argues that collaborative Web applications are unlike traditional online benchmarks, and therefore warrant a new class of benchmarks. Specifically, request behaviors in collaborative Web applications are determined by contributions from end users, which leads to qualitatively more diverse server-side resource requirements and execution patterns compared to traditional online benchmarks. Our arguments stem from an empirical examination of WeBWorK-a widely-used collaborative Web application that allows teachers to post math or physics problems for their students to solve online. Compared to traditional online benchmarks (like TPC-C, SPECweb, and RUBiS), WeBWorK requests are harder to cluster according to their resource consumption, and they follow less regular patterns. Further, we demonstrate that the use of a WeBWorK-style benchmark would probably have led to different results in some recent research studies concerning request classification from event chains and type-based resource usage prediction.","PeriodicalId":447179,"journal":{"name":"2008 IEEE International Symposium on Workload Characterization","volume":"220 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"24","resultStr":"{\"title\":\"Empirical examination of a collaborative web application\",\"authors\":\"Christopher Stewart, Matthew Leventi, Kai Shen\",\"doi\":\"10.1109/IISWC.2008.4636094\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Online instructional applications, social networking sites, Wiki-based Web sites, and other emerging Web applications that rely on end users for the generation of web content are increasingly popular. However, these collaborative Web applications are still absent from the benchmark suites commonly used in the evaluation of online systems. This paper argues that collaborative Web applications are unlike traditional online benchmarks, and therefore warrant a new class of benchmarks. Specifically, request behaviors in collaborative Web applications are determined by contributions from end users, which leads to qualitatively more diverse server-side resource requirements and execution patterns compared to traditional online benchmarks. Our arguments stem from an empirical examination of WeBWorK-a widely-used collaborative Web application that allows teachers to post math or physics problems for their students to solve online. Compared to traditional online benchmarks (like TPC-C, SPECweb, and RUBiS), WeBWorK requests are harder to cluster according to their resource consumption, and they follow less regular patterns. 
Further, we demonstrate that the use of a WeBWorK-style benchmark would probably have led to different results in some recent research studies concerning request classification from event chains and type-based resource usage prediction.\",\"PeriodicalId\":447179,\"journal\":{\"name\":\"2008 IEEE International Symposium on Workload Characterization\",\"volume\":\"220 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"24\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 IEEE International Symposium on Workload Characterization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IISWC.2008.4636094\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 IEEE International Symposium on Workload Characterization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISWC.2008.4636094","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 24

Abstract

Online instructional applications, social networking sites, Wiki-based Web sites, and other emerging Web applications that rely on end users for the generation of web content are increasingly popular. However, these collaborative Web applications are still absent from the benchmark suites commonly used in the evaluation of online systems. This paper argues that collaborative Web applications are unlike traditional online benchmarks, and therefore warrant a new class of benchmarks. Specifically, request behaviors in collaborative Web applications are determined by contributions from end users, which leads to qualitatively more diverse server-side resource requirements and execution patterns compared to traditional online benchmarks. Our arguments stem from an empirical examination of WeBWorK, a widely-used collaborative Web application that allows teachers to post math or physics problems for their students to solve online. Compared to traditional online benchmarks (like TPC-C, SPECweb, and RUBiS), WeBWorK requests are harder to cluster according to their resource consumption, and they follow less regular patterns. Further, we demonstrate that the use of a WeBWorK-style benchmark would probably have led to different results in some recent research studies concerning request classification from event chains and type-based resource usage prediction.
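To make the clustering claim concrete, here is a minimal sketch (not from the paper; the workload, request types, and numbers are all invented) of the kind of analysis the abstract describes: per-request resource-usage vectors are grouped with k-means and the resulting clusters are scored against request types. For a traditional workload with tight per-type resource footprints, cluster purity comes out near 1.0; the paper's finding is that WeBWorK-style requests, whose costs depend on user-contributed content, resist this kind of grouping.

```python
# Hypothetical sketch: cluster per-request resource-usage vectors with
# k-means, then check how well clusters align with request types. The
# request log below is entirely synthetic.
import random
from collections import Counter

random.seed(0)

def kmeans(points, k, iters=50):
    """Plain k-means on 2-D points; returns a cluster label per point."""
    centers = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, (x, y) in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: (x - centers[c][0]) ** 2 +
                                          (y - centers[c][1]) ** 2)
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return labels

# Synthetic per-request measurements: (cpu_ms, kb_read). Each "traditional"
# request type has a tight, type-specific resource footprint.
log = [("browse", (random.gauss(5, 1), random.gauss(20, 3))) for _ in range(100)] \
    + [("checkout", (random.gauss(40, 2), random.gauss(5, 1))) for _ in range(100)]

types, points = zip(*log)
labels = kmeans(list(points), k=2)

# Purity: fraction of requests whose cluster's dominant type matches their own.
by_cluster = {}
for t, c in zip(types, labels):
    by_cluster.setdefault(c, []).append(t)
dominant = {c: Counter(ts).most_common(1)[0][0] for c, ts in by_cluster.items()}
purity = sum(dominant[c] == t for t, c in zip(types, labels)) / len(labels)
print(f"cluster purity: {purity:.2f}")  # near 1.0 for this regular workload
```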