Artifact Description/Artifact Evaluation: A Reproducibility Bane or a Boon

T. Malik
{"title":"人工制品描述/人工制品评估:可复制性的缺点或优点","authors":"T. Malik","doi":"10.1145/3456287.3465479","DOIUrl":null,"url":null,"abstract":"Several systems research conferences now incorporate an artifact description and artifact evaluation (AD/AE) process as part of the paper submission. Authors of accepted papers optionally submit a plethora of artifacts: documentation, links, tools, code, data, and scripts for independent validation of the claims in their paper. An artifact evaluation committee (AEC) evaluates the artifacts and stamps papers with accepted artifacts, which then receive publisher badges. Does this AD/AE process serve authors and reviewers? Is it scalable for large conferences such as SCxy? Using the last three SCxy Reproducibility Initiatives as the basis, this talk will analyze the benefits and the miseries of the AD/AE process. Several systems research conferences now incorporate an artifact description and artifact evaluation (AD/AE) process as part of the paper submission. Authors of accepted papers optionally submit a plethora of artifacts: documentation, links, tools, code, data, and scripts for independent validation of the claims in their paper. An artifact evaluation committee (AEC) evaluates the artifacts and stamps papers with accepted artifacts, which then receive publisher badges. Does this AD/AE process serve authors and reviewers? Is it scalable for large conferences such as SCxy? Using the last three SCxy Reproducibility Initiatives as the basis, this talk will analyze the benefits and the miseries of the AD/AE process. We will present a data-driven approach, using survey results to analyze technical and human challenges in conducting the AD/AE process. Our method will distinguish studies that benefit from AD, i.e., increased transparency versus areas that benefit from AE. The AD/AE research objects [1] present an interesting set of data management and systems challenges [2,3]. We will look under the hood of the research objects, describe prominent characteristics, and how cloud infrastructures, documented workflows, and reproducible containers [4] ease some of the AD/AE process hand-shakes. Finally, we will present a vision for the resulting curated, reusable research objects---how such research objects are a treasure in themselves for advancing computational reproducibility and making reproducible evaluation practical in the coming years.","PeriodicalId":419516,"journal":{"name":"Proceedings of the 4th International Workshop on Practical Reproducible Evaluation of Computer Systems","volume":"905 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artifact Description/Artifact Evaluation: A Reproducibility Bane or a Boon\",\"authors\":\"T. Malik\",\"doi\":\"10.1145/3456287.3465479\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Several systems research conferences now incorporate an artifact description and artifact evaluation (AD/AE) process as part of the paper submission. Authors of accepted papers optionally submit a plethora of artifacts: documentation, links, tools, code, data, and scripts for independent validation of the claims in their paper. An artifact evaluation committee (AEC) evaluates the artifacts and stamps papers with accepted artifacts, which then receive publisher badges. Does this AD/AE process serve authors and reviewers? Is it scalable for large conferences such as SCxy? 
Using the last three SCxy Reproducibility Initiatives as the basis, this talk will analyze the benefits and the miseries of the AD/AE process. Several systems research conferences now incorporate an artifact description and artifact evaluation (AD/AE) process as part of the paper submission. Authors of accepted papers optionally submit a plethora of artifacts: documentation, links, tools, code, data, and scripts for independent validation of the claims in their paper. An artifact evaluation committee (AEC) evaluates the artifacts and stamps papers with accepted artifacts, which then receive publisher badges. Does this AD/AE process serve authors and reviewers? Is it scalable for large conferences such as SCxy? Using the last three SCxy Reproducibility Initiatives as the basis, this talk will analyze the benefits and the miseries of the AD/AE process. We will present a data-driven approach, using survey results to analyze technical and human challenges in conducting the AD/AE process. Our method will distinguish studies that benefit from AD, i.e., increased transparency versus areas that benefit from AE. The AD/AE research objects [1] present an interesting set of data management and systems challenges [2,3]. We will look under the hood of the research objects, describe prominent characteristics, and how cloud infrastructures, documented workflows, and reproducible containers [4] ease some of the AD/AE process hand-shakes. Finally, we will present a vision for the resulting curated, reusable research objects---how such research objects are a treasure in themselves for advancing computational reproducibility and making reproducible evaluation practical in the coming years.\",\"PeriodicalId\":419516,\"journal\":{\"name\":\"Proceedings of the 4th International Workshop on Practical Reproducible Evaluation of Computer Systems\",\"volume\":\"905 \",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 4th International Workshop on Practical Reproducible Evaluation of Computer Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3456287.3465479\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th International Workshop on Practical Reproducible Evaluation of Computer Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3456287.3465479","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Several systems research conferences now incorporate an artifact description and artifact evaluation (AD/AE) process as part of the paper submission. Authors of accepted papers optionally submit a plethora of artifacts: documentation, links, tools, code, data, and scripts for independent validation of the claims in their paper. An artifact evaluation committee (AEC) evaluates the artifacts and stamps papers with accepted artifacts, which then receive publisher badges. Does this AD/AE process serve authors and reviewers? Is it scalable for large conferences such as SCxy? Using the last three SCxy Reproducibility Initiatives as the basis, this talk will analyze the benefits and the miseries of the AD/AE process. We will present a data-driven approach, using survey results to analyze the technical and human challenges in conducting the AD/AE process. Our method will distinguish studies that benefit from AD, i.e., increased transparency, from areas that benefit from AE. The AD/AE research objects [1] present an interesting set of data management and systems challenges [2,3]. We will look under the hood of the research objects, describe their prominent characteristics, and show how cloud infrastructures, documented workflows, and reproducible containers [4] ease some of the AD/AE process handshakes. Finally, we will present a vision for the resulting curated, reusable research objects: how such research objects are a treasure in themselves for advancing computational reproducibility and making reproducible evaluation practical in the coming years.
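To make the "scripts for independent validation of the claims" concrete, the sketch below shows one shape such a script could take. It is purely illustrative and not from the talk: the artifact layout (run_experiment.py, claimed_results.json, results.json) and the 5% tolerance are assumed conventions, not anything the AD/AE process prescribes.

```python
#!/usr/bin/env python3
"""Minimal sketch of an AE-style validation script.

Assumed (hypothetical) artifact layout: the artifact ships an
experiment entry point (run_experiment.py) plus a JSON file of the
paper's claimed numbers; a reviewer reruns the experiment and diffs
the measured numbers against the claims.
"""
import json
import subprocess
import sys

TOLERANCE = 0.05  # allow 5% deviation from the claimed numbers (assumed)

def main() -> int:
    # Rerun the packaged experiment; it is expected to write its
    # measurements to results.json (again, a hypothetical convention).
    subprocess.run([sys.executable, "run_experiment.py"], check=True)

    with open("claimed_results.json") as f:
        claimed = json.load(f)
    with open("results.json") as f:
        measured = json.load(f)

    # Compare every claimed metric against the rerun measurement.
    failures = []
    for metric, expected in claimed.items():
        actual = measured.get(metric)
        if actual is None or abs(actual - expected) > TOLERANCE * abs(expected):
            failures.append((metric, expected, actual))

    for metric, expected, actual in failures:
        print(f"MISMATCH {metric}: claimed {expected}, measured {actual}")
    print("artifact validated" if not failures else f"{len(failures)} metric(s) off")
    return 0 if not failures else 1

if __name__ == "__main__":
    raise SystemExit(main())
```

A script like this only covers the claims half of the problem; packaging it inside a reproducible container, as the abstract suggests, is what would pin down the environment half.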