The role of evaluation in AI and law: an examination of its different forms in the AI and law journal

Jack G. Conrad, John Zeleznikow
{"title":"The role of evaluation in AI and law: an examination of its different forms in the AI and law journal","authors":"Jack G. Conrad, John Zeleznikow","doi":"10.1145/2746090.2746116","DOIUrl":null,"url":null,"abstract":"This paper explores the presence and forms of evaluation in articles published in the journal Artificial Intelligence and Law for the ten-year period from 2005 through 2014. It represents a meta-level study of some the most significant works produced by the AI and Law community, in this case nearly 140 research articles published in the AI and Law journal. It also compares its findings to previous work conducted on evaluation appearing in the Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL). In addition, the paper highlights works harnessing performance evaluation as one of their chief scientific tools and the means by which they use it. It extends the argument for why evaluation is essential in formal Artificial Intelligence and Law reports such as those in the journal. As in the case of two earlier works on the topic, it pursues answers to the questions: how good is the system, algorithm or proposal?, how reliable is the approach or technique?, and, ultimately, does the method work? The paper investigates the role of performance evaluation in scientific research reports, underscoring the argument that a performance-based 'ethic' signifies a level of maturity and scientific rigor within a community. In addition, the work examines recent publications that address the same critical issue within the broader field of Artificial Intelligence.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2746090.2746116","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 13

Abstract

This paper explores the presence and forms of evaluation in articles published in the journal Artificial Intelligence and Law for the ten-year period from 2005 through 2014. It represents a meta-level study of some of the most significant works produced by the AI and Law community, in this case nearly 140 research articles published in the AI and Law journal. It also compares its findings to previous work on evaluation appearing in the Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL). In addition, the paper highlights works harnessing performance evaluation as one of their chief scientific tools and the means by which they use it. It extends the argument for why evaluation is essential in formal Artificial Intelligence and Law reports such as those in the journal. As in two earlier works on the topic, it pursues answers to the questions: how good is the system, algorithm, or proposal? How reliable is the approach or technique? And, ultimately, does the method work? The paper investigates the role of performance evaluation in scientific research reports, underscoring the argument that a performance-based 'ethic' signifies a level of maturity and scientific rigor within a community. In addition, the work examines recent publications that address the same critical issue within the broader field of Artificial Intelligence.