{"title":"nlrpBENCH: A Benchmark for Natural Language Requirements Processing","authors":"W. Tichy, Mathias Landhäußer, Sven J. Körner","doi":"10.5445/IR/1000041252","DOIUrl":null,"url":null,"abstract":"We present nlrpBENCH: a new platform and framework to improve soft- \nware engineering research as well as teaching with focus on requirements engineering \nduring the software engineering process. It is available on http://nlrp.ipd. \nkit.edu. \nRecent advances in natural language processing have made it possible to process \ntextual software requirements automatically, for example checking them for flaws or \ntranslating them into software artifacts. This development is particularly fortunate, \nas the majority of requirements is written in unrestricted natural language. However, \nmany of the tools in in this young area of research have been evaluated only on limited \nsets of examples, because there is no accepted benchmark that could be used to assess \nand compare these tools. To improve comparability and thereby accelerate progress, \nwe have begun to assemble nlrpBENCH, a collection of requirements specifications \nmeant both as a challenge for tools and a yardstick for comparison. \nWe have gathered over 50 requirement texts of varying length and difficulty and \norganized them in benchmark sets. At present, there are two task types: model extrac- \ntion (e.g., generating UML models) and text correction (e.g., eliminating ambiguities). \nEach text is accompanied by the expected result and metrics for scoring results. This \npaper describes the composition of the benchmark and the sources. Due to the brevity \nof this paper, we omit example tools comparisons which are also available.","PeriodicalId":176893,"journal":{"name":"Software Engineering & Management","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Engineering & Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5445/IR/1000041252","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16
Abstract
We present nlrpBENCH, a new platform and framework for improving software engineering research and teaching, with a focus on requirements engineering during the software engineering process. It is available at http://nlrp.ipd.kit.edu.

Recent advances in natural language processing have made it possible to process textual software requirements automatically, for example to check them for flaws or to translate them into software artifacts. This development is particularly fortunate, as the majority of requirements are written in unrestricted natural language. However, many of the tools in this young area of research have been evaluated only on limited sets of examples, because there is no accepted benchmark for assessing and comparing them. To improve comparability and thereby accelerate progress, we have begun to assemble nlrpBENCH, a collection of requirements specifications meant both as a challenge for tools and as a yardstick for comparison.

We have gathered over 50 requirement texts of varying length and difficulty and organized them into benchmark sets. At present, there are two task types: model extraction (e.g., generating UML models) and text correction (e.g., eliminating ambiguities). Each text is accompanied by the expected result and metrics for scoring results. This paper describes the composition of the benchmark and its sources. For brevity, we omit example tool comparisons, which are also available.
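To make the pairing of texts, expected results, and scoring metrics concrete, the following is a minimal Python sketch of how a single benchmark entry might be represented and scored. It is not taken from nlrpBENCH itself: the identifiers (BenchmarkEntry, f1_score, TaskType) and the choice of F1 over extracted model elements are illustrative assumptions, since the abstract does not specify the benchmark's actual data format or metrics.

# Hypothetical sketch of an nlrpBENCH-style entry and its scoring;
# field names and the F1 metric are assumptions, not the benchmark's format.
from dataclasses import dataclass
from enum import Enum


class TaskType(Enum):
    MODEL_EXTRACTION = "model extraction"  # e.g., generating UML models
    TEXT_CORRECTION = "text correction"    # e.g., eliminating ambiguities


@dataclass
class BenchmarkEntry:
    entry_id: str
    task: TaskType
    requirement_text: str      # the natural-language requirements text
    expected_result: set[str]  # gold standard, e.g., expected UML class names


def f1_score(entry: BenchmarkEntry, tool_output: set[str]) -> float:
    """Score a tool's output against the gold standard with F1.

    Precision = hits / |output|; recall = hits / |gold|;
    F1 is their harmonic mean.
    """
    if not tool_output or not entry.expected_result:
        return 0.0
    hits = len(tool_output & entry.expected_result)
    precision = hits / len(tool_output)
    recall = hits / len(entry.expected_result)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    entry = BenchmarkEntry(
        entry_id="example-001",
        task=TaskType.MODEL_EXTRACTION,
        requirement_text="The librarian lends books to members.",
        expected_result={"Librarian", "Book", "Member"},
    )
    # Tool found 2 of 3 gold classes plus one spurious one: F1 = 2/3.
    print(f1_score(entry, {"Librarian", "Book", "Loan"}))

A per-entry score of this kind would let different extraction tools be ranked on the same texts, which is the comparability the benchmark aims to provide.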