Albert Kalim, J. Hayes, Satrio Husodo, Erin Combs, Jared Payne
Multi-user Input in Determining Answer Sets (MIDAS)
Published in: 2018 IEEE 26th International Requirements Engineering Conference (RE)
Publication date: 2018-08-01
DOI: 10.1109/RE.2018.00070
Citations: 1
Abstract
Empirical validation is an important component of sound requirements engineering research. Many researchers develop a gold standard or answer set against which to compare techniques that they also developed in order to calculate common measures such as recall and precision. This poses threats to validity as the researchers developed the gold standard and the technique to be measured against it. To help address this and to help reduce bias, we introduce a prototype of Multi-user Input in Determining Answer Sets (MIDAS), a web-based tool to permit communities of researchers to jointly determine the gold standard for a given research data set. To date, the tool permits community members to add items to the answer set, vote on items in the answer set, comment on items, and view the latest status of community opinion on answer set items. It currently supports traceability data sets and classification data sets.
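The abstract describes community voting on candidate answer-set items and the use of the resulting gold standard to compute recall and precision. The sketch below illustrates that workflow under stated assumptions: a simple majority vote resolves the community answer set, and a technique's retrieved trace links are scored against it. The function names, vote format, and majority threshold are illustrative assumptions, not part of MIDAS's actual interface.

```python
# Hypothetical sketch of a MIDAS-style workflow: resolve a community
# answer set by majority vote, then score a technique against it.
# All names, data shapes, and the majority rule are assumptions for
# illustration; they do not reflect the MIDAS tool's real API.

def resolve_answer_set(votes):
    """Keep candidate items approved by a strict majority of voters.

    votes: dict mapping candidate item -> list of booleans (True = approve).
    """
    return {item for item, ballots in votes.items()
            if sum(ballots) > len(ballots) / 2}

def precision_recall(retrieved, answer_set):
    """Standard precision and recall of retrieved items vs. the gold set."""
    true_positives = len(retrieved & answer_set)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(answer_set) if answer_set else 0.0
    return precision, recall

# Illustrative traceability candidates: (requirement, source file) pairs.
votes = {
    ("REQ-1", "login.c"): [True, True, False],   # approved 2-1
    ("REQ-2", "auth.c"):  [True, False, False],  # rejected 1-2
    ("REQ-3", "db.c"):    [True, True, True],    # approved 3-0
}
gold = resolve_answer_set(votes)
retrieved = {("REQ-1", "login.c"), ("REQ-2", "auth.c")}
p, r = precision_recall(retrieved, gold)  # p = 0.5, r = 0.5
```

Because the gold standard here emerges from multiple voters rather than from the technique's own authors, scores computed this way are less vulnerable to the self-validation bias the abstract describes.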