MRDQA: A Deep Multimodal Requirement Document Quality Analyzer

M. Ye, Jicheng Cao, Shengyu Cheng, Dong Liu, Shenghai Xu, Jinning He
{"title":"MRDQA: A Deep Multimodal Requirement Document Quality Analyzer","authors":"M. Ye, Jicheng Cao, Shengyu Cheng, Dong Liu, Shenghai Xu, Jinning He","doi":"10.1109/RE51729.2021.00063","DOIUrl":null,"url":null,"abstract":"In the field of requirement document quality assessment, existing methods mainly focused on textual patterns of requirements. Actually, the cognitive process that experts read and qualitatively measure a requirement document is from outward appearance to inner essence. Inspired by this intuition, this paper proposed a Multimodal Requirement Document Quality Analyzer (MRDQA), a neural model which combines the textual content with the visual rendering of requirement documents for quality assessing. MRDQA can capture implicit quality indicators which do not exist in requirement text, such as tables, diagrams, and visual layout. We evaluated MRDQA on the requirement documents collected from ZTE and achieved 81.3% accuracy in classifying their quality into three levels (high, medium, and low). We have successfully applied MRDQA as a pre-filter in ZTE’s requirement review system. It identifies low and medium quality requirements, thereby allows review experts to focus only on high-quality requirements. With this mechanism, the workload can be greatly reduced and the requirement review process can be accelerated.","PeriodicalId":440285,"journal":{"name":"2021 IEEE 29th International Requirements Engineering Conference (RE)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 29th International Requirements Engineering Conference (RE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RE51729.2021.00063","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

In the field of requirement document quality assessment, existing methods have focused mainly on textual patterns of requirements. In practice, however, the cognitive process by which experts read and qualitatively assess a requirement document moves from outward appearance to inner essence. Inspired by this intuition, this paper proposes the Multimodal Requirement Document Quality Analyzer (MRDQA), a neural model that combines the textual content of requirement documents with their visual rendering for quality assessment. MRDQA can capture implicit quality indicators that are absent from the requirement text itself, such as tables, diagrams, and visual layout. We evaluated MRDQA on requirement documents collected from ZTE and achieved 81.3% accuracy in classifying their quality into three levels (high, medium, and low). We have successfully applied MRDQA as a pre-filter in ZTE's requirement review system: it flags low- and medium-quality requirements, thereby allowing review experts to focus only on high-quality requirements. With this mechanism, the review workload is greatly reduced and the requirement review process is accelerated.
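The abstract describes fusing a text representation of a requirement document with a representation of its rendered appearance to predict one of three quality levels. The paper's actual architecture is not disclosed here, so the sketch below is purely illustrative: the module names (TextEncoder, PageImageEncoder, MultimodalQualityClassifier), layer choices, and dimensions are all assumptions, not the authors' implementation.

```python
# Minimal sketch of a text + page-image quality classifier in the spirit of MRDQA.
# All architecture details below are assumptions for illustration only.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Encodes requirement text (token ids) into a fixed-size vector."""
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):                      # (batch, seq_len)
        embedded = self.embedding(token_ids)           # (batch, seq_len, embed_dim)
        _, hidden = self.rnn(embedded)                 # (2, batch, hidden_dim)
        return torch.cat([hidden[0], hidden[1]], -1)   # (batch, 2 * hidden_dim)

class PageImageEncoder(nn.Module):
    """Encodes a rendered document page (tables, diagrams, layout) into a vector."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, page_image):                     # (batch, 3, H, W)
        features = self.conv(page_image).flatten(1)    # (batch, 64)
        return self.proj(features)                     # (batch, out_dim)

class MultimodalQualityClassifier(nn.Module):
    """Fuses textual and visual features and predicts high/medium/low quality."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.text_encoder = TextEncoder()
        self.image_encoder = PageImageEncoder()
        self.classifier = nn.Sequential(
            nn.Linear(2 * 256 + 256, 256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, num_classes),
        )

    def forward(self, token_ids, page_image):
        fused = torch.cat([self.text_encoder(token_ids),
                           self.image_encoder(page_image)], dim=-1)
        return self.classifier(fused)                  # logits over (high, medium, low)

if __name__ == "__main__":
    model = MultimodalQualityClassifier()
    tokens = torch.randint(1, 30000, (2, 200))         # dummy requirement text
    pages = torch.rand(2, 3, 224, 224)                 # dummy rendered pages
    print(model(tokens, pages).shape)                  # torch.Size([2, 3])
```

In this kind of setup, the three-way logits would be trained with a standard cross-entropy loss against expert quality labels; the image branch is what lets the model see layout cues (tables, diagrams) that never appear in the extracted text.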