Users — The Hidden Software Product Quality Experts?: A Study on How App Users Report Quality Aspects in Online Reviews

Eduard C. Groen, Sylwia Kopczynska, Marc P. Hauer, Tobias D. Krafft, Jörg Dörr
DOI: 10.1109/RE.2017.73
Published in: 2017 IEEE 25th International Requirements Engineering Conference (RE), September 2017
Citations: 58

Abstract

[Context and motivation] Research on eliciting requirements from a large number of online reviews using automated means has focused on functional aspects. Assuring the quality of an app is vital for its success. This is why user feedback concerning quality issues should be considered as well. [Question/problem] But to what extent do online reviews of apps address quality characteristics? And how much potential is there to extract such knowledge through automation? [Principal ideas/results] By tagging online reviews, we found that users mainly write about "usability" and "reliability", but the majority of statements are on a subcharacteristic level, most notably regarding "operability", "adaptability", "fault tolerance", and "interoperability". A set of 16 language patterns regarding "usability" correctly identified 1,528 statements from a large dataset far more efficiently than our manual analysis of a small subset. [Contribution] We found that statements can especially be derived from online reviews about qualities by which users are directly affected, although with some ambiguity. Language patterns can identify statements about qualities with high precision, though the recall is modest at this time. Nevertheless, our results have shown that online reviews are an untapped Big Data source for quality requirements.