Users — The Hidden Software Product Quality Experts?: A Study on How App Users Report Quality Aspects in Online Reviews
Eduard C. Groen, Sylwia Kopczynska, Marc P. Hauer, Tobias D. Krafft, Jörg Dörr
2017 IEEE 25th International Requirements Engineering Conference (RE), September 2017. DOI: 10.1109/RE.2017.73
Citations: 58
Abstract
[Context and motivation] Research on eliciting requirements from a large number of online reviews using automated means has focused on functional aspects. Assuring the quality of an app is vital for its success, which is why user feedback concerning quality issues should be considered as well. [Question/problem] But to what extent do online reviews of apps address quality characteristics? And how much potential is there to extract such knowledge through automation? [Principal ideas/results] By tagging online reviews, we found that users mainly write about "usability" and "reliability", but the majority of statements are on a subcharacteristic level, most notably regarding "operability", "adaptability", "fault tolerance", and "interoperability". A set of 16 language patterns regarding "usability" correctly identified 1,528 statements from a large dataset far more efficiently than our manual analysis of a small subset. [Contribution] We found that statements can especially be derived from online reviews about qualities that directly affect users, although with some ambiguity. Language patterns can identify statements about qualities with high precision, though recall is modest at this time. Nevertheless, our results show that online reviews are an untapped Big Data source for quality requirements.
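The abstract does not reproduce the language patterns themselves, but the general approach can be illustrated with a minimal sketch. The regexes and sample reviews below are invented for illustration and are not the paper's 16 patterns or its dataset; only the idea of matching recurring phrasings that signal "usability" statements in review text comes from the source.

```python
import re

# Hypothetical patterns flagging review sentences that likely address the
# "usability" quality characteristic. Invented for this sketch; NOT the
# 16 patterns evaluated in the paper.
USABILITY_PATTERNS = [
    re.compile(r"\b(hard|difficult|easy)\s+to\s+(use|navigate|find)\b", re.I),
    re.compile(r"\buser[- ]friendly\b", re.I),
    re.compile(r"\b(confusing|intuitive)\s+(interface|menu|layout)\b", re.I),
]

def find_usability_statements(reviews):
    """Return (review, matched pattern) pairs for reviews hitting any pattern."""
    hits = []
    for review in reviews:
        for pattern in USABILITY_PATTERNS:
            if pattern.search(review):
                hits.append((review, pattern.pattern))
                break  # one match is enough to flag a review
    return hits

reviews = [
    "The new menu is a really confusing interface choice.",
    "Love the dark mode, very user-friendly!",
    "Crashes every time I open a PDF.",  # reliability, not usability
]
for review, pattern in find_usability_statements(reviews):
    print(f"{pattern!r} -> {review}")
```

A rule-based approach like this plausibly explains the precision/recall trade-off the abstract reports: hand-crafted patterns rarely misfire (high precision) but cover only the phrasings their authors anticipated (modest recall).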