Title: Metrics for Assessing Gamers' Satisfaction: Exploring the Graphics Factor
Authors: Stylianos Gkikas, C. Volioti, Nikolaos Nikolaidis, Apostolos Ampatzoglou, A. Chatzigeorgiou, Ignatios S. Deligiannis
DOI: https://doi.org/10.1109/ASEW52652.2021.00027
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021
Abstract: Requirements engineering (elicitation and documentation) is considered one of the most crucial phases of the software development process. Many products fail to reach the market, or to capture a respectable share of it, due to problems arising during requirements engineering. In any game, the main requirement is expected to be entertainment: i.e., guaranteeing that the user has fun while playing. The user's experience while playing is highly correlated with non-functional requirements such as game speed, graphics, and scenario. In the majority of cases, however, such non-functional requirements are vague, since there are no success indicators (metrics) or target values that can (to some extent) guarantee user satisfaction. In this paper, we propose a process for enhancing game requirements engineering by specifying non-functional requirements along with metrics, based on user satisfaction factors. The employed user satisfaction factors are reused from previous work (i.e., a survey with regular gamers), whereas in this work we identify game characteristics that are relevant to a specific user satisfaction factor (namely, graphics) and we propose and validate metrics for their automated quantification from game artifacts.

Title: Extracting Software Change Requests from Mobile App Reviews
Authors: Muhammad Nadeem, Khurram Shahzad, N. Majeed
DOI: https://doi.org/10.1109/ASEW52652.2021.00047
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021
Abstract: The use of mobile apps is increasing rapidly. These apps attract thousands of reviews, which are widely acknowledged as a valuable resource for the community involved in mobile app development. In this study, we contend that these reviews can be used to generate a software change request document for improving mobile apps. A prerequisite for generating such a document is the identification of Software Change Requests (SCRs) from user reviews; however, manually processing this large number of reviews to identify SCRs is a resource-intensive task. Most existing studies have focused on identifying bugs, while only a few have addressed the identification and localization of change requests from mobile app reviews, which is substantially different from extracting SCRs. To that end, we scraped reviews of seven mobile apps and developed a dataset that can be used to train machine learning techniques for the automatic identification of SCRs. A key feature of the approach is that we documented the annotation guidelines used to distinguish between SCR and non-SCR sentences. These guidelines can be used to extend the developed dataset, as well as to develop new datasets. As a further contribution, we evaluated the effectiveness of five supervised learning techniques at identifying SCR sentences from user reviews. The study shows that Logistic Regression achieved a nearly perfect F1-score of 0.97 for extracting SCRs from textual reviews.

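The supervised setup this abstract describes (classifying review sentences as SCR vs. non-SCR) can be sketched with off-the-shelf tooling. The snippet below is a minimal, hypothetical illustration using scikit-learn, pairing TF-IDF features with a Logistic Regression classifier; the example sentences and labels are invented for illustration and are not from the paper's dataset.

```python
# Hypothetical sketch: TF-IDF + Logistic Regression for classifying
# review sentences as software change requests (SCR) vs. non-SCR.
# The sentences and labels below are invented toy data, not the
# paper's annotated corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "Please add a dark mode option",           # SCR
    "The app should let me export my data",    # SCR
    "Great app, works perfectly",              # non-SCR
    "I love the new design",                   # non-SCR
]
train_labels = [1, 1, 0, 0]  # 1 = SCR, 0 = non-SCR

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_sentences, train_labels)

pred = clf.predict(["Please add an export button"])
```

With such a tiny training set the individual prediction is not meaningful; the point is the shape of the pipeline — vectorize sentences, fit a classifier, predict on unseen reviews.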
Title: Splitting, Renaming, Removing: A Study of Common Cleaning Activities in Jupyter Notebooks
Authors: Helen Dong, Shurui Zhou, Jin L. C. Guo, Christian Kästner
DOI: https://doi.org/10.1109/ASEW52652.2021.00032
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021
Abstract: Data scientists commonly use computational notebooks because they provide a good environment for testing multiple models. However, once a scientist completes the code and finds the ideal model, they must dedicate time to cleaning up the code so that others can easily understand it. In this paper, we perform a qualitative study of how scientists clean their code, with the aim of suggesting a tool to automate this process. Our end goal is for tool builders to address possible gaps and provide additional aid to data scientists, who can then focus on their actual work rather than routine and tedious cleaning work. By sampling notebooks from GitHub and analyzing changes between subsequent commits, we identified common cleaning activities, such as changes to markdown (e.g., adding section headers or descriptions) or comments (both deleting dead code and adding descriptions), as well as reordering cells. We also find that common cleaning activities differ depending on the intended purpose of the notebook. Our results provide a valuable foundation for tool builders and notebook users, as many of the identified cleaning activities could benefit from codified best practices and dedicated tool support, possibly tailored to the intended use.

Title: Classification of UML Diagrams to Support Software Engineering Education
Authors: J. F. Tavares, Yandre M. G. Costa, T. Colanzi
DOI: https://doi.org/10.1109/ASEW52652.2021.00030
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021
Abstract: There is a pressing need for tools that support accessibility in Software Engineering (SE) education. The use of diagrams to teach software development is a very common practice, and many UML diagrams in didactic materials are represented as images that need an accessible version for visually impaired or blind students. Machine learning techniques, such as deep learning, can be used to automate this task. However, the practical application of deep learning to many classification problems in SE is hampered by the large volumes of labeled data required for training. Transfer learning techniques can help with this type of task by taking advantage of pre-trained models based on Convolutional Neural Networks (CNNs), so that better results can be achieved even with few images. In this work, we applied transfer learning and data augmentation to UML diagram classification on a dataset created specifically for this work, containing six types of UML diagrams. The dataset is also made available as a contribution of this work. We experimented with three widely known CNN architectures: VGG16, ResNet50, and InceptionV3. The results demonstrate that transfer learning contributes to achieving good results even with scarce data. However, there is still room for improvement in the classification of the UML diagrams addressed in this work.

Title: Android Malware Detection: Looking beyond Dalvik Bytecode
Authors: Tiezhu Sun, N. Daoudi, Kevin Allix, Tegawendé F. Bissyandé
DOI: https://doi.org/10.1109/ASEW52652.2021.00019
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021
Abstract: Machine learning has been widely employed in the malware detection literature because it scales to vetting large numbers of Android samples. Feature engineering has therefore been the key focus of research advances. Recently, a new research direction that builds on the momentum of deep learning for computer vision has produced promising results with image representations of Android bytecode. In this work, we postulate that other artifacts, such as binary (native) code and metadata/configuration files, can be examined to build more exhaustive representations of Android apps. We show that binary code and metadata files also provide relevant information for Android malware detection, i.e., they make it possible to detect malware that is missed by models built only on bytecode. Furthermore, we investigate the potential benefits of combining all these artifacts into a single representation with a strong signal for reasoning about maliciousness.

Title: A First Step Towards Detecting Human Values-violating Defects in Android APIs
Authors: Conghui Li, Humphrey O. Obie, Hourieh Khalajzadeh
DOI: https://doi.org/10.1109/ASEW52652.2021.00022
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021
Abstract: Human values are an important aspect of life and should be supported in ubiquitous technologies such as mobile applications (apps). Much attention has been paid to fixing certain kinds of human-values violations, especially privacy, accessibility, and security, while other values such as pleasure, tradition, and humility have received little focus. In this paper, we investigate the relationship between human values and Android API services and develop algorithms to detect potential violations of these values. We evaluated our algorithms against a manually curated ground-truth set, achieving high performance, and applied them to 10,000 apps. Our results show a correlation between value violations and the presence of viruses. They also show that apps with the lowest numbers of installations contain more value violations, and that the frequency of value violations is highest in social apps.

Title: Toward a Smell-aware Prediction Model for CI Build Failures
Authors: Islem Saidani, Ali Ouni
DOI: https://doi.org/10.1109/ASEW52652.2021.00017
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021
Abstract: In recent years, researchers have explored the potential factors behind Continuous Integration (CI) build failures, focusing mainly on metrics related to code changes, project statistics, etc. However, code quality indicators, such as the presence of bad smells, have rarely been discussed in the context of CI. In this paper, we investigate the extent to which CI build failure prediction can be improved by detecting bad smells. Specifically, we evaluate the contribution of 28 well-known bad smells when added to BF-DETECTOR, an existing tool for CI build failure prediction. We conduct a case study on a dataset of 15,041 Travis CI builds extracted from five GitHub projects. The results demonstrate that smell-aware prediction improves the F1-score of BF-DETECTOR by 4% on average. In particular, we found that Excessive Parameter List (EPL), Sensitive Equality (SE), and Lazy Test (LT) contribute the most to the prediction.

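The evaluation idea in this abstract — augmenting a build-failure predictor's feature set with smell detections and comparing F1-scores — can be sketched generically. The snippet below is a hypothetical scikit-learn illustration on synthetic data; BF-DETECTOR itself and the paper's Travis CI dataset are not used or reproduced here.

```python
# Hypothetical sketch of smell-aware build-failure prediction:
# augment a base feature set (stand-in change metrics) with bad-smell
# counts and compare held-out F1-scores. All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
base = rng.normal(size=(n, 5))         # stand-in change metrics
smells = rng.poisson(2, size=(n, 3))   # stand-in smell counts (e.g., EPL, SE, LT)
# Synthetic failure label that depends on both a base metric and a smell count.
y = ((base[:, 0] + 0.8 * smells[:, 0]) > 2).astype(int)

def f1_with(features):
    """Train on a split of the given features and return the held-out F1."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    return f1_score(y_te, model.predict(X_te))

f1_base = f1_with(base)                         # change metrics only
f1_smell = f1_with(np.hstack([base, smells]))   # change metrics + smells
```

On this constructed data the smell features carry real signal, so the smell-aware score is expected to be at least as good; with real build data the comparison is, of course, an empirical question.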
Title: Learning Sentiment Analysis for Accessibility User Reviews
Authors: Wajdi Aljedaani, F. Rustam, S. Ludi, Ali Ouni, Mohamed Wiem Mkaouer
DOI: https://doi.org/10.1109/ASEW52652.2021.00053
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021
Abstract: Nowadays, people express emotions and sentiments in different ways, such as facial expressions, gestures, speech, and text. With the exponentially growing popularity of mobile applications (apps), accessibility apps have gained importance in recent years, as they allow users with specific needs to use an app without many limitations. User reviews provide insightful information that helps app evolution. Prior work has analyzed accessibility in mobile applications using machine learning approaches; however, to the best of our knowledge, no work has used sentiment analysis approaches to better understand how users feel about accessibility in mobile apps. To address this gap, we propose a new approach on an accessibility reviews dataset, in which we use two sentiment analyzers, TextBlob and VADER, along with Term Frequency-Inverse Document Frequency (TF-IDF) and Bag-of-Words (BoW) features, to detect the sentiment polarity of accessibility app reviews. We also applied six classifiers, namely Logistic Regression, Support Vector Machine, Extra Trees, Gaussian Naive Bayes, Gradient Boosting, and AdaBoost, with both sentiment analyzers. Four statistical measures, namely accuracy, precision, recall, and F1-score, were used for evaluation. Our experimental evaluation shows that the TextBlob approach using BoW features achieves better results (accuracy of 0.86) than the VADER approach (accuracy of 0.82).

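As a rough illustration of the classification stage this abstract describes, the sketch below pairs Bag-of-Words features with one of the evaluated classifiers (Logistic Regression) in scikit-learn. TextBlob or VADER would normally supply the polarity labels; here a handful of invented, hand-labeled toy reviews stand in for that step, so none of the data or results come from the paper.

```python
# Hypothetical sketch: BoW features + Logistic Regression for polarity
# classification of accessibility reviews. The reviews and labels are
# invented toy data; TextBlob/VADER labeling is replaced by hand labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "The screen reader support is excellent",
    "Voice control works great for me",
    "Buttons are too small and hard to tap",
    "The contrast settings are broken and useless",
]
polarity = ["positive", "positive", "negative", "negative"]  # stand-in labels

bow = CountVectorizer()          # Bag-of-Words features, as in the paper
X = bow.fit_transform(reviews)

clf = LogisticRegression().fit(X, polarity)
pred = clf.predict(bow.transform(["The screen reader is great"]))
```

Swapping `CountVectorizer` for `TfidfVectorizer`, or `LogisticRegression` for any of the other five classifiers named in the abstract, reproduces the rest of the comparison grid at the level of pipeline shape.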
Title: Message from the NLP-SEA 2021 Chairs
DOI: https://doi.org/10.1109/asew52652.2021.00012
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021

Title: Sentiment Analysis of User Feedback on Business Processes
Authors: Amina Mustansir, Khurram Shahzad, M. K. Malik
DOI: https://doi.org/10.1109/ASEW52652.2021.00048
Venue: 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), November 2021
Abstract: Business Process Management (BPM) is an established discipline that uses business processes to organize the operations of an enterprise. Enterprises that embrace BPM continuously analyze their processes and improve them to achieve a competitive edge. Consequently, a plethora of studies have developed contrasting approaches to analyzing business processes, varying from examining the event logs of process-aware information systems to employing data warehousing technology to analyze the execution logs of business processes. In contrast to these classical approaches, this work combines two prominent domains, BPM and Natural Language Processing, to analyze business processes. In particular, this study performs sentiment analysis of end-user feedback on business processes to assess the satisfaction level of end-users. First, a structured approach is used to develop a corpus of over 7,000 user-feedback sentences. Second, these feedback sentences are annotated at three levels of classification: the first level determines the relevance of a sentence to the process; the second level classifies relevant sentences across four process performance dimensions (time, cost, quality, and flexibility); and the third level classifies sentences into positive, negative, or neutral sentiments. Finally, 78 experiments are performed to determine the effectiveness of six supervised learning techniques and one state-of-the-art deep learning technique for the automatic classification of user feedback sentences at the three levels. The results show that the deep learning technique is the most effective for the classification tasks.