{"title":"Operationalizing the Experience Factory for Effort Estimation in Agile Processes","authors":"D. Taibi, Valentina Lenarduzzi, Philipp Diebold, M. I. Lunesu","doi":"10.1145/3084226.3084240","DOIUrl":"https://doi.org/10.1145/3084226.3084240","url":null,"abstract":"[Background] The effort required to systematically collect historical data is not always allocable in agile processes and historical data management is usually delegated to the developers' experience, who need to remember previous project details. However, even if well trained, developers cannot precisely remember a huge number of details, resulting in wrong decisions being made during the development process. [Aims] The goal of this paper is to operationalize the Experience Factory in an agile way, i.e., defining a strategy for collecting historical project data using an agile approach. [Method] We provide a mechanism for understanding whether a measure must be collected or not, based on the Return on Invested Time (ROIT). In order to validate this approach, we instantiated the factory with an exploratory case study, comparing four projects that did not use our approach with one project that used it after 12 weeks out of 37 and two projects that used it from the beginning. [Results] The proposed approach helps developers to constantly improve their estimation accuracy with a very positive ROIT of the collected measure. [Conclusions] From this first experience, we can conclude that the Experience Factory can be applied effectively to agile processes, supporting developers in improving their performance and reducing potential decision mistakes.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117227440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to Reduce Software Development Cost with Personnel Assignment Optimization: Exemplary Improvement on the Hungarian Algorithm","authors":"Chong Wang, Zhong Luo, Luxin Lin, M. Daneva","doi":"10.1145/3084226.3084245","DOIUrl":"https://doi.org/10.1145/3084226.3084245","url":null,"abstract":"Reasonable reduction and controlling of software cost is always a challenge for software companies. To estimate software development cost more precisely, current research effort is focused on improving the measurement of software size or complexity by combining or adjusting key cost drivers, such as function points and other observable project context factors. However, personnel factors are seldom investigated or treated in depth as a way to reduce the estimated software development cost. On the premise that a software project is decomposed in a number of tasks, and that predetermined developers are available as resources for it, this paper intends to optimize the allocation of available personnel for lower development cost. In this research, we consider the problem of allocating competent developers to suitable tasks as an unbalanced personnel assignment problem, and improve the traditional Hungarian Algorithm by applying three strategies to find optimal personnel allocation solutions for diverse requirements. Moreover, the performance of our improved algorithms is evaluated and compared through a series of experiments on simulation datasets to identify and validate the measurement indicators and influence factors of their performance.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122447332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guidelines for utilizing change impact analysis when estimating effort in agile software development","authors":"Binish Tanveer","doi":"10.1145/3084226.3084284","DOIUrl":"https://doi.org/10.1145/3084226.3084284","url":null,"abstract":"In agile development context, estimating effort of constantly evolving requirements is challenging. Moreover, expert judgment which is the most used estimation method in this context is prone to bias, utilizes limited information and affects the accuracy of estimates. In order to overcome these shortcomings, consideration of information like complexity and impact of changes (requirements) on the underlying system have been proposed in our earlier study. We introduced an improvement framework that integrates change impact analysis with effort estimation methods. In this paper, we are proposing guidelines on how to integrate and operationalize the impact analysis while estimating effort through utilizing our framework in an agile development context. Operationalization of this framework will provide practitioners with additional information about the impact and support them in making informed decisions regarding effort estimation. It will increase transparency in the whole estimation process at the same time.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129595943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Metrics to Track Code Review Performance","authors":"Daniel Izquierdo-Cortazar, Nelson Sekitoleko, Jesus M. Gonzalez-Barahona, Lars Kurth","doi":"10.1145/3084226.3084247","DOIUrl":"https://doi.org/10.1145/3084226.3084247","url":null,"abstract":"During 2015, some members of the Xen Project Advisory Board became worried about the performance of their code review process. The Xen Project is a free, open source software project developing one of the most popular virtualization platforms in the industry. They use a pre-commit peer review process similar to that in the Linux kernel, based on email messages. They had observed a large increase over time in the number of messages related to code review, and were worried about how this could be a signal of problems with their code review process. To address these concerns, we designed and conducted, with their continuous feedback, a detailed analysis focused on finding these problems, if any. During the study, we dealt with the methodological problems of Linux-like code review, and with the deeper issue of finding metrics that could uncover the problems they were worried about. For having a benchmark, we run the same analysis on a similar project, which uses very similar code review practices: the Linux Netdev (Netdev) project. As a result, we learned how in fact the Xen Project had some problems, but at the moment of the analysis those were already under control. We found as well how different the Xen and Netdev projects were behaving with respect to code review performance, despite being so similar from many points of view. In this paper we show the results of both analyses, and propose a comprehensive methodology, fully automated, to study Linux-style code review. We discuss also the problems of getting significant metrics to track improvements or detect problems in this kind of code review.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121991560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ASM.br: A Template for Specifying Indicators","authors":"Sylvio Bonelli, Gleison Santos, M. Barcellos","doi":"10.1145/3084226.3084280","DOIUrl":"https://doi.org/10.1145/3084226.3084280","url":null,"abstract":"Measurement is a crucial practice for software organizations to monitor projects and improve processes. It defines indicators and provides information to support decision-making. Defining indicators is not a trivial task. Although the literature proposes several indicators, there is not a consensus about how to specify them. This paper presents ASM.br (Assistance for Software Measurement based on relationships), a template for specifying indicators by using a one-page form in which textual and graphical information is recorded and the relationships between indicators and between them and goals are put explicitly. By using ASM.br, indicators are recorded in a standardized way, easing understanding and reuse. ASM.br was applied in a software organization and the results suggests its feasibility and utility.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126670161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What industry wants from academia in software testing?: Hearing practitioners' opinions","authors":"V. Garousi, M. Felderer, M. Kuhrmann, Kadir Herkiloglu","doi":"10.1145/3084226.3084264","DOIUrl":"https://doi.org/10.1145/3084226.3084264","url":null,"abstract":"The level of industry-academia collaboration (IAC) in software engineering in general and in software testing in particular is quite low. Many researchers and practitioners are not collaborating with the \"other side\" to solve industrial problems. To shed light on the above issue and to characterize precisely what industry wants from academia in software testing, we solicited practitioners' opinions on their challenges in different testing activities and also the particularly relevant topics that they want the research community to work on. This short paper aims to draw the community's attention to the important issue of strengthening IAC with the hope of more IAC in software testing in the areas of most importance to the industry.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125155052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preliminary Study on Applying Semi-Supervised Learning to App Store Analysis","authors":"Roger Deocadez, R. Harrison, Daniel Rodríguez","doi":"10.1145/3084226.3084285","DOIUrl":"https://doi.org/10.1145/3084226.3084285","url":null,"abstract":"Semi-Supervised Learning (SSL) is a data mining technique which comes between supervised and unsupervised techniques, and is useful when a small number of instances in a dataset are labelled but a lot of unlabelled data is also available. This is the case with user reviews in application stores such as the Apple App Store or Google Play, where a vast amount of reviews are available but classifying them into categories such as bug related review or feature request is expensive or at least labor intensive. SSL techniques are well-suited to this problem as classifying reviews not only takes time and effort, but may also be unnecessary. In this work, we analyse SSL techniques to show their viability and their capabilities in a dataset of reviews collected from the App Store for both transductive (predicting existing instance labels during training) and inductive (predicting labels on unseen future data) performance.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121124890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Posters: An Alternative to Collect Practitioners' Experience","authors":"Philipp Diebold, M. Galster, A. Rainer, Sherlock A. Licorish","doi":"10.1145/3084226.3084272","DOIUrl":"https://doi.org/10.1145/3084226.3084272","url":null,"abstract":"Context: The validity of survey-based research depends on, amongst other considerations, the number and validity of obtained data points. As with any empirical study that involves practitioners, collecting data via surveys is difficult. Objectives: We report our experiences derived while conducting an industry survey on the impact of agile practices on software process quality. Method: After unsuccessfully trying to collect data with an online questionnaire, we used an interactive approach with posters at practitioners-focussed software engineering community events to aid data collection. Results: We present a list of lessons learnt. In particular, the poster-based data collection approach provided utility, for both gathering a large amount of responses and facilitating follow-up interactions with study participants. Conclusion: Our experiences in this work may help those facing challenges associated with obtaining responses from practitioners through the use of potentially complex questionnaires.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"522 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126184485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparative Study of Model-Driven Approaches For Scoping and Planning Experiments","authors":"Waldemar Ferreira, M. T. Baldassarre, S. Soares, Bruno Cartaxo, G. Visaggio","doi":"10.1145/3084226.3084258","DOIUrl":"https://doi.org/10.1145/3084226.3084258","url":null,"abstract":"Context: Through the years researchers have proposed several approaches to foster the growth and quality of experiments in Software Engineering. Among these approaches, there are some initiatives that rely on tool support for specifying controlled experiments. Goal: This paper reports results from a study, which aims to organize, analyze and outline the specifications of each initiative through a comparative analysis. Method: Specifications of each initiative have been compared through a comparative analysis, carried out according to eight criteria: (i) standard empirical concepts, (ii) goals and targets, (iii) involved variables, (iv) subject description, (v) design of experiment, (vi) tasks and activities, (vii) instruments and measurements, and (viii) the threats to research validity. Results: The results show that, among the tools currently existing and used in literature, the eSEE (Experimental Software Engineering Environment) is a complete model. However, it is also the most complex. In the other hand, the most flexible one is Experiment DSL. Conclusion: Based on our results, the currently existing solutions have strengths and weaknesses that should address efforts to make improvements in this area. In principal, our general suggestion is to place emphasis on methodological quality âĂŞ more than on method quantity.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129409806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A quasi-experiment to evaluate the impact of mental fatigue on study selection process","authors":"Ricardo Britto, M. Usman, N. Minhas","doi":"10.1145/3084226.3084274","DOIUrl":"https://doi.org/10.1145/3084226.3084274","url":null,"abstract":"Context: Existing empirical evidence indicates that loss of alertness associated with mental fatigue is highly correlated with fluctuations in the performance of people carrying out auditory tasks. In software engineering research, mental fatigue may affect the results of study selection (an auditory task) when conducting secondary studies such as systematic literature reviews or systematic mapping studies. However, to date there is no empirical study that reports an in-depth investigation about the relationship between mental fatigue and researchers' selection decisions during study selection process. Objective: The main objective of this paper is to report the design and preliminary results of an investigation about the impact of mental fatigue on the study selection process of secondary studies. Method: We designed and piloted a quasi-experiment. Results: The preliminary results do not indicate that mental fatigue negatively impacts the correctness of selection decision and confidence. However, it is important to note that the preliminary results are only based on six subjects. Conclusion: This paper brings awareness about the role of mental fatigue in the conduction of secondary studies. Although the preliminary results do not indicate any meaningful relationship, we believe that it is worthwhile to continue the research, by adding more subjects, and also revising the design of the reported quasi-experiment.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132448593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}