{"title":"Comparing Local and Global Software Effort Estimation Models -- Reflections on a Systematic Review","authors":"Stephen G. MacDonell, M. Shepperd","doi":"10.1109/ESEM.2007.45","DOIUrl":"https://doi.org/10.1109/ESEM.2007.45","url":null,"abstract":"The availability of multi-organisation data sets has made it possible for individual organisations to build and apply management models, even if they do not have data of their own. In the absence of any data this may be a sensible option, driven by necessity. However, if both cross-company (or global) and within-company (or local) data are available, which should be used in preference? Several research papers have addressed this question but without any apparent convergence of results. We conduct a systematic review of empirical studies comparing global and local effort prediction systems. We located 10 relevant studies: 3 supported global models, 2 were equivocal and 5 supported local models. The studies do not have converging results. A contributing factor is that they have utilised different local and global data sets and different experimental designs; thus there is substantial heterogeneity. We identify the need for common response variables and for common experimental and reporting protocols.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130688829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Context Distance Measurement to Analyze Results across Studies","authors":"D. Cruzes, V. Basili, F. Shull, M. Jino","doi":"10.1109/ESEM.2007.17","DOIUrl":"https://doi.org/10.1109/ESEM.2007.17","url":null,"abstract":"Providing robust decision support for software engineering (SE) requires the collection of data across multiple contexts so that one can begin to elicit the context variables that can influence the results of applying a technology. However, the task of comparing contexts is complex due to the large number of variables involved. This work extends a previous one, in which we proposed a practical and rigorous process for identifying evidence and context information from SE papers. The current work proposes a specific template to collect context information from SE papers and an interactive approach to compare context information about these studies. It uses visualization and clustering algorithms to help explore the similarities and differences among empirical studies. This paper presents this approach and a feasibility study in which the approach is applied to cluster a set of papers that were independently grouped by experts.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132551922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating the Quality of Widely Used Software Products Using Software Reliability Growth Modeling: Case Study of an IBM Federated Database Project","authors":"P. Li, R. Nakagawa, R. Montroy","doi":"10.1109/ESEM.2007.67","DOIUrl":"https://doi.org/10.1109/ESEM.2007.67","url":null,"abstract":"Software producers can better manage the quality of their deployed software products using estimates of quality. Current best practices for making estimates are to use software reliability growth modeling (SRGM), which assumes that testing environments approximate deployment environments. This important assumption does not hold for widely used software products, which are operated in a wide variety of configurations under many different usage scenarios. However, the literature contains little empirical data on the impact of this violation of assumptions on the accuracy and the usefulness of predictions. In this paper, we report results and experiences using SRGM on an IBM federated database project. We examine defect data from 3 releases spanning approximately 9 years. We find SRGM to be of limited use to the project: absolute relative errors are at least 34%, and predictions are, at times, implausible. We discuss alternative approaches for estimating quality of widely used software products.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126158730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Filtering, Robust Filtering, Polishing: Techniques for Addressing Quality in Software Data","authors":"G. Liebchen, Bhekisipho Twala, M. Shepperd, M. Cartwright, Mark Stephens","doi":"10.1109/ESEM.2007.70","DOIUrl":"https://doi.org/10.1109/ESEM.2007.70","url":null,"abstract":"Data quality is an important aspect of empirical analysis. This paper compares three noise handling methods to assess the benefit of identifying and either filtering or editing problematic instances. We compare a 'do nothing' strategy with (i) filtering, (ii) robust filtering and (iii) filtering followed by polishing. A problem is that it is not possible to determine whether an instance contains noise unless it has implausible values. Since we cannot determine the true overall noise level, we use implausible values as a proxy measure. In addition to the ability to identify implausible values, we use another proxy measure, the ability to fit a classification tree to the data. The interpretation is that low misclassification rates imply low noise levels. We found that all three of our data quality techniques improve upon the 'do nothing' strategy, and that filtering followed by polishing was the most effective technique for dealing with noise, since it eliminated the fewest data and had the lowest misclassification rates. Unfortunately, the polishing process introduces new implausible values. We believe consideration of data quality is an important aspect of empirical software engineering. We have shown that for one large and complex real-world data set, automated techniques can help isolate noisy instances and potentially polish the values to produce better quality data for the analyst. However, this work is at a preliminary stage and it assumes that the proxy measures of quality are appropriate.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115232620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Agile Software Assurance: An Empirical Study","authors":"Noura Abbas","doi":"10.1109/ESEM.2007.71","DOIUrl":"https://doi.org/10.1109/ESEM.2007.71","url":null,"abstract":"This poster will describe an empirical study of agile software assurance. The main goal of this research is to study the quality of agile projects in order to help software development organizations to have deeper understanding of agile methods, principles and practices. Moreover, this research will help evaluating the quality of agile projects.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126163013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Software Project Control Centers in Industrial Environments","authors":"M. Ciolkowski, J. Heidrich, Jürgen Münch, F. Simon, Mathias Radicke","doi":"10.1109/ESEM.2007.51","DOIUrl":"https://doi.org/10.1109/ESEM.2007.51","url":null,"abstract":"Many software development organizations still lack support for detecting and reacting to critical project states in order to achieve planned goals. One means to institutionalize project control, systematic quality assurance, and management support on the basis of measurement and explicit models is the establishment of so-called software project control centers. However, there is little experience reported in the literature with respect to setting up and applying such control centers in industrial environments. One possible reason is the lack of appropriate evaluation instruments (such as validated questionnaires and appropriate analysis procedures). Therefore, we developed an initial measurement instrument to systematically collect experience with respect to the deployment and use of control centers. Our main research goal was to develop and evaluate the measurement instrument. The instrument is based on the technology acceptance model (TAM) and customized to project controlling. This article illustrates the application and evaluation of this measurement instrument in the context of industrial case studies and provides lessons learned for further improvement. In addition, related work and conclusions for future work are given.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126739613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing the Quality Impact of Design Inspections","authors":"C. Ackermann, F. Shull, R. Carbon, C. Denger, M. Lindvall","doi":"10.1109/ESEM.2007.63","DOIUrl":"https://doi.org/10.1109/ESEM.2007.63","url":null,"abstract":"Inspections are widely used and studies have found them to be effective in uncovering defects. However, there is less data available regarding the impact of inspections on different defect types and almost no data quantifying the link between inspections and desired end product qualities. This paper addresses this issue by investigating whether design inspection checklists can be tailored so as to effectively target certain defect types without impairing the overall defect detection rate. The results show that the design inspection approach used here does uncover useful design quality issues and that the checklists can be effectively tailored for some types of defects.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"13 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114024777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Usability Evaluation Based on Web Design Perspectives","authors":"T. Conte, Jobson Luiz Massolar da Silva, E. Mendes, G. Travassos","doi":"10.1109/ESEM.2007.30","DOIUrl":"https://doi.org/10.1109/ESEM.2007.30","url":null,"abstract":"Given the growth in the number and size of Web Applications worldwide, Web quality assurance, and more specifically Web usability have become key success factors. Therefore, this work proposes a usability evaluation technique based on the combination of Web design perspectives adapted from existing literature, and heuristics. This new technique is assessed using a controlled experiment aimed at measuring the efficiency and effectiveness of our technique, in comparison to Nielsen's heuristic evaluation. Results indicated that our technique was significantly more effective than and as efficient as Nielsen's heuristic evaluation.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121356603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Approach to Outlier Detection of Software Measurement Data using the K-means Clustering Method","authors":"Kyung-A Yoon, Oh-Sung Kwon, Doo-Hwan Bae","doi":"10.1109/ESEM.2007.49","DOIUrl":"https://doi.org/10.1109/ESEM.2007.49","url":null,"abstract":"The quality of software measurement data affects the accuracy of a project manager's decision making using estimation or prediction models, and the understanding of real project status. During software measurement, outliers that reduce data quality are collected; however, their detection is not easy. To cope with this problem, we propose an approach to outlier detection in software measurement data using the k-means clustering method.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"211 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123389444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Software Designs Decay: A Pilot Study of Pattern Evolution","authors":"C. Izurieta, J. Bieman","doi":"10.1109/ESEM.2007.55","DOIUrl":"https://doi.org/10.1109/ESEM.2007.55","url":null,"abstract":"A common belief is that software designs decay as systems evolve. This research examines the extent to which software designs actually decay by studying the aging of design patterns in successful object oriented systems. Aging of design patterns is measured using various types of decay indices developed for this research. Decay indices track the internal structural changes of a design pattern realization and the code that surrounds the realization. Hypotheses for each kind of decay are tested. We found that the original design pattern functionality remains, and pattern decay is due to the \"grime\", non-pattern code, that grows around the pattern realization.","PeriodicalId":124420,"journal":{"name":"First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127637374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}