{"title":"Agile customer engagement: a longitudinal qualitative case study","authors":"G. Hanssen, Tor Erlend Fægri","doi":"10.1145/1159733.1159759","DOIUrl":"https://doi.org/10.1145/1159733.1159759","url":null,"abstract":"In this longitudinal case study we have followed a small software product company that has turned from a waterfall-like process to evolutionary project management (Evo). The most prominent feature of the new process is the close engagement of customers. We have interviewed both internal staff and customers to investigate the practicalities, costs, gains and prerequisites of such a transition. We have gathered data over a period of two years covering four consecutive release projects using the new process and analyzed the material in detail. Our findings indicate that close customer engagement does provide certain benefits, but that it comes at a cost and requires careful management attention.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114443875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A follow up study of the effect of personality on the performance of software engineering teams","authors":"J. Karn, A. J. Cowling","doi":"10.1145/1159733.1159769","DOIUrl":"https://doi.org/10.1145/1159733.1159769","url":null,"abstract":"This paper describes ethnographic observations and analysis of the performance of student teams working on year-long software projects (2004-2005 UK academic year) for industrial clients. Personality types were measured using an online version of the Myers Briggs Type Indicator (MBTI), as a basis for studying how individuals interacted within the teams, and the effects of disruptive issues on the quality of work produced by the team. The behavior of the observed teams is analyzed and the results compared with those from the previous year's (2003-2004) research, also carried out on student teams. A significant finding in 2003-2004 was that issues which teams did not discuss adequately caused more problems for the quality of work than issues which produced actual disruption within the team; the results from 2004-2005 differ in that actual disruptions proved most damaging to the teams involved.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130338964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A goal question metric based approach for efficient measurement framework definition","authors":"Patrik Berander, Per Jönsson","doi":"10.1145/1159733.1159781","DOIUrl":"https://doi.org/10.1145/1159733.1159781","url":null,"abstract":"In software engineering, measurements can be used to monitor, understand and improve software processes as well as products and resource utilization. Commonly, measurement frameworks are ambitious undertakings that require large data collection and analysis efforts. Unfortunately, such frameworks usually become too extensive, resulting in data graveyards full of data that is never analyzed or used. One well-known approach for defining measurements that focuses on the most important ones is the Goal Question Metric (GQM) approach. GQM elicits goals and questions as drivers for identifying the metrics that need to be collected, ensuring that every defined measurement has a purpose. GQM solves part of this problem, but the defined measurements may still become too numerous. In this paper, an extended GQM approach is presented that facilitates identification of, and focuses on, the most important measurements for an organization. The approach enables more efficient application of measurement frameworks. Besides the presentation of the approach, two case studies are presented in which the suitability of the approach is investigated in an industrial context. In the case studies, the suggested approach was applied to two different processes within software development: change management and requirements engineering. From these case studies, a number of experiences are highlighted and discussed as issues to consider when applying the approach.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133611372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Empirical estimates of software availability of deployed systems","authors":"A. Mockus","doi":"10.1145/1159733.1159767","DOIUrl":"https://doi.org/10.1145/1159733.1159767","url":null,"abstract":"We consider empirical evaluation of the availability of deployed software. Evaluation of real systems is more realistic and more accurate, and provides a higher level of confidence, than simulations, testing, or models. We process and model information gathered from a variety of operational and service support systems to obtain estimates of software reliability and availability. The three principal quantities are the total runtime, the number of outages, and the duration of outages. We consider methods to assess the quality of information in customer support systems, discuss advantages and disadvantages of various sources, consider methods to deal with missing data, and ways to construct bounds on measures that are not directly available. We propose a method to empirically assess software availability and reliability based on information from operational customer support and inventory systems, and use a case study of a large communications system to investigate factors affecting software reliability. We find large variations among platforms and releases and find the failure rate to vary over time.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130572338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Documenting design decision rationale to improve individual and team design decision making: an experimental evaluation","authors":"D. Falessi, G. Cantone, Martin Becker","doi":"10.1145/1159733.1159755","DOIUrl":"https://doi.org/10.1145/1159733.1159755","url":null,"abstract":"Individual and team decision-making have a crucial influence on the level of success of every software project. Although several studies on design decision rationale documentation approaches have already been conducted, few of them have focused on performance or evaluated it in the laboratory. This paper proposes a technique for documenting design decision rationale, and experimentally evaluates the impact this technique has on the effectiveness and efficiency of individual and team decision-making in the presence of requirement changes. The study was conducted as a controlled experiment. Fifty postgraduate Master's students served as experimental subjects. Documented design decisions regarding the Ambient Intelligence paradigm constituted the experimental objects. The main results of the experiment show that, for both individual and team-based decision-making, effectiveness significantly improves, while efficiency remains unaltered, when decision-makers are allowed to use, rather than not use, the proposed design rationale documentation technique.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"43 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132331236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying domain-specific defect classes using inspections and change history","authors":"Taiga Nakamura, L. Hochstein, V. Basili","doi":"10.1145/1159733.1159785","DOIUrl":"https://doi.org/10.1145/1159733.1159785","url":null,"abstract":"We present an iterative, reading-based methodology for analyzing defects in source code when change history is available. Our bottom-up approach can be applied to build knowledge of recurring defects in a specific domain, even if other sources of defect data such as defect reports and change requests are unavailable, incomplete or at the wrong level of abstraction for the purposes of the defect analysis. After defining the methodology, we present the results of an empirical study where our method was applied to analyze defects in parallel programs which use the MPI (Message Passing Interface) library to express parallelism. This library is often used in the domain of high performance computing, where there is much discussion but little empirical data about the frequency and severity of defect types. Preliminary results indicate the methodology is feasible and can provide insights into the nature of real defects. We present the results, derived hypotheses, and lessons learned.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"299 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124280032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using observational pilot studies to test and improve lab packages","authors":"Manoel G. Mendonça, D. Cruzes, Josemeire Dias, M. C. F. D. Oliveira","doi":"10.1145/1159733.1159743","DOIUrl":"https://doi.org/10.1145/1159733.1159743","url":null,"abstract":"Controlled experiments are a key approach to evaluating and evolving our understanding of software engineering technologies. However, defining and running a controlled experiment is a difficult and error-prone task. This paper argues that one can significantly reduce the risks associated with defining a new controlled experiment by running a set of well-planned observational pilot studies aimed at improving the experimental material. It gives the steps for such an approach and illustrates it with a case study that shows the definition of an experiment to evaluate inspection techniques for information visualization tools through a set of four observational studies and one experimental trial. Based on the lessons learned, we present some guidelines on how to test and improve experimental material in this way.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127701196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating advantages of test driven development: a controlled experiment with professionals","authors":"G. Canfora, A. Cimitile, Félix García, M. Piattini, C. A. Visaggio","doi":"10.1145/1159733.1159788","DOIUrl":"https://doi.org/10.1145/1159733.1159788","url":null,"abstract":"Test driven development (TDD) is gaining interest among practitioners and researchers: it promises to increase the quality of the code. Even though TDD is considered a development practice, it relies on the use of unit testing. For this reason, it could be an alternative to test-after-coding (TAC), the usual approach of writing and executing unit tests after the code has been written. We investigated the differences between the two practices from the standpoint of quality and productivity. In order to answer our research question, we carried out an experiment in a Spanish software house. The results suggest that TDD improves the unit testing but slows down the overall process.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115635326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PBR vs. checklist: a replication in the n-fold inspection context","authors":"Lulu He, Jeffrey C. Carver","doi":"10.1145/1159733.1159750","DOIUrl":"https://doi.org/10.1145/1159733.1159750","url":null,"abstract":"Inspection is considered a powerful method for checking software documents for defects. Much published work shows that inspections in the requirements specification phase are particularly effective and efficient. Perspective-Based Reading (PBR) is one of the systematic techniques to support defect detection in requirements documents. In this paper we describe an experiment to validate the effectiveness of PBR in a meeting-based N-fold inspection. Our goals were: (1) re-test the hypothesis of the original experiment that PBR helps to increase individual and team defect detection effectiveness compared to a checklist approach; (2) investigate the differing impact of PBR and checklists on the effectiveness of the N-fold team meeting; and (3) investigate some interesting characteristics of PBR (e.g. the relationship between the subjects' background experience and their performance). The results of the study showed that PBR was significantly more effective than the checklist (supporting the original study). We also found that the team meeting is much more important for checklist teams, based on the number of meeting gains and the number of false defects eliminated. Finally, we found that teams using the PBR techniques have less overlap in their defect detection than those using a checklist. The ultimate goal is to provide best practices (guidance) for applying PBR in software inspection, along with some advice for improving the PBR (or software inspection) process.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121403554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting component failures at design time","authors":"Adrian Schröter, Thomas Zimmermann, A. Zeller","doi":"10.1145/1159733.1159739","DOIUrl":"https://doi.org/10.1145/1159733.1159739","url":null,"abstract":"How do design decisions impact the quality of the resulting software? In an empirical study of 52 ECLIPSE plug-ins, we found that the software design, as well as past failure history, can be used to build models which accurately predict failure-prone components in new programs. Our prediction requires only usage relationships between components, which are typically defined in the design phase; thus, designers can easily explore and assess design alternatives in terms of predicted quality. In the ECLIPSE study, 90% of the 5% most failure-prone components, as predicted by our model from design data, turned out to actually produce failures later; a random guess would have predicted only 33%.","PeriodicalId":201305,"journal":{"name":"International Symposium on Empirical Software Engineering","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128406708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}