{"title":"Design test process in component-based software engineering: an analysis of requirements scalability","authors":"Mariem Haoues, Asma Sellami, H. Ben-Abdallah","doi":"10.1145/2593868.2593877","DOIUrl":"https://doi.org/10.1145/2593868.2593877","url":null,"abstract":"Producing an acceptable product that will satisfy Functional User Requirements and customer expectations within the quality, time and budget constraints remains a challenge in software development projects. Component-based software development has been widely adopted in software engineering especially when requirements change. However, some challenges remain in this engineering approach when dealing with the scalability of user requirements. Among these challenges, this paper tackles those pertinent to the testing phase. It proposes a design test process in component-based software engineering using functional size measurement and traceability matrix. The proposed test process aims to decreasing the effort required in the testing phase while maintaining the testing efficiency in terms of error detection. It is illustrated through the \"Emergency Monitoring System\" case study.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126160979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A replicated study on correlating agile team velocity measured in function and story points","authors":"Hennie Huijgens, R. Solingen","doi":"10.1145/2593868.2593874","DOIUrl":"https://doi.org/10.1145/2593868.2593874","url":null,"abstract":"Since the rapid growth of agile development methods for software engineering, more and more organizations measure the size of iterations, releases, and projects in both function points and story points. In 2011 Santana et al. performed a case study on the relation between function points and story points, from data collected in a Brazilian Government Agency. In this paper we replicate this study, using data collected in a Dutch banking organization. Based on a statistical correlation test we find that a comparison between function points and story points as measured in our repository indicates a moderate negative linear relation, where Santana et al. concluded a strong positive linear relation between both size metrics in their case study. Based on the outcome of our study we conclude that it appears too early to make generic claims on the relation between function points and story points; in fact FSM-theory seems to underpin that such a relationship is a spurious one.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114124154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A revised web objects method to estimate web application development effort","authors":"R. Folgieri, G. Barabino, G. Concas, Erika Corona, R. D. Lorenzi, M. Marchesi, Andrea Segni","doi":"10.1145/1985374.1985388","DOIUrl":"https://doi.org/10.1145/1985374.1985388","url":null,"abstract":"We present a study of the effectiveness of estimating web application development effort using Function Points and Web Objects methods, and a method we propose - the Revised Web Objects (RWO). RWO is an upgrading of WO method, aimed to account for new web development styles and technologies. It also introduces an up-front classification of web applications according to their size, scope and technology, to further refine their effort estimation. These methods were applied to a data-set of 24 projects obtained by Datasiel spa, a mid-sized Italian company, focused on web application projects, showing that RWO performs statistically better than WO, and roughly in the same way as FP.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129553659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is my project's truck factor low?: theoretical and empirical considerations about the truck factor threshold","authors":"Marco Torchiano, F. Ricca, A. Marchetto","doi":"10.1145/1985374.1985379","DOIUrl":"https://doi.org/10.1145/1985374.1985379","url":null,"abstract":"The Truck Factor is a simple way, proposed by the agile community, to measure the system's knowledge distribution in a team of developers. It can be used to highlight potential project problems due to the inadequate distribution of the system knowledge. Notwithstanding its relevance, only few studies investigated the Truck Factor and proposed ways to efficiently measure, evaluate and use it. In particular, the effective use of the Truck Factor is limited by the lack of reliable thresholds.\u0000 In this preliminary paper, we present a theoretical model concerning the Truck Factor and, in particular, we investigate its use to define the maximum achievable Truck Factor value in a project. The relevance of such a value concerns the definition of a reliable threshold for the Truck Factor. Furthermore in the paper, we document an experiment in which we apply the proposed model to real software projects with the aim of comparing the maximum achievable value of the Truck Factor with the unique threshold proposed in literature. The preliminary outcome we achieved shows that the existing threshold has some limitations and problems.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115477400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Program slicing-based cohesion measurement: the challenges of replicating studies using metrics","authors":"David Bowes, T. Hall, Andrew Kerr","doi":"10.1145/1985374.1985392","DOIUrl":"https://doi.org/10.1145/1985374.1985392","url":null,"abstract":"It is important to develop corpuses of data to test out the efficacy of using metrics. Replicated studies are an important contribution to corpuses of metrics data. There are few replicated studies using metrics reported in software engineering.\u0000 To contribute more data to the body of evidence on the use of novel program slicing-based cohesion metrics.\u0000 We replicate a very well regarded study by Meyers and Binkley [15, 16] which analyses the cohesion of open source projects using program slicing-based metrics.\u0000 Our results are very different from Meyers and Binkley's original results. This suggests that there are a variety of opportunities for inconsistently to creep into the collection and analysis of metrics data during replicated studies.\u0000 We conclude that researchers using metrics data must present their work with sufficient detail for replication to be possible. Without this detail it is difficult for subsequent researchers to accurately replicate a study such that consistent and reliable data can be added to a body of evidence.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114772172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Different strokes for different folks: a case study on software metrics for different defect categories","authors":"Ayse Tosun Misirli, Bora Caglayan, A. Miranskyy, A. Bener, Nuzio Ruffolo","doi":"10.1145/1985374.1985386","DOIUrl":"https://doi.org/10.1145/1985374.1985386","url":null,"abstract":"Defect prediction has been evolved with variety of metric sets, and defect types. Researchers found code, churn, and network metrics as significant indicators of defects. However, all metric sets may not be informative for all defect categories such that only one metric type may represent majority of a defect category. Our previous study showed that defect category sensitive prediction models are more successful than general models, since each category has different characteristics in terms of metrics. We extend our previous work, and propose specialized prediction models using churn, code, and network metrics with respect to three defect categories. Results show that churn metrics are the best for predicting all defects. The strength of correlation for code and network metrics varies with defect category: Network metrics have higher correlations than code metrics for defects reported during functional testing and in the field, and vice versa for defects reported during system testing.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129128266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human judgement and software metrics: vision for the future","authors":"C. Mair, M. Shepperd","doi":"10.1145/1985374.1985393","DOIUrl":"https://doi.org/10.1145/1985374.1985393","url":null,"abstract":"Background: There has been much research into building formal (metrics-based) prediction systems with the aim of improving resource estimation and planning of software projects. However the 'objectivity' of such systems is illusory in the sense that many inputs need themselves to be estimated by the software engineer.\u0000 Method: We review the uptake of past software project prediction research and identify relevant cognitive psychology research on expert behaviour. In particular we explore potential applications of recent metacognition research.\u0000 Results: We find the human aspect is largely ignored, despite the availability of many important results from cognitive psychology.\u0000 Conclusions: In order to increase the actual use of our metrics research e.g. effort prediction systems we need to have a more integrated view of how such research might be used and who might be using it. This leads to our belief that future research must be more holistic and inter-disciplinary.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"289 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121327697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stability of Java interfaces: a preliminary investigation","authors":"J. Chow, E. Tempero","doi":"10.1145/1985374.1985385","DOIUrl":"https://doi.org/10.1145/1985374.1985385","url":null,"abstract":"The attribute of stability is regarded by some as an important attribute of software. Some claims regarding software design quality imply that what are called interfaces in Java are stable. This paper introduces some new metrics for investigating such claims, and presents some preliminary measurements from these metrics, which indicate that developers do not consistently develop stable interfaces.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"253 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133557319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Concern-based cohesion as change proneness indicator: an initial empirical study","authors":"B. Silva, C. Sant'Anna, C. Chavez","doi":"10.1145/1985374.1985387","DOIUrl":"https://doi.org/10.1145/1985374.1985387","url":null,"abstract":"Structure-based cohesion metrics, such as the well-known Chidamber and Kemerer's Lack of Cohesion in Methods (LCOM), fail to capture the semantic notion of a software component's cohesion. Some researchers claim that it is one of the reasons they are not good indicators of change proneness. The Lack of Concern-based Cohesion metric (LCC) is an alternative cohesion metric which is centered on counting the number of concerns a component implements. A concern is any important concept, feature, property or area of interest of a system that we want to treat in a modular way. In this way, LCC focus on what really matters for assessing a component's cohesion - the amount of responsibilities placed on them. Our aim in this paper is to present an initial investigation about the applicability of this concern-based cohesion metric as a change proneness indicator. We also checked if this metric has a correlation with efferent coupling. An initial empirical assessment work was done with two small to medium-sized systems. Our results indicated a moderate to strong correlation between LCC and change proneness, and also a strong correlation between LCC and efferent coupling.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132150200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Which code construct metrics are symptoms of post release failures?","authors":"M. Nagappan, Brendan Murphy, M. Vouk","doi":"10.1145/1985374.1985389","DOIUrl":"https://doi.org/10.1145/1985374.1985389","url":null,"abstract":"Software metrics, such as code complexity metrics and code churn metrics, are used to predict failures. In this paper we study a specific set of metrics called code construct metrics and relate them to post release failures. We use the values of the code construct metrics for each file to characterize that file. We analyze the code construct metrics along with the post release failure data on the files (that splits the files into two classes: files with post release failures and files without post release failures). In our analysis we compare a file with post release failure to a set of files without post release failures, that have similar characteristics. In our comparison we identify which code construct metric, more often than the others, differs the most between these two classes of files. The goal of our research is to find out which code construct metrics can perhaps be used as symptoms of post release failures. In this paper we analyzed the code construct metrics of Eclipse 2.0, 2.1, and 3.0. Our results indicate that MethodInvocation, QualifiedName, and SimpleName, are the code constructs that differentiates the two classes of files the most and hence are the key symptoms/indicators of a file with post release failures in these versions of Eclipse.","PeriodicalId":103819,"journal":{"name":"Workshop on Emerging Trends in Software Metrics","volume":"333 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133630895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}