{"title":"Function point measurement tool for UML design specification","authors":"Takuya Uemura, S. Kusumoto, Katsuro Inoue","doi":"10.1109/METRIC.1999.809727","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809727","url":null,"abstract":"Function point analysis (FPA) was originally proposed to help measure the size of a computerized business information system. It is now widely used in actual software development. However, it has been reported that, since function point counting involves judgment on the part of the counter, some differences would be caused between copies of the same product, even within the same organization. In this paper, we propose detailed FPA measurement rules for design specifications based on UML (Unified Modeling Language) and we develop a function point measurement tool, whose input products are design specifications on Rational Rose. We have also applied the tool in an actual design specification and have examined the difference between the values obtained using the tool and those given by an FPA specialist. The results show the applicability of our tool.","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130155231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring domain engineering effects on software change cost","authors":"Harvey P. Siy, A. Mockus","doi":"10.1109/METRIC.1999.809751","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809751","url":null,"abstract":"Domain engineering (DE) is an increasingly popular process for efficiently producing software. DE uses detailed knowledge of a particular application domain to define rigorously a family of software products within that domain. We describe a methodology for precise quantitative measurement of DE impact on software change efforts. The methodology employs measures of small software changes to determine the effect of DE. We illustrate this approach in a detailed case study of DE in a telecommunications product. In the particular case the change effort was dramatically reduced. The methodology can precisely measure cost savings in change effort and is simple and inexpensive since it relies on information automatically collected by version control systems.","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123931852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing uncertain predictions of software quality","authors":"T. Khoshgoftaar, E. B. Allen, Xiaojin Yuan, W. Jones, J. Hudepohl","doi":"10.1109/METRIC.1999.809737","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809737","url":null,"abstract":"Many development organizations try to minimize faults in software as a means for improving customer satisfaction. Assuring high software quality often entails time-consuming and costly development processes. A software quality model based on software metrics can be used to guide enhancement efforts by predicting which modules are fault-prone. The paper presents a way to determine which predictions by a classification tree should be considered uncertain. We conducted a case study of a large legacy telecommunications system. One release was the basis for the training data set, and the subsequent release was the basis for the evaluation data set. We built a classification tree using the TREEDISC algorithm, which is based on chi-squared tests of contingency tables. The model predicted whether a module was likely to have faults discovered by customers, or not, based on software product, process, and execution metrics. We simulated practical use of the model by classifying the modules in the evaluation data set. The model achieved useful accuracy, in spite of the very small proportion of fault-prone modules in the system. We assessed whether the classes assigned to the leaves were appropriate by examining the details of the full tree, and found sizable subsets of modules with substantially uncertain classification. Discovering which modules have uncertain classifications allows sophisticated enhancement strategies to resolve uncertainties. Moreover, TREEDISC is especially well suited to identifying uncertain classifications.","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131825506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An empirical study into the use of measurement to support OO design evaluation","authors":"Colin Kirsopp, M. Shepperd, S. Webster","doi":"10.1109/METRIC.1999.809744","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809744","url":null,"abstract":"This paper describes a case study into using measurement to aid design evaluation. The study was carried out as an assignment by final year B.Sc. students from an I.T. course. Approximately half of these were part time students working full time in industry and the others were full time students who had already undertaken a one year industrial placement. After some preparatory tuition, the subjects were given a small system (15 classes) written in Java and an analysis tool to enable them to apply a wide variety of metrics to this system. They were then asked to use measurement results and design heuristics to help them critique the design of the system. The main results from this work were that: measurement was found to be a useful aid to design evaluation; following the study, most subjects said that they would now be more inclined to use measurement than they there previously; support is needed for measurement selection as well as measurement collection; there were significant differences in the way the task was carried out between the full time students and the industrial subjects.","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130506665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic metrics for object oriented designs","authors":"S. Yacoub, H. Ammar, Tom Robinson","doi":"10.1109/METRIC.1999.809725","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809725","url":null,"abstract":"As object-oriented (OO) analysis and design techniques become more widely used, the demand on assessing the quality of OO designs increases substantially. Recently, there has been much research effort devoted to developing and empirically validating metrics for OO design quality. Complexity, coupling, and cohesion have received a considerable interest in the field. Despite the rich body of research and practice in developing design quality metrics, there has been less emphasis on dynamic metrics for OO designs. The complex dynamic behavior of many real-time applications motivates a shift in interest from traditional static metrics to dynamic metrics. This paper addresses the problem of measuring the quality of OO designs using dynamic metrics. We present a metrics suite to measure the quality of designs at an early development phase. The suite consists of metrics for dynamic complexity and object coupling based on execution scenarios. The proposed measures are obtained from executable design models. We apply the dynamic metrics to assess the quality of a pacemaker application. Results from the case study are used to compare static metrics to the proposed dynamic metrics and hence identify the need for empirical studies to explore the dependency of design quality on each.","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117223130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Determinants of success in software measurement programs: initial results","authors":"Dennis R. Goldenson, A. Gopal, T. Mukhopadhyay","doi":"10.1109/METRIC.1999.809721","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809721","url":null,"abstract":"While a great deal is known about technical issues of data gathering and applied statistics, less is known about what it takes to implement a successful software measurement program. Indeed, a good deal of anecdotal evidence suggests that such efforts often fail. In this paper, we report the initial results from a large-scale survey of practitioners and users of software measurement programs. A preliminary multivariate analysis examines differences in the use of software measurement results in organizational decision making. Three variables (alignment with intended users, management commitment and use of analytic methods) account for two thirds of the observed variance.","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126101288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing change effort in software during development","authors":"W. Evanco","doi":"10.1109/METRIC.1999.809739","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809739","url":null,"abstract":"We develop ordinal response models to explain the effort associated with non-defect changes of software during development. The explanatory variables include the extent of the change, the change type, and the internal complexity of the software components undergoing the change. The models are calibrated on the basis of a single software system and are then validated on two additional systems.","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121418385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting project risk from architecture reviews","authors":"E. Weyuker","doi":"10.1109/METRIC.1999.809729","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809729","url":null,"abstract":"A metric is proposed to predict the risk of a project's failure. It is a simplification of a metric proposed by A. Avritzer et al. (Proc. 5th Internat. Symp. on Software Metrics, pp. 4-10, Nov. 1998). The metric is based on findings made during an architecture review of the project. An empirical study involving 36 large industrial telecommunications projects is included, describing our experience with using this metric.","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129911159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Benchmarking COTS projects using data envelopment analysis","authors":"I. Myrtveit, E. Stensrud","doi":"10.1109/METRIC.1999.809748","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809748","url":null,"abstract":"In Ernst & Young and Andersen Consulting, two of the \"big five\", there is a continuous search for better methods to measure and compare project performance of multi-dimensional COTS software projects. We propose using Data Envelopment Analysis (DEA) with a Variable Returns to Scale (VRS) model. First, we discuss and illustrate this method by analyzing Albrecht-Gaffney's two-dimensional dataset. Next, we review previous empirical studies using DEA showing that several studies have used DEA where simpler methods could have been used. Finally, we apply DEA to a multi-dimensional dataset of 30 industrial COTS software projects extracted from a benchmarking database in Andersen Consulting. Our main conclusion is that DEA is an applicable method, albeit not without shortcomings, for comparing the productivity of COTS software projects, and that it, therefore, merits further research. However, for two-dimensional datasets this method is unnecessary complex, and there exists other, simpler alternatives. Also, the results support our assumption of increasing as well as decreasing returns to scale for this dataset. Thus, the VRS model provides more reasonable and fair comparisons of project performance than a Constant Returns to Scale (CRS) model. Finally, this study suggests that DEA used together with methods for hypothesis testing may be a useful technique for assessing the effect of alleged process improvements.","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"128 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130009163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can metrics and models be applied across multiple releases or projects?","authors":"N. Schneidewind, W. Jones, T. Khoshgoftaar, P. Oman, G. Stark","doi":"10.1109/METRIC.1999.809753","DOIUrl":"https://doi.org/10.1109/METRIC.1999.809753","url":null,"abstract":"The issue of the applicability of metrics across multiple releases or projects is of great importance in measurement. If the same metrics can be applied across multiple releases or projects, significant cost savings can be realized in the management and implementation of the measurement program.For example, if one set of metrics can be identified on release n that are applicable on release n+l,..., N, the following cost savings would be achieved: Simplified management measurement program. of the Reduced metrics analysis time resulting from the fact that new sets of metrics do not have to be identified and validated for each release or project. Reduced training requirements because personnel do not have to be trained in several sets of metrics. Reduced data management cost because data about multiple sets of metrics do not have to be stored and retrieved.The panel will identify the types of applications and projects where this approach has been successful and where it has not, using product and process data from projects at Microsoft, Hewlett-Packard, Nortel, IE3M Global Services, and the Space Shuttle. The conditions that must exist for this approach to be successful, such as process maturity and stability, type of application, and types of product and process metrics, will also be covered.The panel will be stimulating and provocative because the panelists will take opposite positions on whether the approach is applicable to a variety of applications and projects or is restricted to special situations. Interaction with the audience will be a key element of the panel session. This will include the audience's experiences with the approach and their opinions about the question posed by the panel: Can Metrics and Models be Applied Across Multiple Releases or Projects?","PeriodicalId":372331,"journal":{"name":"Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132440297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}