{"title":"Testability analysis of a UML class diagram","authors":"B. Baudry, Yves Le Traon, G. Sunyé","doi":"10.1109/METRIC.2002.1011325","DOIUrl":"https://doi.org/10.1109/METRIC.2002.1011325","url":null,"abstract":"Design-for-testability is a very important issue in software engineering. It becomes crucial in the case of OO designs where control flows are generally not hierarchical, but are diffuse and distributed over the whole architecture. We concentrate on detecting, pinpointing and suppressing potential testability weaknesses of a UML class diagram. The attribute significant from design testability is called \"class interaction\": it appears when potentially concurrent client/supplier relationships between classes exist in the system. These interactions point out parts of the design that need to be improved, driving structural modifications or constraint specifications, to reduce the final testing effort.","PeriodicalId":165815,"journal":{"name":"Proceedings Eighth IEEE Symposium on Software Metrics","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130925382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A generic model and tool support for assessing and improving Web processes","authors":"D. Rodríguez, R. Harrison, M. Satpathy","doi":"10.1109/METRIC.2002.1011333","DOIUrl":"https://doi.org/10.1109/METRIC.2002.1011333","url":null,"abstract":"We discuss a generic quality framework, based on a generic model, for evaluating Web processes. The aim is to perform assessment and improvement of web processes by using techniques from empirical software engineering. A web development process can be broadly classified into two almost independent sub-processes: the authoring process (AUTH process) and the process of developing the infrastructure (INF process). The AUTH process concerns the creation and management of the contents of a set of nodes and the way they are linked to produce a web application, whereas the INF development process provides technological support and involves creation of databases, integration of the web application to legacy systems etc. In this paper, we instantiate our generic quality model to the AUTH process and present a measurement framework for this process. We also present a tool support to provide effective guidance to software personnel including developers, managers and quality assurance engineers.","PeriodicalId":165815,"journal":{"name":"Proceedings Eighth IEEE Symposium on Software Metrics","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126544321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An empirical validation of the relationship between the magnitude of relative error and project size","authors":"E. Stensrud, T. Foss, B. Kitchenham, I. Myrtveit","doi":"10.1109/METRIC.2002.1011320","DOIUrl":"https://doi.org/10.1109/METRIC.2002.1011320","url":null,"abstract":"Cost estimates are important deliverables of a software project. Consequently, a number of cost prediction models have been proposed and evaluated. The common evaluation criteria have been MMRE, MdMRE and PRED(k). MRE is the basic metric in these evaluation criteria. The implicit rationale of using a relative error measure like MRE, rather than an absolute one, is presumably to have a measure that is independent of project size. We investigate if this implicit claim holds true for several data sets: Albrecht, Kemerer, Finnish, DMR and Accenture-ERP. The results suggest that MRE is not independent of project size. Rather, MRE is larger for small projects than for large projects. A practical consequence is that a project manager predicting a small project may falsely believe in a too low MRE. Vice versa when predicting a large project. For researchers, it is important to know that MMRE is not an appropriate measure of the expected MRE of small and large projects. We recommend therefore that the data set be partitioned into two or more subsamples and that MMRE is reported per subsample. In the long term, we should consider using other evaluation criteria.","PeriodicalId":165815,"journal":{"name":"Proceedings Eighth IEEE Symposium on Software Metrics","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126564633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}