IEEE METRICS | Pub Date: 2005-09-19 | DOI: 10.1109/METRICS.2005.27
C. Ghezzi
{"title":"Flexible Processes for Evolvable Products","authors":"C. Ghezzi","doi":"10.1109/METRICS.2005.27","DOIUrl":"https://doi.org/10.1109/METRICS.2005.27","url":null,"abstract":"Summary form only given. Software has been evolving from monolithic, centralized, static structures to modular, distributed, and dynamic ones, both at the process and at the product level. The market demands flexible, adaptable, reliable, and evolvable lean software development, which can respond faster to customers' needs. Rather than being developed by a single organization, it is built by federating parts developed by different organizations. Likewise, software products are increasingly created and evolved by assembling individual software components and services that can be discovered and combined dynamically. In extreme cases, the traditional sharp distinction between a static phase, in which software is designed, composed, validated, and deployed and run-time execution, in which a carefully defined and immutable system is run completely disappears. Software may evolve dynamically while it is running, through a variety of mechanisms that include dynamic discovery, negotiation, and binding. The talk identifies the main drivers of this evolution, its milestones, and the challenges to quality requirements of the resulting processes and products. These can be the premise for a research agenda of the software engineering community","PeriodicalId":282231,"journal":{"name":"IEEE METRICS","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124155627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE METRICS | Pub Date: 2005-09-19 | DOI: 10.1109/METRICS.2005.33
M. Zelkowitz, V. Basili, S. Asgari, L. Hochstein, J. Hollingsworth, Taiga Nakamura
{"title":"Measuring Productivity on High Performance Computers","authors":"M. Zelkowitz, V. Basili, S. Asgari, L. Hochstein, J. Hollingsworth, Taiga Nakamura","doi":"10.1109/METRICS.2005.33","DOIUrl":"https://doi.org/10.1109/METRICS.2005.33","url":null,"abstract":"In the high performance computing domain, the speed of execution of a program has typically been the primary performance metric. But productivity is also of concern to high performance computing developers. In this paper we will discuss the problems of defining and measuring productivity for these machines and we develop a model of productivity that includes both a performance component and a component that measures the development time of the program. We ran several experiments using students in high performance courses at several universities, and we report on those results with respect to our model of productivity.","PeriodicalId":282231,"journal":{"name":"IEEE METRICS","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124855128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE METRICS | Pub Date: 2005-09-19 | DOI: 10.1109/METRICS.2005.3
L. Buglione, A. Abran
{"title":"A Model for Performance Management and Estimation","authors":"L. Buglione, A. Abran","doi":"10.1109/METRICS.2005.3","DOIUrl":"https://doi.org/10.1109/METRICS.2005.3","url":null,"abstract":"Traditional cost estimation models in software engineering are based on the concept of productivity defined as the ratio of output to input; for instance, detailed software estimation models, such as COCOMO, can take multiple factors into account, but their multipliers lead to a single perspective based on the productivity concept. A less explored relationship in software engineering is the one between productivity and performance. This paper presents some classic concepts on the multidimensionality of performance, and proposes some suggestions to implement multidimensional performance models in software engineering based on certain fundamental concepts from geometry, that is, the QEST/LIME family of models.","PeriodicalId":282231,"journal":{"name":"IEEE METRICS","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129160492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE METRICS | Pub Date: 2005-09-19 | DOI: 10.1109/METRICS.2005.22
M. Shepperd
{"title":"Evaluating Software Project Prediction Systems","authors":"M. Shepperd","doi":"10.1109/METRICS.2005.22","DOIUrl":"https://doi.org/10.1109/METRICS.2005.22","url":null,"abstract":"The problem of developing usable software project cost prediction systems is perennial and there are many competing approaches. Consequently, in recent years there have been exhortations to conduct empirically based evaluations in order that our understanding of project prediction might be based upon real world evidence. We now find ourselves in the interesting position of possessing this evidence in abundance. For example, a review of just three software engineering journals identified 50 separate studies and overall several hundred studies have been published. This naturally leads to the next step of needing to construct a body of knowledge, particularly when not all evidence is consistent. This process of forming a body of knowledge is generally referred to as metaanalysis. It is an essential activity if we are to have any hope of making sense of, and utilising, results from our empirical studies. However, it becomes apparent that when systematically combining results many difficulties are encountered","PeriodicalId":282231,"journal":{"name":"IEEE METRICS","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116032612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE METRICS | Pub Date: 2005-09-19 | DOI: 10.1109/METRICS.2005.30
M. Y. Liu, I. Traoré
{"title":"Measurement Framework for Software Privilege Protection Based on User Interaction Analysis","authors":"M. Y. Liu, I. Traoré","doi":"10.1109/METRICS.2005.30","DOIUrl":"https://doi.org/10.1109/METRICS.2005.30","url":null,"abstract":"Software security is a complex notion that has to be analyzed from several perspectives. One such perspective is the restriction and protection of software privileges. In other words, a secure software system should be able to prevent misuse of the privileges granted. Privileges are usually protected in software systems by integrating or implementing appropriate security modules or mechanisms. Knowing how system privileges are protected by security mechanisms helps software developers in reducing the security risks underlying software systems. In this paper, we propose a measurement framework to evaluate quantitatively the privilege protections of a software system at the design level. Our analysis is based on modelling and analyzing user interactions based on the so-called User System Interaction Effect (USIE) Model. Specifically we define some measurement abstractions and associated metrics for assessing software privilege protection. We evaluate our framework by conducting an empirical study based on a medical record keeping software system.","PeriodicalId":282231,"journal":{"name":"IEEE METRICS","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126601499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE METRICS | Pub Date: 2001-04-04 | DOI: 10.1109/METRICS.2001.10000
S. Pfleeger
{"title":"What Good Are Metrics? The Views of Industry and Academia","authors":"S. Pfleeger","doi":"10.1109/METRICS.2001.10000","DOIUrl":"https://doi.org/10.1109/METRICS.2001.10000","url":null,"abstract":"Other industry panellists to be determined. The panellists will discuss why we should use metrics. Are they good for business? Do they really tell us something about quality and productivity? Are the predictions ever right? Are there metrics that business needs that researchers are not producing? How can we make the reality match the vision?","PeriodicalId":282231,"journal":{"name":"IEEE METRICS","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128359866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE METRICS | Pub Date: 1997-11-05 | DOI: 10.1109/METRICS.1997.10005
F. Maurice, A. Benzekri, Y. Raynaud
{"title":"Introducing Rigorous Metrics Specification and Verification into a Measurement Program","authors":"F. Maurice, A. Benzekri, Y. Raynaud","doi":"10.1109/METRICS.1997.10005","DOIUrl":"https://doi.org/10.1109/METRICS.1997.10005","url":null,"abstract":"A methodology including the essential steps for any software measurement activity is described. Based on previous and original works, this methodology allows to specify, verify and validate metrics. Related to a defined goal, metrics are specified using a formal notation and then verified. Validating predictive metrics is a crucial task for any goal aiming at improving software processes or products. Potential issues and precautions that must be taken during the validation phase are presented. In order to illustrate the defined approach, an industrial application of the methodology is presented.","PeriodicalId":282231,"journal":{"name":"IEEE METRICS","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123342778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE METRICS | Pub Date: 1997-11-05 | DOI: 10.1109/METRIC.1997.637164
L. Briand, J. Daly, J. Wuest
{"title":"A Unified Framework for Cohesion Measurement","authors":"L. Briand, J. Daly, J. Wuest","doi":"10.1109/METRIC.1997.637164","DOIUrl":"https://doi.org/10.1109/METRIC.1997.637164","url":null,"abstract":"The increasing importance being placed on software measurement has led to an increased amount of research developing new software measures. Given the importance of object-oriented development techniques, one specific area where this has occurred is cohesion measurement in object-oriented systems. However, despite an interesting body of work, there is little understanding of the motivations and empirical hypotheses behind many of these new measures. It is often difficult to determine how such measures relate to one another and for which application they can be used. As a consequence, it is very difficult for practitioners and researchers to obtain a clear picture of the state-of-the-art in order to select or define cohesion measures for object-oriented systems.To help remedy this situation a unified framework, based on the issues discovered in a review of object-oriented cohesion measures, is presented. The unified framework contributes to an increased understanding of the state-of-the-art as it is a mechanism for (i) comparing measures and their potential use, (ii) integrating existing measures which examine the same concepts in different ways, and (iii) facilitating more rigorous decision making regarding the definition of new measures and the selection of existing measures for a specific goal of measurement.","PeriodicalId":282231,"journal":{"name":"IEEE METRICS","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114308958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE METRICS | Pub Date: 1996-03-25 | DOI: 10.1109/METRICS.1996.10001
A. Porter
{"title":"\"What Makes Inspections Work?\" Understanding How and Why Different Inspection Methods Impact Effectiveness and Cost","authors":"A. Porter","doi":"10.1109/METRICS.1996.10001","DOIUrl":"https://doi.org/10.1109/METRICS.1996.10001","url":null,"abstract":"For two decades, software inspections have proven effective for detecting defects in software. We have reviewed the different ways sofhyare inspections are done, created a taxonomy of inspection methods, and examined claims about the cost-effectiveness of different methods. We detect a disturbingpattem in the evaluation of inspection methods. Although there is near universal agreement on the effectiveness of software inspection, their economics are uncertain. Our examination of several empirical studies leads us to conclude that the benefits of inspections are often overstated and the costs (especially for large sojtware developments) are understated. Furthermore, some of the most injuential studies establishing these coSsts and benejits are 20 years old now, which leads us to question their relevance to today’s software development processes. Extensive work is needed to determine exactly how, why, and when sojtware inspections work, and whether some defect detection techniques might be more cost-effective than others. In this tutorial we ask some questi0n.s about measuring effectiveness of software inspections and determining how much they really cost when their effect en the rest of the development process is considered. *This work is supported in part by a National Science Foundation Faculty Early Career Development Award CCR-9501354. Mr. Siy was also partly supported by AT&T ‘S Summer Employment Program","PeriodicalId":282231,"journal":{"name":"IEEE METRICS","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114563445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}