{"title":"在软件工程实验室中评估和比较软件度量","authors":"V. Basili, T. Phillips","doi":"10.1145/800003.807913","DOIUrl":null,"url":null,"abstract":"There has appeared in the literature a great number of metrics that attempt to measure the effort or complexity in developing and understanding software(1). There have also been several attempts to independently validate these measures on data from different organizations gathered by different people(2). These metrics have many purposes. They can be used to evaluate the software development process or the software product. They can be used to estimate the cost and quality of the product. They can also be used during development and evolution of the software to monitor the stability and quality of the product.\n Among the most popular metrics have been the software science metrics of Halstead, and the cyclomatic complexity metric of McCabe. One question is whether these metrics actually measure such things as effort and complexity. One measure of effort may be the time required to produce a product. One measure of complexity might be the number of errors made during the development of a product. A second question is how these metrics compare with standard size measures, such as the number of source lines or the number of executable statements, i.e., do they do a better job of predicting the effort or the number of errors? Lastly, how do these metrics relate to each other?","PeriodicalId":262059,"journal":{"name":"Measurement and evaluation of software quality","volume":"81 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1981-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"41","resultStr":"{\"title\":\"Evaluating and comparing software metrics in the software engineering laboratory\",\"authors\":\"V. Basili, T. Phillips\",\"doi\":\"10.1145/800003.807913\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There has appeared in the literature a great number of metrics that attempt to measure the effort or complexity in developing and understanding software(1). There have also been several attempts to independently validate these measures on data from different organizations gathered by different people(2). These metrics have many purposes. They can be used to evaluate the software development process or the software product. They can be used to estimate the cost and quality of the product. They can also be used during development and evolution of the software to monitor the stability and quality of the product.\\n Among the most popular metrics have been the software science metrics of Halstead, and the cyclomatic complexity metric of McCabe. One question is whether these metrics actually measure such things as effort and complexity. One measure of effort may be the time required to produce a product. One measure of complexity might be the number of errors made during the development of a product. A second question is how these metrics compare with standard size measures, such as the number of source lines or the number of executable statements, i.e., do they do a better job of predicting the effort or the number of errors? 
Lastly, how do these metrics relate to each other?\",\"PeriodicalId\":262059,\"journal\":{\"name\":\"Measurement and evaluation of software quality\",\"volume\":\"81 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1981-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"41\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Measurement and evaluation of software quality\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/800003.807913\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Measurement and evaluation of software quality","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/800003.807913","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Evaluating and comparing software metrics in the software engineering laboratory
A great number of metrics have appeared in the literature that attempt to measure the effort or complexity involved in developing and understanding software (1). There have also been several attempts to independently validate these measures on data from different organizations gathered by different people (2). These metrics serve many purposes. They can be used to evaluate the software development process or the software product. They can be used to estimate the cost and quality of the product. They can also be used during the development and evolution of the software to monitor the stability and quality of the product.
Among the most popular metrics have been the software science metrics of Halstead and the cyclomatic complexity metric of McCabe. One question is whether these metrics actually measure such things as effort and complexity. One measure of effort may be the time required to produce a product. One measure of complexity might be the number of errors made during the development of a product. A second question is how these metrics compare with standard size measures, such as the number of source lines or the number of executable statements; that is, do they do a better job of predicting the effort or the number of errors? Lastly, how do these metrics relate to each other?
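For readers unfamiliar with the two families of metrics named above, the following sketch (not taken from the paper) shows how Halstead's software science measures and McCabe's cyclomatic complexity are commonly computed from operator/operand counts and decision points, alongside a plain size count; the counts used here are invented for illustration.

```python
# Illustrative sketch (not from the paper): Halstead software science
# measures and McCabe cyclomatic complexity for a hand-tallied routine,
# shown next to a simple size measure for comparison.
import math

# Hypothetical counts for a toy routine; in practice these would come
# from a language-aware analyzer rather than manual tallies.
n1, n2 = 10, 7      # distinct operators, distinct operands
N1, N2 = 40, 28     # total operator and operand occurrences
decisions = 4       # binary decision points (if, while, for, ...)
source_lines = 25   # simple size measure for comparison

# Halstead software science
vocabulary = n1 + n2
length = N1 + N2
volume = length * math.log2(vocabulary)   # V = N * log2(n)
difficulty = (n1 / 2) * (N2 / n2)         # D = (n1/2) * (N2/n2)
effort = difficulty * volume              # E = D * V

# McCabe cyclomatic complexity for a single-entry, single-exit routine
cyclomatic = decisions + 1                # V(G) = decisions + 1

print(f"Halstead volume V = {volume:.1f}")
print(f"Halstead effort E = {effort:.1f}")
print(f"Cyclomatic V(G)   = {cyclomatic}")
print(f"Source lines      = {source_lines}")
```

Comparing numbers like these against observed development time and error counts, across many projects, is the kind of analysis the questions above call for.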