Authors: S. Biffl, T. Grechenig
Published in: Anais do VII Simpósio Brasileiro de Engenharia de Software (SBES 1993)
Publication date: 1993-10-27
DOI: 10.5753/sbes.1993.24412
A Process Model for Quality guided Programming: An Approach to Making Quantitative Evaluation of Software Systems Useful for Practitioners
Quantitative evaluation of software systems has not yet been accepted by practitioners. Early expectations, especially of code analysis, have so far not been met. Among the reasons for its rare use in practice we see a lack of empirical data, a dominant research focus on formal aspects, and unreasonable embedding in the development process. This paper deals with more technical reasons: the lack of flexibility and usability of code-measuring tools. We outline a process model for quality assurance during the coding phase that provides human reviews as well as quantitative evaluation. The model is based on the idea of permanently adapting measuring tools to the goals of a project, which results in a metric- and review-guided coding cycle. The system presented generates software-measuring tools, providing the flexibility needed for quick adaptations. The generator is equipped with a clear separation of language and metric descriptions, making both reusable when a new tool is generated. Experiments with several commercial programming languages and most classical code metrics support the claim of flexibility and usability. We postulate that quantitative evaluation can work in practice if metrics, project constraints, and management goals are matched within a local process of collecting empirical data.
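The abstract does not detail how the generator's separation of language and metric descriptions works; the following is only a minimal illustrative sketch of that design idea, with hypothetical keyword sets and a single classical metric (a McCabe-style cyclomatic-complexity estimate), not a reconstruction of the authors' system.

```python
import re

# Hypothetical "language descriptions": per-language sets of decision tokens.
# Keeping these separate from the metric lets either side be reused when a
# new measuring tool is put together.
LANGUAGES = {
    "c":      {"if", "for", "while", "case", "&&", "||"},
    "pascal": {"if", "for", "while", "repeat", "case", "and", "or"},
}

def tokenize(source):
    # Crude tokenizer, good enough for this sketch.
    return re.findall(r"&&|\|\||\w+", source)

# "Metric description": a metric is a function over a token stream,
# independent of which language produced the tokens.
def cyclomatic_complexity(tokens, decision_tokens):
    """McCabe-style estimate: 1 + number of decision points."""
    return 1 + sum(1 for t in tokens if t in decision_tokens)

def measure(source, language, metric):
    return metric(tokenize(source), LANGUAGES[language])

c_code = "if (a && b) { while (x) { f(); } }"
print(measure(c_code, "c", cyclomatic_complexity))  # 1 + if + && + while = 4
```

Because the metric function only sees tokens plus a language-specific token set, the same `cyclomatic_complexity` definition can be reused unchanged for Pascal source, mirroring the reuse claim in the abstract.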