{"title":"从高级软件架构描述生成分布式系统测试平台","authors":"J. Grundy, Yuhong Cai, Anna Liu","doi":"10.1109/ASE.2001.989805","DOIUrl":null,"url":null,"abstract":"Most distributed system specifications have performance benchmark requirements. However, determining the likely performance of complex distributed system architectures during development is very challenging. We describe a system where software architects sketch an outline of their proposed system architecture at a high level of abstraction, including indicating client requests, server services, and choosing particular kinds of middleware and database technologies. A fully working implementation of this system is then automatically generated, allowing multiple clients and servers to be run. Performance tests are then automatically run for this generated code and results are displayed back in the original high-level architectural diagrams. Architects may change performance parameters and architecture characteristics, comparing multiple test run results to determine the most suitable abstractions to refine to detailed designs for actual system implementation. We demonstrate the utility of this approach and the accuracy of our generated performance test-beds for validating architectural choices during early system development.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"43","resultStr":"{\"title\":\"Generation of distributed system test-beds from high-level software architecture descriptions\",\"authors\":\"J. Grundy, Yuhong Cai, Anna Liu\",\"doi\":\"10.1109/ASE.2001.989805\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most distributed system specifications have performance benchmark requirements. However, determining the likely performance of complex distributed system architectures during development is very challenging. We describe a system where software architects sketch an outline of their proposed system architecture at a high level of abstraction, including indicating client requests, server services, and choosing particular kinds of middleware and database technologies. A fully working implementation of this system is then automatically generated, allowing multiple clients and servers to be run. Performance tests are then automatically run for this generated code and results are displayed back in the original high-level architectural diagrams. Architects may change performance parameters and architecture characteristics, comparing multiple test run results to determine the most suitable abstractions to refine to detailed designs for actual system implementation. 
We demonstrate the utility of this approach and the accuracy of our generated performance test-beds for validating architectural choices during early system development.\",\"PeriodicalId\":433615,\"journal\":{\"name\":\"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2001-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"43\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASE.2001.989805\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASE.2001.989805","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Generation of distributed system test-beds from high-level software architecture descriptions
Most distributed system specifications include performance benchmark requirements, yet determining the likely performance of a complex distributed system architecture during development is very challenging. We describe a system in which software architects sketch their proposed system architecture at a high level of abstraction, indicating client requests and server services and choosing particular middleware and database technologies. A fully working implementation of this architecture is then automatically generated, allowing multiple clients and servers to be run. Performance tests are run automatically against this generated code, and the results are displayed back in the original high-level architecture diagrams. Architects can change performance parameters and architecture characteristics and compare the results of multiple test runs to determine which abstractions are most suitable to refine into detailed designs for the actual system implementation. We demonstrate the utility of this approach and the accuracy of our generated performance test-beds for validating architectural choices during early system development.
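To make the generate-then-measure workflow concrete, below is a minimal, self-contained sketch of the kind of client/service timing probe an auto-generated test-bed could contain. This is not code from the paper: the actual tool generates code for specific middleware and database technologies, whereas this sketch uses plain TCP sockets purely to stay self-contained, and all names (TestBedSketch, REQUESTS, port 9090) are illustrative assumptions.

import java.io.*;
import java.net.*;

/**
 * Minimal sketch (not the paper's generated code) of a performance
 * test-bed: a stub "server service" plus a client that times
 * synchronous request/response round trips. Plain TCP sockets stand
 * in for whatever middleware the generated code would actually use.
 */
public class TestBedSketch {
    private static final int PORT = 9090;       // illustrative port
    private static final int REQUESTS = 1000;   // simulated client workload

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(PORT);

        // Stub server service: echo each request line back to the client.
        Thread service = new Thread(() -> {
            try {
                while (true) {
                    try (Socket s = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(s.getInputStream()));
                         PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                        out.println(in.readLine());
                    }
                }
            } catch (IOException e) {
                // Server socket closed by main(): stop serving.
            }
        });
        service.setDaemon(true);
        service.start();

        // Client side: time REQUESTS synchronous round trips, each one
        // standing in for a generated middleware call to a server service.
        long start = System.currentTimeMillis();
        for (int i = 0; i < REQUESTS; i++) {
            try (Socket s = new Socket("localhost", PORT);
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println("request " + i);
                in.readLine(); // block until the service replies
            }
        }
        long elapsed = System.currentTimeMillis() - start;
        server.close();

        // Aggregate figures like this are what the tool maps back onto
        // the high-level architecture diagram for comparison across runs.
        System.out.printf("%d requests in %d ms (%.2f ms/request)%n",
                REQUESTS, elapsed, (double) elapsed / REQUESTS);
    }
}

Running the sketch prints an aggregate round-trip figure for the simulated workload. In the approach the abstract describes, measurements of this kind are gathered for the generated clients and services and displayed back on the original architecture diagrams, so that runs with different performance parameters, middleware, or database choices can be compared side by side.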