{"title":"项目结果的多标准模拟","authors":"D. Bodner, J. Wade","doi":"10.1109/SysCon.2013.6549885","DOIUrl":null,"url":null,"abstract":"Programs that develop and deploy complex systems typically have multiple criteria by which they are judged to be successful. Categories of such criteria include schedule, cost, technical system performance, quality and customer expectations. Criteria are operationalized via particular metrics, and often there are complex relationships between metrics, e.g., correlations or trade-offs. In an acquisition program, it is critical that systems engineers understand the implications of their actions and decisions with respect to these metrics, since the metrics are used to report the performance and eventual outcome of the program. However, such understanding usually takes many years of on-the-job experience. This paper describes an approach to simulation modeling of program behavior and performance whereby program outputs expressed in these metrics can be studied by systems engineers. An example program simulation model is presented that currently is used in an educational technology system for training systems engineers. The decisions and actions that can be taken by a systems engineer are described, and the impacts of various actions and decisions on program metrics and metric relationships are illustrated. The model is validated via subject matter experts with extensive experience in the program domain.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Multi-criteria simulation of program outcomes\",\"authors\":\"D. Bodner, J. Wade\",\"doi\":\"10.1109/SysCon.2013.6549885\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Programs that develop and deploy complex systems typically have multiple criteria by which they are judged to be successful. Categories of such criteria include schedule, cost, technical system performance, quality and customer expectations. Criteria are operationalized via particular metrics, and often there are complex relationships between metrics, e.g., correlations or trade-offs. In an acquisition program, it is critical that systems engineers understand the implications of their actions and decisions with respect to these metrics, since the metrics are used to report the performance and eventual outcome of the program. However, such understanding usually takes many years of on-the-job experience. This paper describes an approach to simulation modeling of program behavior and performance whereby program outputs expressed in these metrics can be studied by systems engineers. An example program simulation model is presented that currently is used in an educational technology system for training systems engineers. The decisions and actions that can be taken by a systems engineer are described, and the impacts of various actions and decisions on program metrics and metric relationships are illustrated. 
The model is validated via subject matter experts with extensive experience in the program domain.\",\"PeriodicalId\":218073,\"journal\":{\"name\":\"2013 IEEE International Systems Conference (SysCon)\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-04-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 IEEE International Systems Conference (SysCon)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SysCon.2013.6549885\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE International Systems Conference (SysCon)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SysCon.2013.6549885","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Programs that develop and deploy complex systems typically have multiple criteria by which they are judged to be successful. Categories of such criteria include schedule, cost, technical system performance, quality, and customer expectations. Criteria are operationalized via particular metrics, and there are often complex relationships between metrics, e.g., correlations or trade-offs. In an acquisition program, it is critical that systems engineers understand the implications of their actions and decisions with respect to these metrics, since the metrics are used to report the performance and eventual outcome of the program. However, such understanding usually takes many years of on-the-job experience. This paper describes an approach to simulation modeling of program behavior and performance whereby program outputs expressed in these metrics can be studied by systems engineers. An example program simulation model is presented that is currently used in an educational technology system for training systems engineers. The decisions and actions that can be taken by a systems engineer are described, and the impacts of various actions and decisions on program metrics and metric relationships are illustrated. The model is validated by subject matter experts with extensive experience in the program domain.
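The abstract does not spell out the model's internals, but the core idea of letting a systems engineer see how decisions propagate into several interrelated outcome metrics can be illustrated with a toy Monte Carlo sketch. The sketch below is an assumption-laden illustration, not the authors' validated model: `ProgramState`, `simulate_program`, the two decisions (`add_engineers`, `cut_testing`), and every rate constant are hypothetical choices made only to show schedule, cost, and quality trade-offs of the kind the paper describes.

```python
import random
from dataclasses import dataclass


@dataclass
class ProgramState:
    """Hypothetical outcome metrics; the paper's actual metric set and
    dynamics are not reproduced here."""
    months_elapsed: int = 0
    cost_spent: float = 0.0      # cumulative cost, $M
    performance: float = 0.0     # fraction of technical requirements met
    quality: float = 1.0         # defect-driven quality index, 1.0 = nominal


def simulate_program(add_engineers: bool, cut_testing: bool, seed: int = 0) -> ProgramState:
    """One Monte Carlo run of a toy development program.

    Two illustrative systems-engineering decisions, each with a trade-off:
      * add_engineers: faster technical progress, higher monthly burn rate.
      * cut_testing:   slightly faster progress, lower quality over time.
    """
    rng = random.Random(seed)
    state = ProgramState()
    monthly_burn = 2.0 + (0.8 if add_engineers else 0.0)       # $M per month
    progress_rate = 0.04 + (0.015 if add_engineers else 0.0)   # requirements met per month
    defect_rate = 0.012 if cut_testing else 0.004
    if cut_testing:
        progress_rate += 0.01

    # Advance month by month until requirements are met or a 120-month cap hits.
    while state.performance < 1.0 and state.months_elapsed < 120:
        state.months_elapsed += 1
        state.cost_spent += monthly_burn * rng.uniform(0.9, 1.2)
        state.performance = min(1.0, state.performance +
                                progress_rate * rng.uniform(0.7, 1.3))
        state.quality = max(0.0, state.quality - defect_rate * rng.random())
    return state


if __name__ == "__main__":
    # Compare average outcomes of three decision bundles over 200 replications.
    for add_eng, cut_test in [(False, False), (True, False), (False, True)]:
        runs = [simulate_program(add_eng, cut_test, seed=s) for s in range(200)]
        n = len(runs)
        sched = sum(r.months_elapsed for r in runs) / n
        cost = sum(r.cost_spent for r in runs) / n
        qual = sum(r.quality for r in runs) / n
        print(f"add_engineers={add_eng}, cut_testing={cut_test}: "
              f"schedule={sched:.1f} mo, cost=${cost:.1f}M, quality={qual:.2f}")
```

Running the script prints average schedule, cost, and quality for each decision bundle, making visible in miniature the kind of metric correlations and trade-offs the paper argues systems engineers must learn to anticipate.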