{"title":"三输入三模型机器学习系统的可靠性模型与分析","authors":"Qiang Wen, F. Machida","doi":"10.1109/DSC54232.2022.9888825","DOIUrl":null,"url":null,"abstract":"Machine learning (ML) models have been widely applied to real-world systems. However, outputs of ML models are generally uncertain and sensitive to real input data, which is a big challenge in designing highly reliable ML-based software systems. Our study aims to improve the ML system reliability through a software architecture approach inspired by N-version programming. N-version ML architectures considered in our study combine multiple input data sets with multiple versions of ML models to determine the final system output by consensus. In this paper, we focus on three-version ML architectures and propose the reliability models for analyzing the system reliability by using diversity metrics for ML models and input data sets. The proposed model allows us to compare the reliability of a triple-model with triple-input (TMTI) architecture with other variants of three-version and two-version architectures. Through the numerical analysis of the proposed models, we find that i) the reliability of TMTI architecture is higher than other three-version architectures, but interestingly ii) it is generally lower than the reliability of double model with double input system (DMDI). Furthermore, we also find that a larger variance of model diversities negatively impacts the TMTI reliability, while a larger variance of input diversity has opposed impacts.","PeriodicalId":368903,"journal":{"name":"2022 IEEE Conference on Dependable and Secure Computing (DSC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Reliability Models and Analysis for Triple-model with Triple-input Machine Learning Systems\",\"authors\":\"Qiang Wen, F. 
Machida\",\"doi\":\"10.1109/DSC54232.2022.9888825\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning (ML) models have been widely applied to real-world systems. However, outputs of ML models are generally uncertain and sensitive to real input data, which is a big challenge in designing highly reliable ML-based software systems. Our study aims to improve the ML system reliability through a software architecture approach inspired by N-version programming. N-version ML architectures considered in our study combine multiple input data sets with multiple versions of ML models to determine the final system output by consensus. In this paper, we focus on three-version ML architectures and propose the reliability models for analyzing the system reliability by using diversity metrics for ML models and input data sets. The proposed model allows us to compare the reliability of a triple-model with triple-input (TMTI) architecture with other variants of three-version and two-version architectures. Through the numerical analysis of the proposed models, we find that i) the reliability of TMTI architecture is higher than other three-version architectures, but interestingly ii) it is generally lower than the reliability of double model with double input system (DMDI). 
Furthermore, we also find that a larger variance of model diversities negatively impacts the TMTI reliability, while a larger variance of input diversity has opposed impacts.\",\"PeriodicalId\":368903,\"journal\":{\"name\":\"2022 IEEE Conference on Dependable and Secure Computing (DSC)\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Conference on Dependable and Secure Computing (DSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DSC54232.2022.9888825\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Conference on Dependable and Secure Computing (DSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSC54232.2022.9888825","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Reliability Models and Analysis for Triple-model with Triple-input Machine Learning Systems
Machine learning (ML) models have been widely applied to real-world systems. However, the outputs of ML models are generally uncertain and sensitive to real input data, which poses a major challenge to designing highly reliable ML-based software systems. Our study aims to improve ML system reliability through a software architecture approach inspired by N-version programming. The N-version ML architectures considered in our study combine multiple input data sets with multiple versions of ML models and determine the final system output by consensus. In this paper, we focus on three-version ML architectures and propose reliability models for analyzing system reliability using diversity metrics for ML models and input data sets. The proposed models allow us to compare the reliability of a triple-model with triple-input (TMTI) architecture against other variants of three-version and two-version architectures. Through numerical analysis of the proposed models, we find that i) the reliability of the TMTI architecture is higher than that of the other three-version architectures, but interestingly ii) it is generally lower than the reliability of the double-model with double-input (DMDI) architecture. Furthermore, we also find that a larger variance in model diversity negatively impacts TMTI reliability, while a larger variance in input diversity has the opposite impact.
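The consensus mechanism at the core of the TMTI architecture can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes three independently built classifier versions, each applied to a differently prepared variant of the same raw input, with the system output decided by 2-out-of-3 agreement. The model and threshold values below are hypothetical stand-ins.

```python
from collections import Counter

def consensus_output(models, inputs):
    """Run each model version on its paired input variant and return
    the majority output, or None when no 2-out-of-3 majority exists.

    Sketch of a triple-model/triple-input (TMTI) system: diversity
    comes both from the model versions and from the input variants.
    """
    outputs = [model(x) for model, x in zip(models, inputs)]
    label, count = Counter(outputs).most_common(1)[0]
    return label if count >= 2 else None

# Hypothetical stand-ins for three diverse classifier versions
# (e.g. trained with different seeds or architectures).
model_a = lambda x: "cat" if x > 0.4 else "dog"
model_b = lambda x: "cat" if x > 0.5 else "dog"
model_c = lambda x: "cat" if x > 0.6 else "dog"

# Three input variants, e.g. differently preprocessed copies
# of the same raw sample.
variants = [0.55, 0.55, 0.55]

# Models disagree (cat, cat, dog), but consensus still yields "cat".
print(consensus_output([model_a, model_b, model_c], variants))
```

With binary outputs a 2-out-of-3 majority always exists; the `None` branch matters once the label space is larger, where all three versions can disagree and the system must signal an undecided output.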