Health Assurance: AI Model Monitoring Platform

Anirban I Ghosh, Radhika Sharma, Karan Goyal, Balakarthikeyan Rajan, S. Mani

Proceedings of the Second International Conference on AI-ML Systems, 2022-10-12. DOI: 10.1145/3564121.3564798
Abstract
Businesses increasingly rely on machine learning models to manage user experiences. It is therefore important not only to build robust, state-of-the-art models but also to continuously monitor and evaluate them. Continuous monitoring enables the AI team to ensure the right frequency of model retraining and to proactively investigate erroneous patterns and predictions before they have a wider business impact. A robust and effective monitoring system is thus needed to ensure that business and engineering teams are aware of model performance and of any data anomalies that could impact downstream model accuracy. In this paper, we present our Health Assurance model monitoring solution. The system currently serves the health monitoring needs of more than 250 models across 11 AI verticals, with an average anomaly detection precision of 60%.
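To illustrate the kind of check such a monitoring system performs, the sketch below flags anomalous readings in a model-metric time series using a simple z-score test. This is a minimal illustrative example, not the paper's actual detection method; the function name, threshold, and data are all hypothetical.

```python
# Illustrative sketch: flag anomalous model-metric readings via z-score.
# All names, thresholds, and data here are assumptions for illustration.
from statistics import mean, stdev

def detect_anomalies(metric_history, threshold=2.0):
    """Return indices of readings whose z-score exceeds the threshold."""
    mu = mean(metric_history)
    sigma = stdev(metric_history)
    if sigma == 0:
        return []  # a constant series has no outliers
    return [i for i, x in enumerate(metric_history)
            if abs(x - mu) / sigma > threshold]

# A stable accuracy series with one sudden drop at the end:
history = [0.91, 0.92, 0.90, 0.91, 0.93, 0.92, 0.55]
print(detect_anomalies(history))  # → [6]
```

In a production monitor, such per-metric checks would typically run on a schedule and feed alerts to the owning team, trading off the threshold against the precision of the alerts (the paper reports an average anomaly detection precision of 60%).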