Ashley Ramsey, Yonas Kassa, Akshay Kale, Robin Gandhi, Brian Ricks
Title: Toward Interactive Visualizations for Explaining Machine Learning Models
DOI: 10.59297/enji5258
Published in: Proceedings of the 20th International Conference on Information Systems for Crisis Response and Management
Publication date: 2023-05-28
Citations: 0
Abstract
Researchers and end users generally demand more trust and transparency from machine learning (ML) models due to the complexity of their learned rule spaces. The field of eXplainable Artificial Intelligence (XAI) seeks to rectify this problem by developing methods of explaining ML models and the attributes they use in making inferences. In the area of structural health monitoring of bridges, machine learning can offer insight into the relation between a bridge’s conditions and its environment over time. In this paper, we describe three visualization techniques that explain decision tree (DT) ML models that identify which features of a bridge make it more likely to receive repairs. Each of these visualizations enables interpretation, exploration, and clarification of complex DT models. We outline the development of these visualizations, along with their validation by experts in AI and in bridge design and engineering. This work has inherent benefits in the field of XAI as a direction for future research and as a tool for interactive visual explanation of ML models.
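The abstract describes explaining decision tree models trained on bridge attributes. The paper's actual dataset and features are not given here, so the following is a minimal sketch under assumed, hypothetical features (age, average daily traffic, deck condition rating) showing the general technique: fitting a DT classifier on bridge data and extracting the learned rules and feature importances that such visualizations would make interpretable.

```python
# Hedged sketch only: features, data, and labels are hypothetical,
# not taken from the paper. Illustrates fitting a decision tree on
# bridge-like attributes and inspecting the rules it learns.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 200
# Hypothetical bridge features: age (years), average daily traffic,
# and deck condition rating on a 0-9 scale.
X = np.column_stack([
    rng.integers(1, 100, n),        # age
    rng.integers(100, 50_000, n),   # traffic
    rng.integers(0, 10, n),         # deck condition rating
])
# Toy label: older bridges with poor deck ratings get repaired.
y = ((X[:, 0] > 50) & (X[:, 2] < 4)).astype(int)

dt = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

feature_names = ["age", "traffic", "deck_rating"]
# export_text renders the learned rule space as nested if/else
# thresholds -- the raw material an interactive explanation
# visualization would present to domain experts.
print(export_text(dt, feature_names=feature_names))
print(dict(zip(feature_names, dt.feature_importances_.round(2))))
```

In a real application, the tree's split thresholds and importance scores would be drawn from inspection records rather than synthetic data; the visualization layer then exposes these rules interactively instead of as printed text.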