Tobias Baur, Alexander Heimerl, F. Lingenfelser, E. André
"I see what you did there: Understanding when to trust a ML model with NOVA"
2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), September 2019
DOI: 10.1109/ACIIW.2019.8925214
Abstract
In this demo paper we present NOVA, a machine learning and explanation interface that focuses on the automated analysis of social interactions. NOVA combines Cooperative Machine Learning (CML) and explainable AI (XAI) methods to reduce manual labelling effort while simultaneously building an intuitive understanding of the learning process of a classification system. To this end, NOVA features a semi-automated labelling process in which users receive immediate visual feedback on the predictions, giving insight into the strengths and weaknesses of the underlying classification system. Following an interactive and exploratory workflow, the performance of the model can be improved by manually revising the predictions.
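The cooperative workflow the abstract describes — predict on unlabelled data, surface low-confidence segments for human review, fold the corrections back in, and retrain — can be sketched as a simple loop. The code below is an illustrative sketch only, not NOVA's actual API: the data is synthetic, scikit-learn's `LogisticRegression` stands in for NOVA's classifier, and ground-truth labels stand in for the annotator's manual revisions.

```python
# Hypothetical sketch of a cooperative machine-learning (CML) loop;
# all names and thresholds here are illustrative, not from NOVA.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for feature vectors extracted from a recording.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Begin with a small manually labelled seed set; the rest is unlabelled.
labelled = np.arange(20)
unlabelled = np.arange(20, 200)

model = LogisticRegression().fit(X[labelled], y[labelled])

for _ in range(3):
    # Predict on the unlabelled pool and rank segments by confidence,
    # mimicking the visual feedback on uncertain predictions.
    proba = model.predict_proba(X[unlabelled])
    confidence = proba.max(axis=1)
    uncertain = unlabelled[np.argsort(confidence)[:10]]

    # In NOVA a human would revise these predictions; here the ground
    # truth serves as a stand-in for the annotator's corrections.
    labelled = np.concatenate([labelled, uncertain])
    unlabelled = np.setdiff1d(unlabelled, uncertain)
    model = LogisticRegression().fit(X[labelled], y[labelled])

accuracy = model.score(X, y)
```

Each iteration spends annotation effort only where the model is least certain, which is the core efficiency argument behind semi-automated labelling.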