{"title":"Full interpretable machine learning in 2D with inline coordinates","authors":"B. Kovalerchuk, Hoang Phan","doi":"10.1109/IV53921.2021.00038","DOIUrl":"https://doi.org/10.1109/IV53921.2021.00038","url":null,"abstract":"This paper proposed a new methodology for machine learning in 2-dimensional space (2-D ML) in inline coordinates. It is a full machine learning approach that does not require to deal with n-dimensional data in n-dimensional space. It allows discovering n-D patterns in 2-D space without loss of n-D information using graph representations of n-D data in 2-D. Specifically, it can be done with the inline based coordinates in different modifications, including static and dynamic ones. The classification and regression algorithms based on these inline coordinates were introduced. A successful case study based on a benchmark data demonstrated the feasibility of the approach. This approach helps to consolidate further a whole new area of full 2-D machine learning as a promising ML methodology. It has advantages of abilities to involve actively the end-users into the discovering of models and their justification. Another advantage is providing interpretable ML models.","PeriodicalId":380260,"journal":{"name":"2021 25th International Conference Information Visualisation (IV)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133710355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Context-Sensitive Visualization of Deep Learning Natural Language Processing Models","authors":"A. Dunn, D. Inkpen, Razvan Andonie","doi":"10.1109/IV53921.2021.00035","DOIUrl":"https://doi.org/10.1109/IV53921.2021.00035","url":null,"abstract":"The introduction of Transformer neural networks has changed the landscape of Natural Language Processing (NLP) during the last years. So far, none of the visualization systems has yet managed to examine all the facets of the Transformers. This gave us the motivation of the current work. We propose a novel NLP Transformer context-sensitive visualization method that leverages existing NLP tools to find the most significant groups of tokens (words) that have the greatest effect on the output, thus preserving some context from the original text. The original contribution is a context-aware visualization method of the most influential word combinations with respect to a classifier. This context-sensitive approach leads to heatmaps that include more of the relevant information pertaining to the classification, as well as more accurately highlighting the most important words from the input text. The proposed method uses a dependency parser, a BERT model, and the leave-n-out technique. Experimental results suggest that improved visualizations increase the understanding of the model, and help design models that perform closer to the human level of understanding for these problems.","PeriodicalId":380260,"journal":{"name":"2021 25th International Conference Information Visualisation (IV)","volume":"90 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128043662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}