{"title":"Tools and Visualizations for Exploring Classification Landscapes","authors":"William Powers, Lin Shi, L. Liebovitch","doi":"10.1109/CISS56502.2023.10089673","DOIUrl":null,"url":null,"abstract":"Neural networks and deep learning systems find the correct classification of input data by locating the corresponding local minima in the hyper-dimensional, classification landscape. An increasing number of adversarial examples have now shown that these networks sometimes find an unexpected and incorrect minimum and so make an incorrect classification. To understand those results requires a better understanding of the nature of these classification landscapes. Previous studies have explored the properties of the landscape of back propagation in training these networks. In our studies here, we explore the classification landscape of already trained networks. We present some novel procedures and analytical tools to study the classification land-scape and visualizations to meaningfully represent those results. We apply these methods to study the classification landscape in classic examples, including image classification in the MNIST data set and flower classification from numerical feature values in the Iris data set.","PeriodicalId":243775,"journal":{"name":"2023 57th Annual Conference on Information Sciences and Systems (CISS)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 57th Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS56502.2023.10089673","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Neural networks and deep learning systems find the correct classification of input data by locating the corresponding local minima in a hyper-dimensional classification landscape. A growing number of adversarial examples have shown that these networks sometimes settle into an unexpected, incorrect minimum and therefore produce an incorrect classification. Understanding those results requires a better understanding of the nature of these classification landscapes. Previous studies have explored the properties of the landscape traversed by backpropagation during training. In the studies here, we instead explore the classification landscape of already trained networks. We present novel procedures and analytical tools for studying the classification landscape, together with visualizations that meaningfully represent the results. We apply these methods to classic examples, including image classification on the MNIST data set and flower classification from numerical feature values on the Iris data set.
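The abstract does not specify the authors' exact procedures, but one common way to "explore the classification landscape of an already trained network" is to move through input space and record how the predicted class changes. The sketch below is a minimal illustration of that idea, not the paper's method; it assumes scikit-learn's MLPClassifier on the Iris data set and probes a one-dimensional slice of the landscape by linearly interpolating between two samples.

```python
# Minimal sketch (not the authors' procedure): probe the classification
# landscape of an already trained classifier by walking a straight line
# between two Iris samples and recording the predicted class at each step.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

# Interpolate in feature space between a setosa sample and a virginica sample.
a, b = X[0], X[-1]
alphas = np.linspace(0.0, 1.0, 21)
path = np.array([(1 - t) * a + t * b for t in alphas])
labels = clf.predict(path)

# Where the predicted class changes along the path marks a decision boundary
# crossed by this one-dimensional slice through the landscape.
for t, lab in zip(alphas, labels):
    print(f"alpha={t:.2f} -> class {lab}")
```

Repeating such probes along many directions, or over a two-dimensional grid, is one way to build the kind of landscape visualizations the abstract describes.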