{"title":"Rare Class Learning","authors":"C. Aggarwal","doi":"10.1201/b17320-18","DOIUrl":"https://doi.org/10.1201/b17320-18","url":null,"abstract":"","PeriodicalId":378937,"journal":{"name":"Data Classification: Algorithms and Applications","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115402599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Support Vector Machines","authors":"Po-Wei Wang, Chih-Jen Lin","doi":"10.1201/b17320-8","DOIUrl":"https://doi.org/10.1201/b17320-8","url":null,"abstract":"The original SVM algorithm was invented by Vladimir N. Vapnik and the current standard incarnation (soft margin) was proposed by Corinna Cortes and Vapnik in 1993 and published in 1995. A support vector machine(SVM) constructs a hyperplane or set of hyperplanes in a highor infinitedimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data point of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. In this notes, we will explain the intuition and then get the primal problem, and how to translate the primal problem to dual problem. We will apply kernel trick and SMO algorithms to solve the dual problem and get the hyperplane we want to separate the dataset. Give general idea about SVM and introduce the goal of this notes, what kind of problems and knowledge will be covered by this node. In this note, one single SVM model is for two labels classification, whose label is y ∈ {−1, 1}. And the hyperplane we want to find to separate the two classes dataset is h, for which classifier, we use parameters w, b and we write our classifier as hw,b(x) = g(w x+ b) Here, g(z) = 1 if z ≥ 0, and g(z) = −1 otherwise.","PeriodicalId":378937,"journal":{"name":"Data Classification: Algorithms and Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115417767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Classification","authors":"Giorgio Maria Di Nunzio","doi":"10.1201/b17320-24","DOIUrl":"https://doi.org/10.1201/b17320-24","url":null,"abstract":"","PeriodicalId":378937,"journal":{"name":"Data Classification: Algorithms and Applications","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116654690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Introduction to Data Classification","authors":"C. Aggarwal","doi":"10.1201/b17320-2","DOIUrl":"https://doi.org/10.1201/b17320-2","url":null,"abstract":"","PeriodicalId":378937,"journal":{"name":"Data Classification: Algorithms and Applications","volume":"11 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116895494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimedia Classification","authors":"Shiyu Chang, Wei Han, Xianming Liu, N. Xu, Pooya Khorrami, Thomas S. Huang","doi":"10.1201/b17320-13","DOIUrl":"https://doi.org/10.1201/b17320-13","url":null,"abstract":"","PeriodicalId":378937,"journal":{"name":"Data Classification: Algorithms and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127891239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertain Data Classification","authors":"Reynold Cheng, Yixiang Fang, M. Renz","doi":"10.1201/b17320-17","DOIUrl":"https://doi.org/10.1201/b17320-17","url":null,"abstract":"16.","PeriodicalId":378937,"journal":{"name":"Data Classification: Algorithms and Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125402648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Instance-Based Learning: A Survey","authors":"C. Aggarwal","doi":"10.1201/b17320-7","DOIUrl":"https://doi.org/10.1201/b17320-7","url":null,"abstract":"Most classification methods are based on building a model in the training phase, and then using this model for specific test instances, during the actual classification phase. Thus, the classification process is usually a two-phase approach that is cleanly separated between processing training and test instances. As discussed in the introduction chapter of this book, these two phases are as follows: • Training Phase: In this phase, a model is constructed from the training instances. • Testing Phase: In this phase, the model is used to assign a label to an unlabeled test instance. Examples of models that are created during the first phase of training are decision trees, rule-based methods, neural networks, and support vector machines. Thus, the first phase creates pre-compiled abstractions or models for learning tasks. This is also referred to as eager learning, because the models are constructed in an eager way, without waiting for the test instance. In instance-based 157","PeriodicalId":378937,"journal":{"name":"Data Classification: Algorithms and Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127604695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}