{"title":"Source-Aware Partitioning for Robust Cross-Validation","authors":"Ozsel Kilinc, Ismail Uysal","doi":"10.1109/ICMLA.2015.216","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.216","url":null,"abstract":"One of the most critical components of engineering a machine learning algorithm for a live application is robust performance assessment prior to its implementation. Cross-validation is used to forecast a specific algorithm's classification or prediction accuracy on new input data given a finite dataset for training and testing the algorithm. Two most well known cross-validation techniques, random subsampling (RSS) and K-fold, are used to generalize the assessment results of machine learning algorithms in a non-exhaustive random manner. In this work we first show that for an inertia based activity recognition problem where data is collected from different users of a wrist-worn wireless accelerometer, random partitioning of the data, regardless of cross-validation technique, results in statistically similar average accuracies for a standard feed-forward neural network classifier. We propose a novel source-aware partitioning technique where samples from specific users are completely left out of the training/validation sets in rotation. The average error for the proposed cross-validation method is significantly higher with lower standard variation, which is a major indicator of cross-validation robustness. 
Approximately 30% increase in average error rate implies that source-aware cross validation could be a better indication of live algorithm performance where test data statistics would be significantly different than training data due to source (or user)-sensitive nature of process data.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134111703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
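A minimal sketch of the proposed source-aware partitioning (function and variable names are illustrative, not from the paper): each fold holds out all samples from one source (user) in rotation, so no user contributes to both the training and test sides of a fold.

```python
# Source-aware partitioning: rotate one source (user) out at a time,
# unlike random subsampling or K-fold, which mix users across splits.
def source_aware_folds(sources):
    """Yield (train_idx, test_idx) pairs, one fold per distinct source."""
    for held_out in sorted(set(sources)):
        test = [i for i, s in enumerate(sources) if s == held_out]
        train = [i for i, s in enumerate(sources) if s != held_out]
        yield train, test

# Example: six samples collected from three users
sources = ["u1", "u1", "u2", "u2", "u3", "u3"]
folds = list(source_aware_folds(sources))
# → three folds; in each, one user's samples are entirely held out
```

Because a held-out user never appears in training, each fold's test statistics differ from its training statistics by construction, which is exactly the live-deployment condition the paper argues random partitioning fails to simulate.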
{"title":"Restricted Boltzmann Machine for Nonlinear System Modeling","authors":"E. D. L. Rosa, Wen Yu","doi":"10.1109/ICMLA.2015.24","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.24","url":null,"abstract":"In this paper, we use a deep learning method, restricted Boltzmann machine, for nonlinear system identification. The neural model has deep architecture and is generated by a random search method. The initial weights of this deep neural model are obtained from the restricted Boltzmann machines. To identify nonlinear systems, we propose special unsupervised learning methods with input data. The normal supervised learning is used to train the weights with the output data. The modified algorithm is validated by modeling two benchmark systems.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134393089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resampling-Based Variable Selection with Lasso for p >> n and Partially Linear Models","authors":"Mihaela A. Mares, Yike Guo","doi":"10.1109/ICMLA.2015.134","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.134","url":null,"abstract":"The linear model of the regression function is a widely used and perhaps, in most cases, highly unrealistic simplifying assumption, when proposing consistent variable selection methods for large and highly-dimensional datasets. In this paper, we study what happens from theoretical point of view, when a variable selection method assumes a linear regression function and the underlying ground-truth model is composed of a linear and a non-linear term, that is at most partially linear. We demonstrate consistency of the Lasso method when the model is partially linear. However, we note that the algorithm tends to increase even more the number of selected false positives on partially linear models when given few training samples. That is usually because the values of small groups of samples happen to explain variation coming from the non-linear part of the response function and the noise, using a linear combination of wrong predictors. We demonstrate theoretically that false positives are likely to be selected by the Lasso method due to a small proportion of samples, which happen to explain some variation in the response variable. We show that this property implies that if we run the Lasso on several slightly smaller size data replications, sampled without replacement, and intersect the results, we are likely to reduce the number of false positives without losing already selected true positives. 
We propose a novel consistent variable selection algorithm based on this property and we show it can outperform other variable selection methods on synthetic datasets of linear and partially linear models and datasets from the UCI machine learning repository.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133716553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
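The resample-and-intersect idea can be sketched generically. Here `toy_select` is a hypothetical stand-in for fitting the Lasso on a subsample and returning its support; it always finds the true support {0, 1} plus a subsample-dependent false positive, which the intersection tends to eliminate.

```python
import random

def stable_selection(rows, select, n_reps=5, frac=0.9, seed=0):
    """Run `select` on subsamples drawn without replacement; intersect supports."""
    rng = random.Random(seed)
    m = int(frac * len(rows))
    kept = None
    for _ in range(n_reps):
        sub = rng.sample(range(len(rows)), m)      # sample without replacement
        chosen = select([rows[i] for i in sub])
        kept = chosen if kept is None else kept & chosen
    return kept

def toy_select(sub_rows):
    # Pretend-Lasso: true support {0, 1} plus a noise variable that depends
    # on which samples were drawn, mimicking a spurious false positive.
    noise = 2 + int(sum(r[0] for r in sub_rows)) % 5
    return {0, 1, noise}

rows = [[i] for i in range(20)]
kept = stable_selection(rows, toy_select)
# kept always contains the true support {0, 1}; subsample-dependent
# false positives survive only if selected in every replication
```

The key property from the paper is visible in the sketch: true positives are selected on every replication and survive the intersection, while false positives driven by small groups of samples change across subsamples and tend to be filtered out.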
{"title":"Acoustic Features for Recognizing Musical Artist Influence","authors":"Brandon G. Morton, Youngmoo E. Kim","doi":"10.1109/ICMLA.2015.136","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.136","url":null,"abstract":"Musicologists have been interested in the topic of influence between composers for years and have developed methods and heuristics for recognizing influence in classical music. While these methods work well for music where the score is the primary source of information, this type of analysis is not well suited for modern popular music where the audio recording itself is arguably the primary representation. This paper presents two audio content-based systems for influence recognition: a system using a spectral representation (Constant-q transform) and support vector machines and another system that obtains features by using a deep belief network and then logistic regression for classification. The system using the spectral representation provides a baseline for future comparisons and evidence to support the idea that influence recognition can be performed using information extracted from the audio signal. The other system attempts to improve performance by using a deep belief network to learn features useful for influence recognition by mapping data extracted from the audio signal to labeled influence data. A dataset of about 77,000 30-second audio clips, consisting of retail previews of popular music tracks was gathered for this work. 
These songs were chosen from expertly-labeled influence relationship information gathered by the editors of the AllMusic guide.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115520106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of SSH Brute Force Attacks Using Aggregated Netflow Data","authors":"M. M. Najafabadi, T. Khoshgoftaar, Chad L. Calvert, Clifford Kemp","doi":"10.1109/ICMLA.2015.20","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.20","url":null,"abstract":"The SSH Brute force attack is one of the most prevalent attacks in computer networks. These attacks aim to gain ineligible access to users' accounts by trying plenty of different password combinations. The detection of this type of attack at the network level can overcome the scalability issue of host-based detection methods. In this paper, we provide a machine learning approach for the detection of SSH brute force attacks at the network level. Since extracting discriminative features for any machine learning task is a fundamental step, we explain the process of extracting discriminative features for the detection of brute force attacks. We incorporate domain knowledge about SSH brute force attacks as well as the analysis of a representative collection of the data to define the features. We collected real SSH traffic from a campus network. We also generated some failed login data that a legitimate user who has forgotten his/her password can produce as normal traffic that can be similar to the SSH brute force attack traffic. Our inspection on the collected brute force Netflow data and the manually produced SSH failed login data showed that the Netflow features are not discriminative enough to discern brute force traffic from the failed login traffic produced by a legitimate user. We introduced an aggregation of Netflows to extract the proper features for building machine learning models. 
Our results show that the models built upon these features provide excellent performances for the detection of brute force attacks.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115941569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
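The aggregation idea can be sketched as grouping Netflow records that share a source, destination, and time window, then computing group-level features. A single flow of a brute force attempt resembles an ordinary failed login, but the aggregate exposes the high connection count and uniform flow sizes. Field names and the window length here are illustrative, not the paper's actual feature set.

```python
from collections import defaultdict

def aggregate_flows(flows, window=60):
    """Group flows by (src, dst, time bucket) and derive per-group features."""
    groups = defaultdict(list)
    for f in flows:
        key = (f["src"], f["dst"], f["ts"] // window)
        groups[key].append(f)
    feats = []
    for key, fs in groups.items():
        sizes = [f["bytes"] for f in fs]
        feats.append({
            "key": key,
            "n_flows": len(fs),                    # connection count in window
            "mean_bytes": sum(sizes) / len(sizes), # average flow size
        })
    return feats

# Toy trace: ten near-identical SSH connections within one minute
flows = [{"src": "10.0.0.5", "dst": "10.0.0.9", "ts": t, "bytes": 3000}
         for t in range(0, 50, 5)]
feats = aggregate_flows(flows)
# → one aggregated record with n_flows == 10 and mean_bytes == 3000
```

A classifier trained on such aggregate features can separate this burst pattern from the handful of failed logins a forgetful legitimate user produces.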
{"title":"A Non-parametric Hidden Markov Clustering Model with Applications to Time Varying User Activity Analysis","authors":"Wutao Wei, Chuanhai Liu, M. Zhu, S. Matei","doi":"10.1109/ICMLA.2015.200","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.200","url":null,"abstract":"Activity data of individual users on social media are easily accessible in this big data era. However, proper modeling strategies for user profiles have not been well developed in the literature. Existing methods or models usually have two limitations. The first limitation is that most methods target the population rather than individual users, and the second is that they cannot model non-stationary time-varying patterns. Different users in general demonstrate different activity modes on social media. Therefore, one population model may fail to characterize activities of individual users. Furthermore, online social media are dynamic and ever evolving, so are users' activities. Dynamic models are needed to properly model users' activities. In this paper, we introduce a non-parametric hidden Markov model to characterize the time-varying activities of social media users. An EM algorithm has been developed to estimate the parameters of the proposed model. 
In addition, based on the proposed model, we develop a clustering method to group users with similar activity patterns.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116404905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning from the Crowd with Neural Network","authors":"Jingjing Li, V. Sheng, Zhenyu Shu, Yanxia Cheng, Yuqin Jin, Yuan-feng Yan","doi":"10.1109/ICMLA.2015.14","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.14","url":null,"abstract":"In general, the first step for supervised learning from crowdsourced data is integration. To obtain training data as traditional machine learning, the ground truth for each example in the crowdsourcing dataset must be integrated with consensus algorithms. However, some information and correlations among labels in the crowdsourcing dataset have discarded after integration. In order to study whether the information and correlations are useful for learning, we proposed three types of neural networks. Experimental results show that i) all the three types of neural networks have abilities to predict labels for future unseen examples, ii) when labelers have lower qualities, the information and correlations in crowdsourcing datasets, which are discarded by integration, does improve the performance of neural networks significantly, iii) when labelers have higher label qualities, the information and correlations have little impact on improving accuracy of neural networks.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116806923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Application of Classification and Class Decomposition to Use Case Point Estimation Method","authors":"Mohammad Azzeh, A. B. Nassif, Shadi Banitaan","doi":"10.1109/ICMLA.2015.105","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.105","url":null,"abstract":"Use Case Points (UCP) estimation method describes the process of computing the software project size and productivity from use case diagram elements. These metrics are then used to predict the project effort at early stage of software development. The main challenges with previous models are that they were constructed based on a very limited number of observations, and using limited productivity ratios. This paper presents a new approach to predict productivity from UCP environmental factors by applying classification with decomposition technique. A class decomposition provides a number of advantages to supervised learning algorithms through segmenting classes into more homogenous classes, and therefore, increase their diversity. The proposed model is constructed and validated over two datasets that have relatively sufficient number of observations. The accuracy results are promising and have potential to increase accuracy of early effort estimation.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128357967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Topic Labeling Using Ontology-Based Topic Models","authors":"M. Allahyari, K. Kochut","doi":"10.1109/ICMLA.2015.88","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.88","url":null,"abstract":"Topic models, which frequently represent topics as multinomial distributions over words, have been extensively used for discovering latent topics in text corpora. Topic labeling, which aims to assign meaningful labels for discovered topics, has recently gained significant attention. In this paper, we argue that the quality of topic labeling can be improved by considering ontology concepts rather than words alone, in contrast to previous works in this area, which usually represent topics via groups of words selected from topics. We have created: (1) a topic model that integrates ontological concepts with topic models in a single framework, where each topic and each concept are represented as a multinomial distribution over concepts and over words, respectively, and (2) a topic labeling method based on the ontological meaning of the concepts included in the discovered topics. In selecting the best topic labels, we rely on the semantic relatedness of the concepts and their ontological classifications. 
The results of our experiments conducted on two different data sets show that introducing concepts as additional, richer features between topics and words and describing topics in terms of concepts offers an effective method for generating meaningful labels for the discovered topics.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128657880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Vulnerable Software Components through N-Gram Analysis and Statistical Feature Selection","authors":"Yulei Pang, Xiaozhen Xue, A. Namin","doi":"10.1109/ICMLA.2015.99","DOIUrl":"https://doi.org/10.1109/ICMLA.2015.99","url":null,"abstract":"Vulnerabilities need to be detected and removed from software. Although previous studies demonstrated the usefulness of employing prediction techniques in deciding about vulnerabilities of software components, the accuracy and improvement of effectiveness of these prediction techniques is still a grand challenging research question. This paper proposes a hybrid technique based on combining N-gram analysis and feature selection algorithms for predicting vulnerable software components where features are defined as continuous sequences of token in source code files, i.e., Java class file. Machine learning-based feature selection algorithms are then employed to reduce the feature and search space. We evaluated the proposed technique based on some Java Android applications, and the results demonstrated that the proposed technique could predict vulnerable classes, i.e., software components, with high precision, accuracy and recall.","PeriodicalId":288427,"journal":{"name":"2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129837260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}