Improving object detection accuracy with region and regression based deep CNNs
Liang Qu, Shengke Wang, Na Yang, Long Chen, Lu Liu, Xiaoyan Zhang, Feng Gao, Junyu Dong
2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), December 2017. DOI: 10.1109/SPAC.2017.8304297

Abstract: Object detection has improved greatly with convolutional neural networks (CNNs), high-capacity visual models that yield hierarchies of discriminative features. CNN-based object detection generally falls into two streams: region-based detection and regression-based detection. In this paper, we aim to further advance detection performance by properly exploiting the complementary results of these two streams. By analyzing the errors of several previous state-of-the-art methods from each stream, we find that their detections are complementary in object recognition and localization: region-based methods achieve high recall but struggle with localization, while regression-based methods make fewer localization errors by iteratively regressing boxes toward the target location. Driven by these observations, we propose two fusion paradigms for combining the results of the two streams. The first is direct fusion, which pools the detections of both streams and applies non-maximum suppression (NMS) and a voting operation to make full use of them. Since direct fusion may compromise the original detections, we also propose a modified voting operation that only refines box coordinates, leaving the original detections otherwise untouched, and further boosts performance with an adding operation. Extensive experiments show that both ensemble paradigms improve on state-of-the-art results on the Pascal VOC dataset.

{"title":"Self-paced learning for multi-modal fusion for alzheimer's disease diagnosis","authors":"Ning Yuan, D. Guan, Qi Zhu, Weiwei Yuan","doi":"10.1109/SPAC.2017.8304253","DOIUrl":"https://doi.org/10.1109/SPAC.2017.8304253","url":null,"abstract":"Alzheimer's disease (AD) is a sort of nervous system disease, and it may cause amnesia and executive dysfunction etc. AD seriously reduces the quality of people's life, so it is very important to improve the diagnosis accuracy of AD in its prodromal stage, mild cognitive impairment (MCI). In recent years, multi-modal methods had been proven to be effective in prediction of AD and MCI by utilizing the complementary information across different modalities in AD data. In this paper, we propose self-paced sample weighting based low-rank representation (SPLRR) to explore the latent correlation across different modalities. By imposing rank minimization on different modalities regression coefficients, we can capture the intrinsic structure among modalities. Meanwhile, we introduce self-paced learning to allot the corresponding weight to samples based on the contribution of each sample to the label in the current modality. Experiments on the Alzheimer's disease Neuroimaging Initiative (ADNI) database show that the SPLRR model obtains the better classification performance than the state-of-the-art methods.","PeriodicalId":161647,"journal":{"name":"2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125555916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The analysis on college students' physical fitness testing data — two cases study
Yi Mou, Long Zhou, Weizhen Chen, Xu Zhao, Yang Liu, Chao Yang
2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), December 2017. DOI: 10.1109/SPAC.2017.8304285

Abstract: The college student physical fitness test is an important means of physical fitness evaluation. The test comprises body mass index (BMI), lung capacity, the 50 m run, the 1000 m (male)/800 m (female) run, standing long jump, sit-and-reach, and pull-ups (male)/sit-ups (female). The final result is a weighted sum of the seven items; according to the national standard of physical fitness for students, the weights are 15%, 15%, 20%, 10%, 20%, 10%, and 10%, respectively. This can be regarded as a dimensionality reduction process that reduces the original data to one dimension. With fixed weights, the results neglect differences among students in different areas, so it is important to learn the weights from the data. Learned weights not only give students a reasonable evaluation of physical ability but also reflect the characteristics of the samples. In this paper, we present a learning model for the weights of students' physical fitness tests, together with a solution algorithm. We then apply the proposed method to two data sets; the results demonstrate that the model has advantages for analyzing college student physical fitness test data.

Leveraging the trade-off between accuracy and interpretability in a hybrid intelligent system
D. Wang, Hiok Chai Quek, A. Tan, Chun-Hui Miao, G. Ng, You Zhou
2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), December 2017. DOI: 10.1109/SPAC.2017.8304250

Abstract: The Neural Fuzzy Inference System (NFIS) is a widely adopted paradigm for developing data-driven learning systems, valued for its accurate reasoning procedure and comprehensible inference rules. Although most NFISs focus primarily on accuracy, there is an ever-increasing demand for improving the interpretability of NFISs and other machine learning systems. In this paper, we illustrate how we leverage the trade-off between accuracy and interpretability in an NFIS called the Genetic Algorithm and Rough Set Incorporated Neural Fuzzy Inference System (GARSINFIS). In a nutshell, GARSINFIS self-organizes its network structure with a small set of control parameters and constraints, and its autonomously generated inference rule base aims for higher interpretability without sacrificing accuracy. We demonstrate different configuration options of GARSINFIS on well-known benchmark datasets. Its performance on both accuracy and interpretability is encouraging when compared against decision tree, Bayesian, neural, and neuro-fuzzy models.

{"title":"Fusion of probabilistic collaborative and sparse representation for robust image classification","authors":"Zhangdan Chi, Shaoning Zeng, Jianping Gou","doi":"10.1109/SPAC.2017.8304347","DOIUrl":"https://doi.org/10.1109/SPAC.2017.8304347","url":null,"abstract":"The image representation model determines the robustness of image classification. The sparse model obtained by Probabilistic Collaborative representation based Classification (ProCRC) calculates the probability that a test sample belongs to the subspace of classes, to find out which class has the most possibility. Previous studies showed that the distances obtained by different models may have some complementary in the image representation. For this motivation, we proposed a novel image classification method that fusing two distances obtained by ProCRC and conventional sparse representation based classification (SRC). Therefore, we named it ProSCRC. In the fusion, a weight factor A was introduced to balance contributions from the two distances. In order to evaluate the robustness, we conducted plenty of experiments on prevailing benchmark databases. The experimental results showed that our method has a higher accuracy in image classification than both ProCRC and SRC.","PeriodicalId":161647,"journal":{"name":"2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126361673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gradient boosting model for unbalanced quantitative mass spectra quality assessment","authors":"Long Chen, T. Zhang, Tianjun Li","doi":"10.1109/SPAC.2017.8304311","DOIUrl":"https://doi.org/10.1109/SPAC.2017.8304311","url":null,"abstract":"A method for controlling the quality of isotope labeled mass spectra is described here. In such mass spectra, the profiles of labeled (heavy) and unlabeled (light) peptide pairs provide us valuable information about the studied biological samples in different conditions. The core task of quality control in quantitative LC-MS experiment is to filter out low quality spectra or the peptides with error profiles. The most common used method for this problem is training a classifier for the spectra data to separate it into positive (high quality) and negative (low quality) ones. However, the small number of error profiles always makes the training data dominated by the positive samples, i.e., class imbalance problem. So the Syntheic minority over-sampling technique (SMOTE) is employed to handle the unbalanced data and then applied extreme gradient boosting (Xgboost) model as the classifier. We assessed the different heavy-light peptide ratio samples by the trained Xgboost classifier, and found that the SMOTE Xgboost classifier increases the reliability of peptide ratio estimations significantly.","PeriodicalId":161647,"journal":{"name":"2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126481838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Humanoid robot localization based on hybrid map","authors":"Xiandong Xu, B. Hong, Yi Guan","doi":"10.1109/SPAC.2017.8304331","DOIUrl":"https://doi.org/10.1109/SPAC.2017.8304331","url":null,"abstract":"In this paper we present a hybrid map based localization method for humanoid robot. A indoor hybrid map is created by a humanoid robot NAO with a camera and a laser range finder. A global topological map is constructed by natural landmarks, and local metrical maps are established using improved Rao-Blackwellized particle filter. In addition, we set up auxiliary semantic layer based on QR code. and a unified framework of semantic-topological-metric hybrid map is set up. An accurate localization result was obtained by combine a global localization based on topological map and local positioning using KLD-MCL, Experiments shows the method can meet the requirements of indoor localization for a humanoid robot.","PeriodicalId":161647,"journal":{"name":"2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128781036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diversified shared latent structure based localization for blind persons","authors":"Yujin Wang, Dapeng Tao, Weifeng Liu","doi":"10.1109/SPAC.2017.8304284","DOIUrl":"https://doi.org/10.1109/SPAC.2017.8304284","url":null,"abstract":"Indoor localization systems for blind person aims to help visually impaired people localize themselves in indoor environments. Most approaches employ the RGBD camera and LIDAR for accurate localization, yet these devices are not cheap and portable for blind persons. Instead, WiFi signals are quite ubiquitous in most indoor areas, like shopping mall, hospital etc. Therefore, we propose a diversified shared latent variable model that exploits the availability of WiFi for localization. More specifically, the observation spaces in our model, WiFi strength measurements and their corresponding locations, share a single and reduced dimensionality latent space. By building and incorporating a kernel based diversity prior, the learned latent variables are inclined to extract more features of the WiFi signals, such as the coverage area, and thus further enhance the accuracy of localization. The experimental results illustrate our proposed model is accurate and efficient for indoor localization issue.","PeriodicalId":161647,"journal":{"name":"2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114886506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research and implementation of parallel Lane detection algorithm based on GPU","authors":"Ying Xu, Bin Fang, X. Wu, Weibin Yang","doi":"10.1109/SPAC.2017.8304303","DOIUrl":"https://doi.org/10.1109/SPAC.2017.8304303","url":null,"abstract":"Graphic Processing Unit (GPU) with the powerful computing ability is widely used for Parallel Computing. This paper raised a parallel Lane Detection Algorithm based on GPU acceleration, which could reduce the computing time for processing large amounts of data and solve large-scale complex problems. We implemented Median filter, Differential excitation and Hough transform on compute unified device architecture (CUDA). This algorithm took the advantages of GPU in parallel computation, memory management and reasonably allocated the computational resources and the corresponding computational tasks to the host and device in the Lane Detection. In this paper, different size of the image are processed and the experiment result proved that with the amount of data increases, the GPU acceleration will get good results.","PeriodicalId":161647,"journal":{"name":"2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133667526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust principal component analysis via joint ℓ2,1-norms minimization","authors":"Shuangyan Yi, Zhenyu He, Wei-Guo Yang","doi":"10.1109/SPAC.2017.8304243","DOIUrl":"https://doi.org/10.1109/SPAC.2017.8304243","url":null,"abstract":"Principal Component Analysis (PCA) is the most widely used unsupervised subspace learning method, and lots of its variants have been developed. With so many proposed PCA-like methods, it is still not clear that which features are better or worse for principal components, especially when the data suffers from outliers. To this end, we propose Robust Principal Component Analysis via joint ℓ2,1-norms minimization, which provides new insights into two crucial issues of PCA: feature selection and robustness to outliers. Unlike other PCA-like methods, the proposed method is able to select effective features for reconstruction by using the ℓ2,1-norm regularization term. More specific, we first use a ℓ2,1-norm based transformation matrix to select effective features that can effectively characterize key components (e.g., the eyes and the nose in a face image), and then use an orthogonal transformation matrix to recover the original data from the selected data representation. In this way, the key components can be well recovered by using the effective features selected by a learned transformation matrix. On the other hand, we also impose ℓ2,1-norm on a loss term to select clean samples to recover its same class samples but with outliers. A simple yet effective optimization algorithm is proposed to solve the resulting optimization problem. Experiments on six datasets demonstrate the effectiveness of the proposed method.","PeriodicalId":161647,"journal":{"name":"2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132585302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}